
Teradata Database

Resource Usage Macros and Tables


Release 14.10
B035-1099-112A
August 2014

The product or products described in this book are licensed products of Teradata Corporation or its affiliates.
Teradata, Active Data Warehousing, Active Enterprise Intelligence, Applications-Within, Aprimo Marketing Studio, Aster, BYNET, Claraview,
DecisionCast, Gridscale, MyCommerce, QueryGrid, SQL-MapReduce, Teradata Decision Experts, "Teradata Labs" logo, Teradata ServiceConnect,
Teradata Source Experts, WebAnalyst, and Xkoto are trademarks or registered trademarks of Teradata Corporation or its affiliates in the United
States and other countries.
Adaptec and SCSISelect are trademarks or registered trademarks of Adaptec, Inc.
AMD Opteron and Opteron are trademarks of Advanced Micro Devices, Inc.
Apache, Apache Hadoop, Hadoop, and the yellow elephant logo are either registered trademarks or trademarks of the Apache Software Foundation
in the United States and/or other countries.
Apple, Mac, and OS X all are registered trademarks of Apple Inc.
Axeda is a registered trademark of Axeda Corporation. Axeda Agents, Axeda Applications, Axeda Policy Manager, Axeda Enterprise, Axeda Access,
Axeda Software Management, Axeda Service, Axeda ServiceLink, and Firewall-Friendly are trademarks and Maximum Results and Maximum
Support are servicemarks of Axeda Corporation.
Data Domain, EMC, PowerPath, SRDF, and Symmetrix are registered trademarks of EMC Corporation.
GoldenGate is a trademark of Oracle.
Hewlett-Packard and HP are registered trademarks of Hewlett-Packard Company.
Hortonworks, the Hortonworks logo and other Hortonworks trademarks are trademarks of Hortonworks Inc. in the United States and other
countries.
Intel, Pentium, and XEON are registered trademarks of Intel Corporation.
IBM, CICS, RACF, Tivoli, and z/OS are registered trademarks of International Business Machines Corporation.
Linux is a registered trademark of Linus Torvalds.
LSI is a registered trademark of LSI Corporation.
Microsoft, Active Directory, Windows, Windows NT, and Windows Server are registered trademarks of Microsoft Corporation in the United States
and other countries.
NetVault is a trademark or registered trademark of Dell Inc. in the United States and/or other countries.
Novell and SUSE are registered trademarks of Novell, Inc., in the United States and other countries.
Oracle, Java, and Solaris are registered trademarks of Oracle and/or its affiliates.
QLogic and SANbox are trademarks or registered trademarks of QLogic Corporation.
Quantum and the Quantum logo are trademarks of Quantum Corporation, registered in the U.S.A. and other countries.
Red Hat is a trademark of Red Hat, Inc., registered in the U.S. and other countries. Used under license.
SAP is the trademark or registered trademark of SAP AG in Germany and in several other countries.
SAS and SAS/C are trademarks or registered trademarks of SAS Institute Inc.
SPARC is a registered trademark of SPARC International, Inc.
Symantec, NetBackup, and VERITAS are trademarks or registered trademarks of Symantec Corporation or its affiliates in the United States and
other countries.
Unicode is a registered trademark of Unicode, Inc. in the United States and other countries.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Other product and company names mentioned herein may be the trademarks of their respective owners.

THE INFORMATION CONTAINED IN THIS DOCUMENT IS PROVIDED ON AN "AS-IS" BASIS, WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESS OR IMPLIED, INCLUDING THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, OR
NON-INFRINGEMENT. SOME JURISDICTIONS DO NOT ALLOW THE EXCLUSION OF IMPLIED WARRANTIES, SO THE ABOVE EXCLUSION
MAY NOT APPLY TO YOU. IN NO EVENT WILL TERADATA CORPORATION BE LIABLE FOR ANY INDIRECT, DIRECT, SPECIAL, INCIDENTAL,
OR CONSEQUENTIAL DAMAGES, INCLUDING LOST PROFITS OR LOST SAVINGS, EVEN IF EXPRESSLY ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
The information contained in this document may contain references or cross-references to features, functions, products, or services that are not
announced or available in your country. Such references do not imply that Teradata Corporation intends to announce such features, functions,
products, or services in your country. Please consult your local Teradata Corporation representative for those features, functions, products, or
services available in your country.
Information contained in this document may contain technical inaccuracies or typographical errors. Information may be changed or updated
without notice. Teradata Corporation may also make improvements or changes in the products or services described in this information at any time
without notice.
To maintain the quality of our products and services, we would like your comments on the accuracy, clarity, organization, and value of this document.
Please email: teradata-books@lists.teradata.com.
Any comments or materials (collectively referred to as "Feedback") sent to Teradata Corporation will be deemed non-confidential. Teradata
Corporation will have no obligation of any kind with respect to Feedback and will be free to use, reproduce, disclose, exhibit, display, transform,
create derivative works of, and distribute the Feedback and derivative works thereof without limitation on a royalty-free basis. Further, Teradata
Corporation will be free to use any ideas, concepts, know-how, or techniques contained in such Feedback for any purpose whatsoever, including
developing, manufacturing, or marketing products or services incorporating Feedback.

Copyright 2000-2014 by Teradata. All Rights Reserved.

Preface
Purpose
This is a reference book for Teradata Database resource usage macros and tables. The macros can be used to report resource usage data and, when enabled, the tables can be used to:

• Observe the balance of disk and CPU usage
• Obtain pdisk usage information
• Check for bottlenecks
• Provide an overall history of the system operation
• Monitor the utilization of AMP Worker Tasks (AWTs)
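
The resource usage macros are executed from an ordinary SQL session once logging is enabled. The following sketch is illustrative only: the macro name ResNode is documented in this book, but the parameter values shown (a date range and time-of-day window) and the DBC qualification are assumptions; the exact parameter list, order, and defaults for each macro are defined in Chapter 15, "Resource Usage Macros."

```sql
-- Illustrative sketch only: request a node-level resource usage report.
-- Parameter values and ordering are assumptions; consult Chapter 15,
-- "Resource Usage Macros," for the actual input format of each macro.
EXEC DBC.ResNode ('2014-08-01', '2014-08-01', '08:00:00', '17:00:00');
```

In general, each macro accepts a reporting window and returns formatted rows derived from the corresponding resource usage table, so the same enabled logging data can feed several different reports.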

Related Documentation
For information on:

• Collecting RSS data, see Application Programming Reference.
• Workload management (WM) capacity on demand (COD) and setting limits on resource usage, refer to the following:
  • Carrie Ballinger, Workload Management Capacity on Demand: Teradata Database 14.10 and Linux SLES 11, Teradata Database Orange Book 541-0010245
  • Teradata Viewpoint User Guide

Audience
This book is intended to be used by administrators, programmers, and other Teradata
technical personnel responsible for administering or managing Teradata Database.

Supported Software Releases and Operating Systems
This book supports Teradata Database 14.10.
Teradata Database 14.10 is supported on:

• SUSE Linux Enterprise Server (SLES) 10
• SLES 11

Teradata Database client applications support other operating systems.

Changes to This Book

Teradata Database 14.10.03 (August 2014)
• Added a new feature: Workload Management Capacity on Demand. This feature is available starting with Teradata Database 14.10.03.
• Modified the following field descriptions:
  • CODFactor
  • WM CPU COD, WM I/O COD, and PM I/O COD Spare columns
  • LdvReadRespMax
  • LdvWriteRespMax
  • NetMrgHeapFails
  • NetMrgHeapRequests
• Added the ProcBlksTime column to the ResUsageSps Table chapter.
• Replaced instances of CASE statement with CASE expression.

Teradata Database 14.10 (September 2013)

Teradata Database 14.10 (May 2013)
• Updated the following:
  • The Data Block Prefetch columns
  • The Segment Acquired columns
  • The VprType field description in the ResUsageSps table chapter
• Added a note to the AGid, RelWgt, and IOBlks columns about how they are only valid on SLES 10 or earlier systems.
• Added a note to the VPid, WDid, WaitIO, CPURunDelay, IOSubmitted, IOCompleted, IOCriticalSubmitted, IOSubmittedKB, IOCompletedKB, IOCriticalSubmittedKB, DecayLevel1IO, DecayLevel2IO, DecayLevel1CPU, DecayLevel2CPU, TacticalExceptionIO, and TacticalExceptionCPU columns about how they are only valid on SLES 11 or later systems.
• Replaced references to ResGeneralInfoView with ResSpmaView.
• Added the following:
  • Spare02-11 fields to the ResUsageSpma table
  • Spare00-04 fields to the ResUsageSldv table
  • Spare00-01 and Spare03-14 fields to the ResUsageSps table
  • Spare00-01 fields to the ResUsageSpdsk and ResUsageSvdsk tables
  • Spare00-07 fields to the ResUsageSvpr table
  • Spare00 field to the ResUsageSawt, ResUsageScpu, ResUsageShst, ResUsageImpa, and ResUsageIvpr tables
  • SpareInt field to the ResUsageSpma, ResUsageSldv, ResUsageSpdsk, ResUsageSps, and ResUsageSvdsk tables
  • An entry and description for VH cache to the Glossary
• Updated the following views: ResSpmaView, ResSldvView, ResSpdskView, ResSpsView, ResSvdskView, ResSvprView, and ResIvprView.
• Updated the following column descriptions: CODFactor, LdvReadRespMax, LdvWriteRespMax, and AwtInuseMax.
• Noted that the message delivery times columns for the ResUsageIpma and ResUsageIvpr tables are not currently valid.
• Replaced instances of LAN-connected with TCP/IP network-connected.
Additional Information

www.info.teradata.com/
Use the Teradata Information Products Publishing Library site to:
• View or download a manual:
  1 Under Online Publications, select General Search.
  2 Enter your search criteria and click Search.
• Download a documentation CD-ROM:
  1 Under Online Publications, select General Search.
  2 In the Title or Keyword field, enter CD-ROM, and click Search.

www.teradata.com
The Teradata home page provides links to numerous sources of information about Teradata. Links include:
• Executive reports, white papers, case studies of customer experiences with Teradata, and thought leadership
• Technical information, solutions, and expert advice
• Press releases, mentions, and media resources

www.teradata.com/t/TEN/
Teradata Customer Education delivers training that builds skills and capabilities for our customers, enabling them to maximize their Teradata investment.

tays.teradata.com/
Use Teradata @ Your Service to access Orange Books, technical alerts, and knowledge repositories, view and join forums, and download software patches.

developer.teradata.com/
Teradata Developer Exchange provides articles on using Teradata products, technical discussion forums, and code downloads.

To maintain the quality of our products and services, we would like your comments on the accuracy, clarity, organization, and value of this document. Please email teradata-books@lists.teradata.com.

Teradata Database Optional Features

This book may include descriptions of the following optional Teradata Database features and products:

• Teradata Columnar
• Teradata Row Level Security
• Teradata SQL-H
• Teradata Temporal
• Teradata Virtual Storage (VS)
• Unity Source Link

You may not use these features without the appropriate licenses. The fact that these features
may be included in product media or downloads, or described in documentation that you
receive, does not authorize you to use them without the appropriate licenses.
Contact your Teradata sales representative to purchase and enable optional features.


Table of Contents

Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3
Purpose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3
Audience . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3
Supported Software Releases and Operating Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3
Changes to This Book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4
Additional Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5
Teradata Database Optional Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .6

Chapter 1: Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Benefits of Resource Usage Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Overview of Resource Usage Data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Data Reporting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
Resource Usage Macros . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Application Programming Interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18

Chapter 2: Planning Your Resource Usage Data . . . . . . . . . . . . . . . 19


Resource Usage Table Settings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Tables Based on Needed Reports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Types of Resource Usage Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Logging Rate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Logging Period . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Summary Mode. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Active Row Filter Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Resource Usage Logging. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
The Cost of Logging. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Logging Cost Contributors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Operational Methods. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

Chapter 3: Resource Usage and Procedures . . . . . . . . . . . . . . . . . . . .27


Enabling RSS Logging. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .27
ctl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .27
DBW . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .27
General Macro Input Format. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .29
Parameter Use for One-Node, Multiple-Node, All-Node, and Group Macros . . . . . . . . .30
Input Format for One-Node Macros . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .30
Input Format for ByGroup Macros . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .30
Retaining Data From Prior Releases. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .31
Macro Execution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .31
Using ENABLE and DISABLE LOGON Commands. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .34
Purging Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .35

Chapter 4: Resource Usage Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .37


Physical Table Naming Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .37
Relational Primary Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .37
Resource Usage Table Rows. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .38
Occasional Event Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .38
Types of Resource Usage Table Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .38
Invalid Columns. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .41
About the Mode Column . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .42
Summary Mode in Resource Usage Tables. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .42

Chapter 5: ResUsageScpu Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .45


Housekeeping Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .45
Relational Primary Index Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .45
Miscellaneous Housekeeping Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .45
Statistics Columns. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .46
Process Scheduling CPU Utilization Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .46
Reserved Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .47
Summary Mode. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .47
Spare Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .48


Chapter 6: ResUsageSpma Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49


Housekeeping Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Relational Primary Index Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Miscellaneous Housekeeping Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Statistics Columns. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
File System Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
General Concurrency Control Database Locks Columns . . . . . . . . . . . . . . . . . . . . . . . . . 55
Host Controller Channel and TCP/IP Network Traffic Columns . . . . . . . . . . . . . . . . . . 55
Memory Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Net Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Process Scheduling Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
TASM Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Teradata VS Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
User Command Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Reserved Column . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66

Spare Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66

Chapter 7: ResUsageSawt Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71


Housekeeping Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Relational Primary Index Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Miscellaneous Housekeeping Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Statistics Columns. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
TASM Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Reserved Column . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Summary Mode. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
Spare Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76

Chapter 8: ResUsageShst Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79


Housekeeping Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Relational Primary Index Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Miscellaneous Housekeeping Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Statistics Columns. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
Host Controller Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
Reserved Column . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Summary Mode. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
Spare Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84

Chapter 9: ResUsageSldv Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .85


Housekeeping Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .85
Relational Primary Index Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .85
Miscellaneous Housekeeping Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .85
Statistics Columns. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .87
Logical Device Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .87
Reserved Column . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .89
Summary Mode. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .89
Spare Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .89

Chapter 10: ResUsageSpdsk Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .93


Housekeeping Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .93
Relational Primary Index Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .93
Miscellaneous Housekeeping Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .93
Statistics Columns. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .95
Teradata VS Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .95
Reserved Column. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .98
Summary Mode. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .98
Spare Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .98

Chapter 11: ResUsageSps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .101


Housekeeping Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .101
Relational Primary Index Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .101
Miscellaneous Housekeeping Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .102
Statistics Columns. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .104
TASM Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .104
Process Scheduling Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .109
File System Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .112
Memory Allocation Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .116
Net Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .116
Reserved Column. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .117
Spare Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .117


Chapter 12: ResUsageSvdsk Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123


Housekeeping Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
Relational Primary Index Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
Miscellaneous Housekeeping Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
Statistics Columns. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
Teradata VS Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
Reserved Column . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
Summary Mode. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
Spare Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128

Chapter 13: ResUsageSvpr Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131


Housekeeping Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
Relational Primary Index Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
Miscellaneous Housekeeping Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
Statistics Columns. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
File System Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
General Concurrency Control Database Locks Columns . . . . . . . . . . . . . . . . . . . . . . . . 146
Memory Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
Net Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
Process Scheduling Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
Teradata VS Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
Reserved Column . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158

Summary Mode. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159


Spare Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159

Chapter 14: Resource Usage Views . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163


ResCPUUsageByAMPView . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
ResCPUUsageByPEView . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
ResSawtView . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
ResScpuView . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
ResShstView. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
ResSldvView . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
ResSpdskView . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
ResSpmaView . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
ResSpsView . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175


ResSvdskView . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .178
ResSvprView . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .179

Chapter 15: Resource Usage Macros . . . . . . . . . . . . . . . . . . . . . . . . . . . . .181


Macro Output Format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .181
ResAWT Macros . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .183
ResCPUByAMP Macros . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .188
ResCPUByPE Macros . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .192
ResCPUByNode Macros. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .195
ResHostByLink Macros . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .198
ResLdvByNode Macros . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .200
ResMemMgmtByNode Macros . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .204
ResNetByNode Macros. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .208
ResNode Macros . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .211
ResPdskByNode Macros. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .217
ResPs Macros. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .220
ResVdskByNode Macros . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .223

Appendix A: How to Read Syntax Diagrams . . . . . . . . . . . . . . . . . . .227


Syntax Diagram Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .227

Appendix B: ResUsageIpma Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .233


Housekeeping Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .233
Relational Primary Index Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .233
Miscellaneous Housekeeping Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .233
Statistics Columns. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .235
Process Scheduling CPU Switching Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .235
Net Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .236
Reserved Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .239
Spare Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .240


Appendix C: ResUsageIvpr Table . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241


Housekeeping Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
Relational Primary Index Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
Miscellaneous Housekeeping Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
Statistics Columns. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
File System Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
General Concurrency Control Monitor Management Columns. . . . . . . . . . . . . . . . . . . 248
Net Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
Transient Journal Purge Overhead Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250

Summary Mode. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251


Spare Columns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252

Appendix D: ResIpmaView and ResIvprView Views . . . . . . . . 253


ResIpmaView. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
ResIvprView. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256

Appendix E: Partition Assignments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257


Table Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
Partition Assignment Listing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259

Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 265



CHAPTER 1

Introduction

This manual documents the resource usage data and settings for a variety of installation
configurations and environments in Chapter 2: Planning Your Resource Usage Data. To
implement the settings you decide on, see Chapter 3: Resource Usage and Procedures.
The only maintenance required is to purge old data regularly. See Purging Data.
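A purge can be as simple as a date-qualified DELETE. The following is only a sketch: it assumes the standard TheDate housekeeping column and an illustrative 30-day retention window; adjust the table name and window to your site, and see Purging Data for the supported procedure.

```sql
/* Sketch: delete ResUsageSpma rows older than 30 days.
   TheDate is the standard housekeeping date column; verify the
   column names on your release before running. */
DELETE FROM DBC.ResUsageSpma
WHERE TheDate < DATE - 30;
```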
For additional information on performance analysis and system tuning, see the following:
• Application Programming Reference
• Database Administration

Benefits of Resource Usage Data


Resource usage data is useful for the following purposes:
• Measuring system performance
• Measuring component performance
• Assisting with on-site job scheduling
• Identifying potential performance impacts
• Planning installation, upgrade, and migration
• Analyzing performance degradation and improvement
• Identifying problems such as bottlenecks, parallel inefficiencies, down components, and congestion

Overview of Resource Usage Data


Resource usage data is stored in system tables and views in the DBC database. Macros installed
with Teradata Database generate reports that display the data.
To load the resource usage views and macros, you can run the Database Initialization Program
(dip) script, DIPRUM. For more information on DIPRUM, see Utilities.
Note: You can run DIP scripts one at a time or all at once by running DIPALL. DIP scripts,
such as DIPRUM and DIPSYSFNC, can be executed in any order.
As with other database data, you can access resource usage data using SQL if you have the
proper privileges. You can also write your own queries or macros on resource usage data.
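For example, a simple ad hoc query against the ResUsageSpma table might look like the following. This is only a sketch: TheDate, TheTime, and NodeID are standard housekeeping columns, but verify the columns available on your release before relying on them.

```sql
/* List the logged SPMA rows for one day, one row per node
   and logging period (housekeeping columns only). */
SELECT TheDate, TheTime, NodeID
FROM DBC.ResUsageSpma
WHERE TheDate = DATE '2014-08-01'
ORDER BY TheTime, NodeID;
```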
The following table lists the topics covered by resource usage data.


• BYNET traffic on a node: point-to-point messaging, broadcast messaging, and merge activities.
• client-to-server traffic: data for each communication link.
• CPU utilization: overhead, user service, and time of session execution.
• storage device traffic: the number of reads/writes and the amount of data transferred, as seen from the storage driver.
• pdisk device traffic: pdisk I/O, cylinder allocation, and migration statistics.
• vdisk device traffic: all the cylinders allocated by an AMP (which can come from any pdisks in the clique).
• Priority Scheduler information: on SLES 10 or earlier systems, data by Performance Group (PG) from the Priority Scheduler and the ability to report resource usage data by Teradata Active System Management (TASM) workload definitions (WDs); on SLES 11 or later systems, data by Priority Scheduler WD. Note: Priority Scheduler is managed by TASM on SLES 10 systems, and is configured using the Teradata Viewpoint workload management portlets. For more information on those portlets, see Teradata Viewpoint User Guide.
• AMP Worker Task information: AMP Worker Task statistics.
• memory management activity: memory allocation.

Data Reporting
Data is reported once per logging period. When a new logging period starts, the data gathered in the Gather Buffer is moved to the Log Buffer and then logged to the database resource usage tables.

[Figure: resource usage data flow. Data collection macros and routines write into the Gather Buffer; data moves to the Log Buffer and through the ResUsage Write Queue into the ResUsage tables, from which the ResUsage reports are produced.]

Resource Usage Macros


Resource usage macros produce reports from data logged to the resource usage tables. They can generate reports for a selected time period and range of nodes.
You can use the reports to analyze key operational statistics and evaluate the performance of
your system.
Like other macros, resource usage macros consist of one or more Teradata SQL statements
stored in Teradata Database and executed by a single EXECUTE statement.
Refer to Chapter 3: Resource Usage and Procedures for more information on the resource
usage macros, and SQL Quick Reference for details about how to use the EXECUTE statement.


Application Programming Interfaces


You can use the Teradata application programming interfaces (APIs) to:
• Set the resource monitoring and logging rates
• Collect RSS data and return node, vproc, and WD oriented data
Examples of these APIs include the System PMPC SET RESOURCE RATE, MONITOR PHYSICAL RESOURCE, MONITOR VIRTUAL RESOURCE, and MONITOR WD requests. For more information on these APIs, see Application Programming Reference.


CHAPTER 2

Planning Your Resource Usage Data

Resource Usage Table Settings


The default resource usage settings provide a good starting point for system monitoring. The default results in the ResUsageSpma (SPMA) table being logged every 10 minutes (600 seconds).
The ResUsageSpma table provides a high-level summary of how the system is operating and contains summarized or key elements from most of the other tables. If you want to record the detailed statistics covered by any of the other resource usage tables, enable those tables for logging and specify the largest logging period that meets your needs. Do not log data that you have no planned need for, since doing so incurs additional database system overhead and uses additional database space.
The more tables you enable for logging and the shorter the logging period used, the more overhead the system will use.

Tables Based on Needed Reports


If you plan on using the report macros provided in Chapter 15: Resource Usage Macros, you
need to enable the associated table.
Related Topics
• For instructions on setting resource usage tables, see Enabling RSS Logging on page 27.
• For instructions on using macros, see General Macro Input Format on page 29 and Macro Execution on page 31.
• For descriptions and examples of the macros, see Chapter 15: Resource Usage Macros.


Types of Resource Usage Tables


The following table describes the tables and provides guidance about which ones to enable.
Note: The ResUsageIpma and ResUsageIvpr tables are intended primarily for Teradata engineers and are generally not used at customer sites.

ResUsageIpma
Covers: System-wide node information, intended primarily for Teradata engineers.
When you should enable: Generally not enabled at customer sites.

ResUsageIvpr
Covers: System-wide virtual processor information, intended primarily for Teradata engineers.
When you should enable: Generally not enabled at customer sites.

ResUsageSawt
Covers: Data specific to the AMP Worker Tasks.
When you should enable: When you want to monitor AMP Worker Task utilization and determine whether work is backing up because all of the AMP Worker Tasks are in use.

ResUsageScpu
Covers: Statistics on the CPUs within the nodes.
When you should enable: When performance analysis suggests that overall performance is limited, or to check whether a program is spinning in an infinite loop on an individual processor. For example, saturation of a particular CPU on each node, or on a particular node while others are idle, could indicate that a task always uses that CPU. Also enable this table when the system is first brought online to verify that all CPUs are functioning on all nodes and that there is a good load balance among the CPUs.

ResUsageShst
Covers: Statistics on the host channels and LANs that communicate with Teradata Database.
When you should enable: To determine details about the traffic over the IBM host channels and whether there is a bottleneck.

ResUsageSldv
Covers: System-wide, logical device statistics collected from the storage driver.
When you should enable: To observe the balance of disk usage. The storage device statistics are often difficult to interpret with disk arrays attached, due to multi-path access to disks. Note: Use the ResUsageSvdsk table first to observe general system disk utilization unless you are specifically debugging at a low level.

ResUsageSpdsk
Covers: Statistics collected from the pdisk devices.
When you should enable: To obtain detailed usage information about pdisks.

ResUsageSpma
Covers: System-wide node information that provides a summary of overall system utilization, incorporating the essential information from most of the other tables.
When you should enable: To provide an overall history of system operation. Use the columns in ResUsageSpma to view BYNET utilization. Note: The BYNET can transmit and receive at the same time, resulting in 100% transmitting and 100% receiving values simultaneously. Another method of determining BYNET utilization and traffic is to use the blmstat tool.

ResUsageSps
Covers: Statistics based on the WD the work is being performed for.
When you should enable: When you need to track utilization at the WD level.

ResUsageSvpr
Covers: Data specific to each virtual processor and its file system.
When you should enable: To view details about the resources being used by each vproc on the system. This table is useful for finding hot AMPs or PEs that may be CPU bound or throttled on other resources.

Logging Rate
The logging rate controls the frequency (number of seconds) at which resource usage data is
logged to the resource usage tables.
When you have decided what rate to set, see Chapter 3: Resource Usage and Procedures for
details on how to set the logging rate.

Logging Period
Resource usage logging means the writing of resource data as rows to one or more of the
resource usage database tables. The tables are named DBC.ResUsagexxxx, where xxxx is the
name of the resource usage table (for example, SPMA, IPMA, and so on) as listed in Types of
Resource Usage Tables on page 20.
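You can list the resource usage tables defined on your system with a data dictionary query such as the following sketch. It assumes the standard DBC.TablesV dictionary view (on older releases, the equivalent view is DBC.Tables).

```sql
/* Show which DBC.ResUsageXxxx tables exist in the data dictionary. */
SELECT TableName
FROM DBC.TablesV
WHERE DatabaseName = 'DBC'
  AND TableName LIKE 'ResUsage%'
ORDER BY TableName;
```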
The shorter the logging period, the more frequently data is logged, and the more disk space is
used.
When the system is so busy that the resource usage table logging gets backed up, RSS will
automatically double the logging period which effectively summarizes the data by providing
values for a time period twice that provided by the previous logging period.
If you see the resource usage logging rates change without user intervention, this means that
the database is busy. When no longer busy, the system resumes logging as before.


Note: Events in the event logs related to this doubling of the logging period do not represent
fatal errors but are informational to indicate that the automatic operations of the RSS are
attempting to maintain data logging.

Rule
The following rule on the logging rate is imposed by the system.
Intervals must evenly divide into 3600 (the number of seconds in an hour). For example, 600 is a valid rate because 3600 / 600 = 6. The valid logging rates are:
1, 10, 12, 15, 16, 18, 20, 24, 25, 30, 36, 40, 45, 48, 50, 60, 72, 75, 80, 90, 100, 120, 144, 150, 180, 200, 225, 240, 300, 360, 400, 450, 600, 720, 900, 1200, 1800, and 3600 seconds.
Rates shorter than 60 seconds are recommended only for short-term use when debugging a specific issue. Rates of 60 seconds and longer are recommended for production processing.
A practical minimum log interval during production processing is 60 seconds. Intermediate log intervals, such as 120 or 300 seconds, can also be used. The default rate is 600 seconds.
Rates and enabled tables may be changed at any time, and the changes take effect immediately.

Summary Mode
You can use Summary Mode to reduce the system overhead from logging tables that produce
multiple rows per logging period. Summary Mode helps reduce overhead by combining data
from multiple rows into one or more summary rows based on specific criteria for each table.
For example, if you want to log information provided in the ResUsageSvpr table but do not
need data for each individual vproc, use Summary Mode to produce one row per vproc type
instead of one row per vproc.
The ResUsageSpma table, in comparison, provides a node-level summary of key fields from most of the other resource usage tables. When more detail is required than the ResUsageSpma table provides, the next level of information is available through Summary Mode logging of the table of interest. This helps minimize the cost of data logging.


You can select Summary Mode for each table individually. For details on how Summary Mode
affects that particular table, see the description for each table.
For example, for the ResUsageSvpr table in Summary Mode, all the individual vproc rows of
the same vproc type are combined into a single row. Since the data values are added together,
you need to divide the summary row data value by the number of rows that made up the
Summary Mode row to get the average per vproc. For example, divide the AMP summary row
data value by the number of AMPs on that node to determine the average value per AMP. A
similar computation needs to be done to derive the average value per PE from the summary
row data value.
Note: To determine the number of AMPs, PEs, and all other vproc types on your system, you can use the ResUsageSpma table or the Vproc Manager utility.
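As a sketch of the computation described above, the following query derives a per-AMP average from AMP summary rows. FileAcqs is used here only as a representative summed statistic, and the AMP count (20) is a stand-in for the number of AMPs per node on your system; substitute your own column and count, and verify the column names on your release.

```sql
/* Average a summed Summary Mode statistic across the AMPs on a node.
   Assumes 20 AMPs per node; FileAcqs stands in for any summed column. */
SELECT TheDate, TheTime, NodeID,
       FileAcqs / 20.0 AS AvgFileAcqsPerAMP
FROM DBC.ResUsageSvpr
WHERE VprType = 'AMP'
ORDER BY TheDate, TheTime, NodeID;
```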
Resource usage columns that represent a maximum statistic are not summed together. Instead
the maximum value from the rows is used. For example, the ResUsageSvpr table
MsgWorkQLenMax column in the Summary Mode row for the AMPs will contain the
maximum value from all the AMP rows that would have been logged in non-Summary Mode.
The columns that represent a minimum statistic are summarized by storing the minimum
value from all the constituent rows.
Summary Mode has either no effect on the values of the Housekeeping Columns or it is
specifically detailed in the description of each affected field.
To enable Summary Mode, see Enabling RSS Logging on page 27.
For more information on Summary Mode, see Summary Mode in Resource Usage Tables on
page 42.

Active Row Filter Mode


Active Row Filter Mode reduces the overhead of logging for some of the resource usage tables by limiting the data rows that are logged to the database.
When you enable active row filtering, it may appear that rows are missing when you look at query results. This is because the index values of the inactive rows vary over time, so a row with a given index may be logged in one period but not in another. To determine whether rows are not being logged to the database, look in the event logs for messages indicating that rows have been lost.
Active Row Filtering should not be disabled for the ResUsageSps table.

Resource Usage Logging


The Cost of Logging
Logging resource usage data to database tables incurs costs:


• Writing to the database adds to the system I/O load. On a heavily loaded system, this could affect the production workload throughput.
• The rows written to the database take up space. If this space is never reclaimed, it will eventually grow to consume all available space in user DBC.
• In an extremely loaded system, it is possible that the RSS can fall behind in writing data to the database. Although it caches such data and eventually catches up if given a chance, the RSS is forced to start discarding rows if the system load persists and its cache capacity is exceeded.

Logging Cost Contributors


Logging costs are difficult to quantify. They depend on a number of interrelated factors:
• How busy the system is
• Which resource usage tables are enabled
• What resource usage logging rates are in effect
• The system configuration (vproc, CPU, host driver, logical devices or device controllers)

Operational Methods
To optimize performance and reduce the cost of resource usage logging on your system, Teradata recommends that you:
1 Use Summary Mode to reduce the number of rows inserted into the resource usage tables, if Summary Mode data provides sufficient information for your needs.
2 Do not disable Active Row Filter Mode for the tables that have it enabled by default (for example, the ResUsageSps table). Active Row Filter Mode limits the number of rows written to the database each logging period and minimizes the amount of system resources used.
3 Avoid unnecessarily using or exhausting available disk space by doing the following:
• Never enable logging on tables that you do not intend to use. For example, logging only to the ResUsageSpma table provides a lot of useful information with a minimal operational load on the system.
• Use the largest logging rates that provide enough detail for your purposes. Generally, use a logging rate no smaller than 60 seconds. The default rate is 600 seconds. These values can be adjusted at any time, regardless of whether the database system is busy; new values take effect as soon as the adjustment command is issued (for example, with ctl, when you issue the WRITE command).
• Purge old data from the resource usage tables periodically.


Related Topics
• For instructions on enabling resource usage tables, setting the logging rates, and summarizing or filtering rows, see Enabling RSS Logging on page 27.
• For instructions on purging old data from resource usage tables, see Purging Data on page 35.


CHAPTER 3

Resource Usage and Procedures

Enabling RSS Logging


You can enable the resource usage tables using the Control GDO Editor (ctl) utility or the
database commands in Database Window (DBW).
Before you enable the resource usage tables, determine which tables apply to the resource
usage macros you want to run. For more information, see:

• Resource Usage Table Settings on page 19.
• Logging Rate on page 21.

ctl
You can set various Teradata Database configuration settings using ctl. The RSS screen in ctl
allows you to specify the rates of resource usage data logging. For detailed information on
starting ctl and modifying the settings, see ctl in Utilities.

DBW
Note: For instructions on starting the Database Window, see Database Window (xdbw) in
Utilities.
To enable RSS logging from DBW
1 Open the Supvr window.
2 Set the Node Logging Rate using the database command below:
SET RESOURCE {LOGGING | LOG} number
where number is the number of seconds.
Note: A rate of zero disables the logging function.

3 Specify the table you want to enable logging to using the database command below:
SET LOGTABLE {tablename | ALL} {ON | OFF}
where tablename is the suffix part of ResUsageXxxx. For example, for the DBC.ResUsageSpma table, the tablename would be Spma.
After the table is enabled for logging, you can log rows in Summary Mode. For more information, see Summary Mode on page 22.
Note: To log rows in Summary Mode, you must enable the table specified in both the RSS Table Logging Enable group and in the RSS Summary Mode Enable group.
4 (Optional) Enable Summary Mode on the table specified using the command below:
SET SUMLOGTABLE tablename {ON | OFF}

Example
The following example shows you how to enable table logging and set the logging rate using the database commands in DBW. Suppose you want to enable the ResUsageShst table and set the logging rate to 10 minutes (600 seconds). You would enter the following:
set logtable shst on
set resource log 600


General Macro Input Format


As shown in the list below, there are four kinds of macros:
• Multiple-node
• One-node
• All-node
• ByGroup
For any given entry in the following list, the macros in that entry report the same statistics for multiple nodes, one node, all nodes, or grouped nodes, as indicated.
• AWTs in use by node: ResAWTByAMP and ResAWTByNode (multiple-node); ResAWT (one-node)
• CPU usage by AMP Vprocs: ResCPUByAMP (multiple-node); ResCPUByAMPOneNode (one-node); ResAmpCpuByGroup (ByGroup)
• CPU usage by PE Vprocs: ResCPUByPE (multiple-node); ResCPUByPEOneNode (one-node); ResPeCpuByGroup (ByGroup)
• CPU usage by nodes: ResCPUByNode (multiple-node); ResCPUOneNode (one-node); ResCpuByGroup (ByGroup)
• Host statistics: ResHostOneNode (one-node); ResHostByLink (all-node); ResHostByGroup (ByGroup)
• Ldv disk statistics: ResLdvByNode (multiple-node); ResLdvOneNode (one-node); ResLdvByGroup (ByGroup)
• Memory management: ResMemMgmtByNode (multiple-node); ResMemMgmtOneNode (one-node); ResMemByGroup (ByGroup)
• General network statistics: ResNetByNode (multiple-node); ResNetOneNode (one-node); ResNetByGroup (ByGroup)
• General node-level statistics: ResNodeByNode (multiple-node); ResOneNode (one-node); ResNode (all-node); ResNodeByGroup (ByGroup)
• pdisk level I/O statistics: ResPdskByNode (multiple-node); ResPdskOneNode (one-node); ResPdskByGroup (ByGroup)
• Priority Scheduler and TASM Workload statistics: ResPsByNode (multiple-node); ResPsByGroup (ByGroup)
• AMP level I/O statistics: ResVdskByNode (multiple-node); ResVdskOneNode (one-node); ResVdskByGroup (ByGroup)


Parameter Use for One-Node, Multiple-Node, All-Node, and Group Macros


The following list explains parameter use for one-node, multiple-node, all-node, and group macros.
• Multiple node: six parameters; uses the FromNode and ToNode node parameters.
• One node: five parameters; uses the Node parameter.
• All node: four parameters; uses no node parameters (these macros report system-wide statistics).
• Group: four parameters; uses no node parameters (these macros report statistics for all nodes in the group).
For instructions on using these macros, see Macro Execution on page 31.

Input Format for One-Node Macros


One-node macro versions are primarily used on single-node systems. Alternatively, you can
use the corresponding multiple-node macro to report on just one node by supplying equal
FromNode and ToNode parameters. One-node versions are recommended, however, because
they eliminate redundant report columns on a single-node system. Examples of redundant
columns are the NodeID column and columns that focus on cross-node load balancing.
OneNode macros have the same general input format as the other macros. The only differences are that the single-node version of each macro has both of the following:
• A OneNode qualifier in the macro name.
• A single Node specification instead of the FromNode and ToNode parameters that specify a range of nodes. The default is '001-01'.

Input Format for ByGroup Macros


ByGroup macro versions are used on systems with co-existing nodes. In Teradata Database,
co-existing nodes are nodes of different model types in the same configurations. Because of
the differences, the nodes may become bottlenecks in the throughput of the system as a whole.
Therefore, ByGroup macros were developed to provide the system user with a summary of the
performance data based on node groupings.
Note: The Database Administrator must identify the groupings of nodes when the system is
first configured.
ByGroup macros are similar to the other macros. The only difference is that they use the
GroupId column of the views to report system usage for a specific set of nodes grouped by a
GroupId. The input format of the ByGroup macros is the same as the other macros except
ByGroup appears as the qualifier in the macro name.
For an example of a ByGroup macro, see ResAmpCpuByGroup Sample Output on
page 190.

Retaining Data From Prior Releases


If you expect an ongoing need to retain and analyze data from prior releases, ask your System
Administrator to retain two sets of view and macro Data Definition Language (DDL) files in
separate places. Rename the views and macros so that you can use either.
You could, for example, use ResNodeR12 as the name of the resource usage macro from an
older release, and use it when you want to analyze the data from that release.
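After the prior release's macro DDL has been saved, the renaming itself might be done with a statement such as the following sketch (the macro names here are illustrative; verify that your release supports RENAME MACRO before using it).

```sql
/* Keep the old release's ResNode macro available under a new name,
   so the current release's version can be installed via DIPRUM. */
RENAME MACRO DBC.ResNode TO DBC.ResNodeR12;
```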

Macro Execution
Function
Macro execution is illustrated in the following diagram. For details about each macro and its
resulting report, see Chapter 15: Resource Usage Macros.

EXECUTE MACRO Syntax


The execution of each resource usage macro has one of the following forms. For information on interpreting the syntax diagrams, see Appendix A: How to Read Syntax Diagrams.
EXECUTE MacroNameMultiNode (FromDate, ToDate, FromTime, ToTime, FromNode, ToNode);
EXECUTE MacroNameOneNode (FromDate, ToDate, FromTime, ToTime, Node);
EXECUTE MacroNameAllNode (FromDate, ToDate, FromTime, ToTime);
EXECUTE MacroNameByGroup (FromDate, ToDate, FromTime, ToTime);
The abbreviation EXEC may be used in place of EXECUTE.
where:


Syntax element

Description

MacroNameMultiNode

Name of a multinode resource usage macro:

MacroNameAllNode

ResAwtByNode
ResCPUByAMP
ResCPUByPE
ResCPUByNode
ResLdvByNode
ResMemMgmtByNode

ResNetByNode
ResNodeByNode
ResPdskByNode
ResPsByNode
ResVdskByNode

Name of an all-node resource usage macro:


ResNode
ResHostByLink
The ResHostByLink and ResNode macros do not use the FromNode and
ToNode parameters.

MacroNameByGroup

Name of a ByGroup resource usage macro:

FromDate

ResAmpCpuByGroup
ResCPUByGroup
ResHostByGroup
ResLdvByGroup
ResMemByGroup
ResNetByGroup

ResNodeByGroup
ResPeCpuByGroup
ResPdskByGroup
ResPsByGroup
ResVdskByGroup

Start date to report resource usage data.


The date may be entered either as a character string (for example,
character format for May 31, 2007 would appear as '2007-05-31') or as a
numeric value (for the same date in numeric format, 1070531). The
character string is the recommended format. The default is the current
system date.
For more information on using numeric dates with macros, see SQL
Data Types and Literals.
Note: The character string date format is changed from yymmdd to
'yyyy-mm-dd' to accommodate dates in the 21st century.

ToDate

End date to report resource usage data.


See the FromDate syntax element column for a further explanation of
date formats.
The character string is the recommended format.

32

FromTime

Start time to report resource usage data. The format is hhmmss. The
default is 000000.

ToTime

End time to report resource usage data. The format is hhmmss. The
default is 999999.

FromNode

Starting range of nodes to report resource usage data. The format is


'nnn-nn'. A hyphen must be included in the fourth character position.
The default is '000-00'.
Note: To identify the node ID numbers for your system, type
get config in the DBW Supervisor Window (Supvr).

ToNode

Ending range of nodes to report resource usage data. The format is
'nnn-nn'. A hyphen must be included in the fourth character position.
The default is '999-99'.
Note: To identify the node ID numbers for your system, type
get config in the DBW Supervisor Window (Supvr).

Node

Single-node ID to report resource usage data. The format is 'nnn-nn',
and a hyphen must be included in the fourth character position. For
example, 1-01 should be typed out as '001-01'. The default is '001-01'.

Example 1: The ResCPUByAMP Macro


The following statement executes the ResCPUByAMP macro, producing a report for the
period beginning 8:00 a.m. on December 25, 2006 and ending 12:00 midnight on
December 31, 2006. It includes data for nodes 123-02 through 125-04.
EXECUTE ResCPUByAmp('2006-12-25','2006-12-31', 080000, 240000,
'123-02','125-04');

where:

ResCPUByAMP: Name of the resource usage macro
'2006-12-25': Starting date of December 25, 2006
'2006-12-31': Ending date of December 31, 2006
080000: Starting time of 8:00 a.m.
240000: Ending time of 12:00 midnight
'123-02': Starting node of a range of nodes
'125-04': Ending node of a range of nodes

For information on using numeric values for dates, see SQL Data Types and Literals.
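Following the numeric date format described under the FromDate syntax element (where May 31, 2007 is written as 1070531), the same report could also be requested with numeric dates. This is a sketch only; the character string format remains the recommended one:

```sql
EXECUTE ResCPUByAmp(1061225, 1061231, 080000, 240000,
    '123-02', '125-04');
```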

Example 2: The ResCPUByAMPOneNode Macro


The following statement executes the OneNode version of the ResCPUByAMP macro shown
in Example 1. It uses the same starting and ending dates and times (using character string
format), except the report is for a single node, node 123-02.

EXECUTE ResCpuByAmpOneNode ('2006-12-25','2006-12-31',080000,
240000,'123-02');

where:

ResCPUByAMPOneNode: Name of the resource usage macro
'2006-12-25': Starting date of December 25, 2006
'2006-12-31': Ending date of December 31, 2006
080000: Starting time of 8:00 a.m.
240000: Ending time of 12:00 midnight
'123-02': Node

For information on using numeric values for dates, see SQL Data Types and Literals.

Example 3: The ResAMPCpuByGroup Macro


The following statement executes the ByGroup version of the ResCPUByAmp macro shown in
Example 1. It uses the same starting and ending dates and times (using character string
format), except the report is for a node grouping.
EXECUTE ResAMPCpuByGroup ('2006-12-25','2006-12-31',080000,
240000);

where:

ResAMPCpuByGroup: Name of the resource usage macro
'2006-12-25': Starting date of December 25, 2006
'2006-12-31': Ending date of December 31, 2006
080000: Starting time of 8:00 a.m.
240000: Ending time of 12:00 midnight

For information on using numeric values for dates, see SQL Data Types and Literals.

Using ENABLE and DISABLE LOGON Commands


The DISABLE LOGONS command prevents new sessions from logging on. When logons are
disabled, resource usage data stops logging to the tables even if there are still active sessions
logged on. (DISABLE ALL LOGONS prevents all users, including user DBC, from logging on
and also stops logging to the tables.)
To enable logons from the:
• Database Window, run ENABLE LOGONS or ENABLE ALL LOGONS.
• Teradata Command Prompt, use the Start With Logons field of the Screen Debug menu of
ctl. For more information, see Control GDO Editor (ctl) in Utilities.

For more information on enabling and disabling logons, see Changing Logon States and
Restarting the System in Database Administration.

Purging Data
The RSS does not automatically delete data from the resource usage tables. You need to purge
data you no longer need on a regular basis.
You can directly remove old resource usage data by submitting SQL statements. For example,
use the following SQL statement to remove data more than seven days old from the
ResUsageSpma table:
DELETE FROM ResUsageSpma WHERE TheDate < CURRENT_DATE - 7;

For more information about the DELETE syntax, see Statement Syntax in SQL Data
Manipulation Language.


CHAPTER 4

Resource Usage Tables

Physical Table Naming Conventions


Each physical table name follows this general naming convention:
ResUsage Information_type Table_name
where:

Information_type is one of the following:
S = System-wide information
I = Internal Teradata Database information

Table_name is one of the following:
pma = Node information
vpr = Vproc information
cpu = CPU-specific information
ldv = Logical device statistics
pdsk = Pdisk device statistics
vdsk = Vdisk device statistics
awt = AMP Worker Task statistics
sps = WD resolution statistics
hst = Mainframe and workstation host information
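For example, the system-wide node table is named ResUsageSpma (ResUsage + S + pma). As a sketch, a query such as the following lists the resource usage tables defined on a system; it assumes the standard DBC.TablesV data dictionary view is available:

```sql
SELECT TableName
FROM DBC.TablesV
WHERE DatabaseName = 'DBC'
  AND TableName LIKE 'ResUsage%'
ORDER BY TableName;
```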

Relational Primary Index


All resource usage tables have the same nonunique primary index, which consists of the
TheDate, TheTime, and NodeID columns.
The primary index is nonunique because:
• Duplicate rows appear with the same timestamp during the daylight saving time change.
• Some tables have multiple rows per logging period.
Rows that have duplicate timestamps can be distinguished by the GmtTime column.
Because the primary index is nonunique, all resource usage tables are created as
MULTISET tables. This prevents the system from checking for duplicate rows.
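As a sketch of how such duplicates arise, a query along the following lines (using the ResUsageSpma table as an example) surfaces primary index values that occur more than once, typically during the daylight saving time change; the GmtTime column then distinguishes the individual rows:

```sql
SELECT TheDate, TheTime, NodeID, COUNT(*) AS RowCnt
FROM DBC.ResUsageSpma
GROUP BY TheDate, TheTime, NodeID
HAVING COUNT(*) > 1;
```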


For more information on MULTISET tables, see CREATE TABLE (Table Kind Clause) in
SQL Data Definition Language or Duplicate Rows in Tables in SQL Fundamentals.

Resource Usage Table Rows


For information on how rows will be inserted into these tables based on the current resource
usage control settings, see Chapter 2: Planning Your Resource Usage Data. For information
on the number of rows inserted in a resource usage table for each applicable log period, refer
to Summary Mode on page 22.

Occasional Event Data


Occasional event data is considered outside the scope of resource usage and is, therefore,
logged in the ERRORLOG and the DBCINFO tables rather than in the resource usage tables.

Types of Resource Usage Table Columns


This manual describes what each of the resource usage table columns reports (that is, what
each DBC.ResUsageXxxx.ColumnName reports) in a table format.
Note: The actual table definitions are obtainable by executing the SHOW TABLE statement.
For more information about SHOW TABLE, see SQL Data Definition Language.
All columns described in the following chapters and appendixes are type FLOAT unless
otherwise specified in the description of that column. All nonexistent values are stored as
NULL.
For each resource usage table column, this manual describes the:

Column Name

Mode
For descriptions of the different modes of data reporting, see About the Mode Column
on page 42.

Description

Data Type
For descriptions of the data types described in this manual, see SQL Data Types and
Literals.

The columns are grouped into either housekeeping columns or statistics columns. Statistics
columns are further grouped by category and subcategory. Each column description follows
this layout:

Column Name | Mode | Description | Data Type
(grouped under HOUSEKEEPING OR STATISTICS COLUMNS, then CATEGORY and Subcategory)

Each table has:
• Housekeeping columns, which contain statistics on timestamp and current logging
characteristics.
• Statistics columns, which can be further categorized into subcategories. Categories and
subcategories may vary from table to table.

The following table shows the types of statistics subdivided into their respective subcategories.

Category: File System
Subcategories: AutoCylPack; Block-level compression (BLC); Cylinder Defragmentation
Overhead; Cylinder Management Overhead; Events; Cylinder MiniCylPack Overhead;
Cylinder Split and Migrate Overhead; File Segment (FSG) Cache Wait; FSG I/O; Data Block
Creations; Data Block Merge; Data Block Prefetches; Data Block Update Operations; Data
Segment Lock Requests; Depot; Master Index (MI); Multi-Row Requests; Segments Acquired;
Segments Released; Single-Row Requests; Synchronized Full Table Scans; Transient Journal
Requests; Transient Journal Overhead; Write Ahead Logging (WAL)
Description: Some of the columns can be viewed as a subset of memory columns by
expanding on the operations performed on disk memory segments. Operations counted are
logical memory and physical disk reads and writes (including aging) and locking control
activities. Other columns identify the purpose of operations being performed on disk
segments, such as cylinder migration or data updates; the requests being made by database
software on the file system; the number of cylinders that were selected or scanned by
AutoCylPack; or the number of data blocks that have been compressed or uncompressed.
The WAL columns identify the log-based file system recovery scheme in which modifications
to permanent data are written to a log file, the WAL log.

Category: General Concurrency Control
Subcategories: Database Locks; Monitor Management
Description: These columns identify concurrency control activities, subdivided into control
done for user-level processing, system overhead processing, and database locks. They do not
include control specific to disk, memory, or net concurrency control, which is included in the
disk, memory, or net columns.

Category: Host Controller (SHST)
Subcategories: Channel Traffic; Channel Management; Controller Overhead
Description: These columns identify traffic on the host-to-node channels and TCP/IP
networks. Some also give overhead and management information on the host channel and
TCP/IP network.

Category: Memory
Subcategories: Memory Allocations; Memory Availability Management; Memory Pages;
Memory Resident; Paging; Task Context Segment Usage
Description: These columns collect memory allocation and deallocation; logical memory and
physical disk reads and writes (including paging); and access, deaccess, and memory control.
Memory management columns are also provided to identify events leading up to paging and
aging activities. Finally, a detailed snapshot of the memory is provided by tracking the current
states per memory type.

Category: Logical Device
Subcategories: Input and Output Traffic; Outstanding Requests; Response Time; Seek
Statistics
Description: These columns identify individual logical device activities for external storage
components connected through the buses. The storage device statistics are calculated only
from what can be derived from statistics collected by the operating system, since the disk
array controllers do not provide any useful data for resource usage.

Category: Net
Subcategories: Broadcast Net Traffic; Group Coordination; Message Type; Message Delivery
Times; Merge Services; Net Controller Status and Miscellaneous Management; Net Circuit
Management; Net Miscellaneous Contention Management; Net Queues; Network Transport
Data; Point-to-point Net Traffic; Work Mailbox Queue
Description: These columns identify traffic over the BYNET through the number and
direction of messages, subdivided into the type of transmission, as well as physical utilization
of the BYNET. Logical messages and direction are identified through subdivisions of the
message class. Controller overhead, channel utilization, and Teradata net contention are
identified as well.


Category: Process Scheduling
Subcategories: ChnSignal Status Tracking; CPU Utilization; Cylinder Read; Interrupted CPU
Switching; PE and AMP User-Defined Function (UDF) CPU; Process Allocation; Process
Pending Snapshot; Process Block Counts; Process Pending Wait Time; Scheduled CPU
Switching
Description: These columns provide a CPU-level snapshot of work started with current
characteristics and states. Expanded detail is provided for work started but waiting on
resources. This identifies the ability or inability of the system to effectively utilize resources.
Time allotments are tracked by monitoring the time spent waiting for resources or processing
code. Some of these columns also track the number of times processing was switched to
another process for multitasking purposes or to perform interrupt services; others can
provide information about whether the UDFs were doing work for the AMP and PE vprocs.

Category: Reserved
Subcategories: None
Description: These columns are not used.

Category: Spare
Subcategories: None
Description: These columns are reserved for a future release or are for internal manipulation
by Teradata developers.

Category: TASM
Subcategories: AMP Worker Task; In use and Max Array Data; Priority Scheduler; Worktype
Descriptions; Monitor WD
Description: These columns report statistics on the AMP Worker Tasks (AWTs) and Priority
Scheduler. These columns also provide RSS data to the System PMPC APIs (for example, the
MONITOR WD request and MonitorWD function). For information, see Application
Programming Reference.

Category: Transient Journal Management
Subcategories: Purge Overhead
Description: These columns identify background overhead associated with the occasional
transient journal purge operation.

Category: User Commands
Subcategories: User command; User command Arrival and Departure
Description: These columns describe the types of commands given to Teradata Database by
the user and the progress of those commands.

Category: Teradata Virtual Storage (VS)
Subcategories: Allocation; I/O; Migration; Node Agent
Description: These columns identify individual pdisk and vdisk device activities.

Invalid Columns
Some of the resource usage table columns described in this manual are not currently valid.
These columns are shaded in gray. For example:

SampleColumn (track, FLOAT)
An example of a column that is not currently valid.


About the Mode Column


The Mode column describes the kind of data reported by the resource usage column.

• count: these columns tally the number of times an event occurred, such as the number of
disk reads or writes during the logging period.
• max: these columns report the maximum value recorded during the logging period. Most
of the max columns have a Max suffix in the column name (for example, IoRespMax).
• min: these columns report the minimum value recorded during the logging period. The
min columns have a Min suffix in the column name (for example, AvailableMin).
• track: these columns show the value of a countable item achieved at the end of the current
logging period. An example of a countable item is a queue length.

Summary Mode in Resource Usage Tables


Summary Mode combines data from the multiple data rows normally generated into one or
more rows. When multiple rows are condensed into a single row, the data is combined using
the following rules.

• Count fields are combined by summing the values from all the contributing rows. For
example, the ResUsageSawt.FlowCtlCnt field provides the total number of times that the
system entered the flow control state from a non-flow control state. In normal mode, the
values are reported per AMP. In Summary Mode, the value needs to be divided by the
number of AMPs if you wish to determine the average number per AMP rather than the
total.
• Max fields are combined by taking the maximum value from all the contributing rows. In
Summary Mode, the reported value for a Max field such as
ResUsageSpdsk.ConcurrentWriteMax is the maximum value from all the rows that are
combined into a single summary row.
• Min fields are combined by taking the minimum value from all the contributing rows. In
Summary Mode, the reported value for a Min field, such as AvailableMin, is the minimum
value from all the rows that are combined into a single summary row.


• Track fields are combined by summing the values from all the contributing rows. In
Summary Mode, the Track values from each row to be combined are summed together. For
example, ResUsageSawt.FlowControlled is a track field, so in Summary Mode all the AMP
vproc rows are combined into a single row and the FlowControlled field reports the
summed value from each of the AMP vproc rows.

Summary Mode is applicable to all tables except ResUsageSpma, ResUsageIpma, and
ResUsageSps. If the information for a row of a table is in Summary Mode, the SummaryFlag
value is set to 'S'. If the row is being logged normally, the SummaryFlag value is set to 'N'.
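Because count and track fields are summed across the contributing AMP rows, a per-AMP average can be recovered from a summary row by dividing by the number of AMPs on the node. The following is a sketch that assumes the FlowCtlCnt field of ResUsageSawt and the Vproc1 (AMP count) column of ResUsageSpma:

```sql
SELECT s.TheDate, s.TheTime, s.NodeID,
       s.FlowCtlCnt / NULLIFZERO(p.Vproc1) AS AvgFlowCtlPerAMP
FROM DBC.ResUsageSawt s
JOIN DBC.ResUsageSpma p
  ON  s.TheDate = p.TheDate
  AND s.TheTime = p.TheTime
  AND s.NodeID  = p.NodeID
WHERE s.SummaryFlag = 'S';
```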


CHAPTER 5

ResUsageScpu Table

This resource usage table contains resource usage information specific to the CPUs within the
nodes. The ResUsageScpu table includes resource usage data for available system-wide CPU
information.
Note: This table is created as a MULTISET table. For more information, see Relational
Primary Index on page 37.

Housekeeping Columns
Relational Primary Index Columns
These columns, taken together, form the nonunique primary index. Each column is listed
below as ColumnName (Mode, Data Type), followed by its description.

TheDate (n/a, DATE)
Date of the log entry.

TheTime (n/a, FLOAT)
Nominal time of the log entry.
Note: Under conditions of heavy system load, entries may be logged late (typically, by no
more than one or two seconds), but this column will still contain the time value when the
entry should have been logged. See the Secs and NominalSecs columns.

NodeID (n/a, INTEGER)
Node ID. The node ID is formatted as CCC-MM, where CCC denotes the three-digit cabinet
number and MM denotes the two-digit chassis number of the node. For example, a node in
chassis 9 of cabinet 3 has a node ID of '003-09'.
Note: Symmetric Multi-Processing (SMP) nodes have a chassis and cabinet number of 1. For
example, the node ID of an SMP node is '001-01'.

Miscellaneous Housekeeping Columns
These columns provide statistics on current logging characteristics.

GmtTime (n/a, FLOAT)
Greenwich Mean Time, which is not affected by the Daylight Savings Time adjustments that
occur twice a year.

NodeType (n/a, CHAR(8))
Type of node, representing the per node system family type. For example, 5600C or 5555H.

CPUId (n/a, SMALLINT)
Identifies the CPU within this node. The values are 0 through NCPUs-1. In Summary Mode,
the value is -1.

Secs (n/a, SMALLINT)
Actual number of seconds in the log period represented by this row. Normally the same as
NominalSecs, but can be different in three cases:
• The first interval after a log rate change
• A sample logged late because of load on the system
• System clock adjustments affect reported Secs
Useful for normalizing the count statistics contained in this row, for example, to a per-second
measurement.

CentiSecs (n/a, INTEGER)
Number of centiseconds in the logging period. This column is useful when performing data
calculations with small elapsed times, where the difference between centisecond-based data
and whole seconds results in a percentage error.

NominalSecs (n/a, SMALLINT)
Specified or nominal number of seconds in the logging period.

SummaryFlag (n/a, CHAR(1))
Summarization status of this row. Possible values are 'N' if the row is a non-summary row
and 'S' if the row is a summary row.

Active (max, FLOAT)
Controls whether or not the rows will be logged to the resource usage tables if Active Row
Filter Mode is enabled. If Active is set to a non-zero value, the row contains data columns. If
Active is set to a zero value, none of the data columns in the row have been updated during
the logging period. For example, if you enable Active Row Filter Mode, the rows that have a
zero Active column value will not be logged to the resource usage tables.

TheTimestamp (n/a, BIGINT)
Number of seconds since midnight, January 1, 1970. This column is useful for aligning data
with the DBQL log.

CODFactor (n/a, SMALLINT)
Platform Metering (PM) CPU COD value in tenths of a percent. For example, a value of 500
represents a PM CPU COD value of 50%. The value is set to 1000 if PM CPU COD is
disabled.

Statistics Columns
Process Scheduling CPU Utilization Columns
These columns count all CPU activities, including activities performed for virtual processors.
The CPU utilization columns are aggregates representing all CPUs on the node. CPU
utilization by user code is further subdivided by the vproc tables.


CPU idle time = CPUIdle + CPUIoWait

CPU busy time = CPUUServ + CPUUExec

Theoretically, the values of these four columns, for any given interval, account for total CPU
time on the node. That is, these columns should total to 100 * Secs * number of CPUs on the
node, since each CPU is always in exactly one of these four states. In practice, there is
occasionally a very small plus or minus difference from this theoretical total.
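This accounting can be sketched with a query such as the following, which sums the four state columns across the CPUs of each node and reports the busy percentage for each interval (the table is queried directly here for brevity; the manual recommends using the resource usage views):

```sql
SELECT TheDate, TheTime, NodeID,
       100 * SUM(CPUUServ + CPUUExec)
           / NULLIFZERO(SUM(CPUIdle + CPUIoWait + CPUUServ + CPUUExec))
           AS BusyPct
FROM DBC.ResUsageScpu
GROUP BY TheDate, TheTime, NodeID;
```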
CPUIdle (count, FLOAT)
Time in centiseconds the CPU is idle and not waiting for I/O.

CPUIoWait (count, FLOAT)
Time in centiseconds the CPU is waiting for I/O completion.
Note: This represents another variety of idle time, since the CPU is only recorded as being in
this state if there are no processes eligible for execution. This is because if there were any such
process, the CPU would be immediately dispatched for that process.

CPUUServ (count, FLOAT)
Time in centiseconds the CPU is busy executing kernel system calls or servicing I/O and timer
hardware interrupts.

CPUUExec (count, FLOAT)
Time in centiseconds the CPU is busy executing user execution code, that is, time spent in a
user state on behalf of a process.

Reserved Columns

Reserved (n/a, CHAR(1))
Note: This column is not used.

Reserved2 (n/a, CHAR(2))
Note: This column is not used.

Summary Mode
When Summary Mode is active for tables in this group, one row is written to the database for
each node, summarizing all CPUs per node, for each log interval.
You can determine if a row is in Summary Mode by checking the SummaryFlag column for
that row: if the SummaryFlag column value is 'S', the data for that row is being logged in
Summary Mode; if it is 'N', the data is being logged normally.


Spare Columns
The ResUsageScpu table spare fields are named Spare00 through Spare04, and SpareInt.
The SpareInt field has a 32-bit internal resolution while all other spare fields have a 64-bit
internal resolution. All spare fields default to count data types but can be converted to min,
max, or track type data fields if needed when they are used.
The following table describes the Spare field currently being used.

Spare00
Workload management (WM) CPU COD value in tenths of a percent. For example, a value of
500 represents a WM CPU COD value of 50.0%. The value is set to 1000 if the WM CPU
COD is disabled.
Note: This field will be converted to the WM_CPU_COD column in Teradata Database 15.0.
You can access resource usage data for this field using the WM_CPU_COD column name in
the ResScpuView view. For details, see ResScpuView on page 169.
Note: WM CPU COD is not supported on SLES 10. Its value is set to 1000 on SLES 10.

Related Topics
For details on the different types of data fields, see About the Mode Column on page 42.


CHAPTER 6

ResUsageSpma Table

The ResUsageSpma table includes resource usage data for available system-wide information.
You can use the ResUsageSpma table to identify node-level skew by comparing maximum
CPU usage to the average CPU consumed across all nodes.
The ResUsageSpma table is similar to the ResUsageIpma table. For information on this table,
see Appendix B: ResUsageIpma Table.
Note: Summary Mode is not applicable to the ResUsageSpma table.
This table is created as a MULTISET table. For more information, see Relational Primary
Index on page 37.
Note: Always use the views provided in Chapter 14: Resource Usage Views to access the data
rather than accessing the resource usage table directly.
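As a sketch of such a skew check, a query along the following lines compares the busiest node to the average across all nodes for each log interval; it assumes the CPUUServ and CPUUExec CPU columns of this table, and queries the table directly for brevity rather than through ResSpmaView:

```sql
SELECT TheDate, TheTime,
       MAX(CPUUServ + CPUUExec) AS MaxNodeCPU,
       AVG(CPUUServ + CPUUExec) AS AvgNodeCPU,
       MAX(CPUUServ + CPUUExec)
           / NULLIFZERO(AVG(CPUUServ + CPUUExec)) AS SkewRatio
FROM DBC.ResUsageSpma
GROUP BY TheDate, TheTime
ORDER BY TheDate, TheTime;
```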

Housekeeping Columns
Relational Primary Index Columns
These columns, taken together, form the nonunique primary index.

TheDate (n/a, DATE)
Date of the log entry.

TheTime (n/a, FLOAT)
Nominal time of the log entry.
Note: Under conditions of heavy system load, entries may be logged late (typically, by no
more than one or two seconds), but this column will still contain the time when the entry
should have been logged. For more information, see the Secs and NominalSecs columns.

NodeID (n/a, INTEGER)
Node ID. The node ID is formatted as CCC-MM, where CCC denotes the three-digit cabinet
number and MM denotes the two-digit chassis number of the node. For example, a node in
chassis 9 of cabinet 3 has a node ID of '003-09'.
Note: SMP nodes have a chassis and cabinet number of 1. For example, the node ID of an
SMP node is '001-01'.

Miscellaneous Housekeeping Columns
These columns provide a generalized picture of the vprocs running on this node, shown as
Type n virtual processors where n = 1 to 7. Under the current implementation, only Type 1
(AMP), Type 2 (PE), Type 3 (GTW), Type 4 (RSG), and Type 5 (TVS) vprocs exist; vproc
types 6 and 7 are not currently used.

GmtTime (n/a, FLOAT)
Greenwich Mean Time, which is not affected by the Daylight Savings Time adjustments that
occur twice a year.

NodeType (n/a, CHAR(8))
Type of node, representing the per node system family type. For example, 5100C, 5600C, or
5555H.

TheTimestamp (n/a, BIGINT)
Number of seconds since midnight, January 1, 1970. This column is useful for aligning data
with the DBQL log.

CentiSecs (n/a, INTEGER)
Actual number of centiseconds in the logging period. This column is useful when performing
data calculations with small elapsed times, where the difference between centisecond-based
data and whole seconds results in a percentage error.

Secs (n/a, SMALLINT)
Actual number of seconds in the log period represented by this row. Normally the same as
NominalSecs, but can be different in three cases:
• The first interval after a log rate change
• A sample logged late because of load on the system
• System clock adjustments affect reported Secs
Useful for normalizing the count statistics contained in this row, for example, to a per-second
measurement.

NominalSecs (n/a, SMALLINT)
Specified or nominal number of seconds in the logging period.

CODFactor (n/a, SMALLINT)
PM CPU COD value in tenths of a percent. For example, a value of 500 represents a PM CPU
COD value of 50%. The value is set to 1000 if PM CPU COD is disabled.

NCPUs (n/a, SMALLINT)
Number of CPUs on this node. This column is useful for normalizing the CPU utilization
column values for the number of CPUs on the node. This is especially important in
coexistence systems, where the number of CPUs can vary across system nodes.

Vproc1 (n/a, SMALLINT)
Current count of type 1 (AMP) virtual processors running on the node.

Vproc2 (n/a, SMALLINT)
Current count of type 2 (PE) virtual processors running on the node.

Vproc3 (n/a, SMALLINT)
Current count of type 3 (GTW) virtual processors running on the node.

Vproc4 (n/a, SMALLINT)
Current count of type 4 (RSG) virtual processors running on the node.

Vproc5 (n/a, SMALLINT)
Current count of type 5 (TVS) virtual processors running on the node.

Vproc6 (n/a, SMALLINT)
Current count of type 6 virtual processors running on the node. This column reports zeros.

Vproc7 (n/a, SMALLINT)
Current count of type 7 virtual processors running on the node. This column reports zeros.

VprocType1 (n/a, CHAR(4))
Type of virtual processor for Vproc1. When the vproc is present on the node, the value is
AMP.

VprocType2 (n/a, CHAR(4))
Type of virtual processor for Vproc2. When the vproc is present on the node, the value is PE.

VprocType3 (n/a, CHAR(4))
Type of virtual processor for Vproc3. When the vproc is present on the node, the value is
GTW.

VprocType4 (n/a, CHAR(4))
Type of virtual processor for Vproc4. When the vproc is present on the node, the value is
RSG.

VprocType5 (n/a, CHAR(4))
Type of virtual processor for Vproc5. When the vproc is present on the node, the value is
TVS.

VprocType6 (n/a, CHAR(4))
Type of virtual processor for Vproc6.
Note: This column is not currently valid. It should not be used.

VprocType7 (n/a, CHAR(4))
Type of virtual processor for Vproc7.
Note: This column is not currently valid. It should not be used.

MemSize (n/a, INTEGER)
Amount of memory on this node in megabytes. Useful for performing memory usage
calculations.

NodeNormFactor (n/a, INTEGER)
A per-CPU normalization factor that is used to normalize the reported CPU values of the
ResUsageSpma table to a single 5100 CPU. This value is scaled by a factor of 100. For
example, if the actual factor is 5.25, the value of NodeNormFactor will be 525.

Active (max, FLOAT)
Set to a non-zero value whenever one of the other data columns in the row is set.

NetSamples (count, FLOAT)
Sample count for sampled statistics for a BYNET.
Note: NetSamples is used to normalize all net time monitored statistics to a percent-of-time
basis. For example, dividing (NetTxIdle/NetSamples) yields the transmitter-idle time ratio for
the net statistics.
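As a sketch of how these housekeeping columns normalize the CPU counters, the following query expresses node CPU busy time as a percentage of the total available CPU centiseconds (CentiSecs * NCPUs); it assumes the CPUUServ and CPUUExec columns of this table and queries the table directly for brevity:

```sql
SELECT TheDate, TheTime, NodeID,
       100 * (CPUUServ + CPUUExec)
           / NULLIFZERO(CentiSecs * NCPUs) AS BusyPct
FROM DBC.ResUsageSpma
ORDER BY TheDate, TheTime, NodeID;
```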


Statistics Columns
File System Columns
Segments Acquired Columns
These columns identify the total disk memory segments acquired by the file system during
the log period. For more information, see Segment Acquires Columns in Chapter 13:
ResUsageSvpr Table.

FileAcqKB (count, FLOAT)
Total KB logically acquired by FileAcqs.
Note: Use the views provided in Chapter 14: Resource Usage Views instead of accessing the
data for this column directly from this table.

FileAcqOtherKB (count, FLOAT)
Total number of scratch disk segments acquired, in KB.

FileAcqReadKB (count, FLOAT)
Total KB physically read by FileAcqReads.

FileAcqReads (count, FLOAT)
Total number of disk segment acquires that caused a physical read.

FileAcqs (count, FLOAT)
Total number of disk segments that were logically acquired.
Note: Only the FileAcqs column is counted as a cache hit.

FileAcqsOther (count, FLOAT)
Total number of scratch disk segments that were logically acquired.
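Since FileAcqs counts logical segment acquires and FileAcqReads counts the acquires that required a physical read, a file system cache hit percentage can be sketched as follows. The hit-rate interpretation is an assumption, and the table is queried directly for brevity:

```sql
SELECT TheDate, TheTime, NodeID,
       100 * (FileAcqs - FileAcqReads)
           / NULLIFZERO(FileAcqs) AS AcqCacheHitPct
FROM DBC.ResUsageSpma;
```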

Segments Released Columns
These columns identify the total disk memory segments released by the file system, as well as
those segments that are dropped from memory during the log period. For more information,
see Segments Released Columns in Chapter 13: ResUsageSvpr Table.

FileRels (count, FLOAT)
Total number of disk segments released by tasks.

FileRelsOther (count, FLOAT)
Total number of scratch disk segments released as part of being deleted.

FileRelKB (count, FLOAT)
Total KB logically released by FileRels.
Note: Use the views provided in Chapter 14: Resource Usage Views instead of accessing the
data for this column directly from this table.

FileRelOtherKB (count, FLOAT)
Total number of scratch disk segments released, in KB.

FileWriteKB (count, FLOAT)
Total KB physically written by FileWrites.
Note: This column may produce a negative number due to overflow. To avoid this issue, see
ResSpmaView on page 173 to access the ResUsageSpma table data.

FileWrites (count, FLOAT)
Total number of disk segment immediate or delayed physical writes.

Data Block Prefetches Columns


These columns summarize the effects of prefetching data blocks on the file system. For more information, see Data Block Prefetches Columns in Chapter 13: ResUsageSvpr Table.
A prefetch is either a cylinder read operation or an individual block read operation; either operation is generically called a prefetch.
When all cylinder slots are in use, cylinder reads revert to the original algorithm of a block-at-a-time read ahead. The column FilePreKB is therefore the sum of the sizes of data blocks logically read by either cylinder reads or data block pre-reads. The same applies to physical pre-reads: FilePreReadKB includes both physical cylinder reads and single-block pre-reads.
The number of data blocks that are pre-read at a time is controlled by the DBS Control performance parameter ReadAhead Count. The default is one block pre-read at a time.
If you enable cylinder reads, extra sectors are read in on cylinder reads. An accurate calculation of the wasted KB read by cylinder reads is not possible, because there are legitimate logical pre-reads that do not incur physical pre-reads.
For more information about cylinder reads, see Database Administration.
Column Name | Mode | Description | Data Type
FilePreKB | count | Sum of the sizes of data blocks logically loaded with data prefetches (either cylinder reads or individual block reads). | FLOAT
For cylinder reads, this column does not include the disk sectors in between the loaded data blocks.
Note: Use the views provided in Chapter 14: Resource Usage Views instead of accessing the data for this column directly from this table.
FilePreReadKB | count | Size of the data prefetch (cylinder section or individual blocks being read) that is physically loaded from disk. | FLOAT
For cylinder reads, this column includes the disk sectors in between the loaded data blocks.
Note: This column may produce a negative number due to overflow. To avoid this issue, see ResSpmaView on page 173 to access the ResUsageSpma table data.
FilePreReads | count | Number of times a data prefetch was physically performed, either as a cylinder read or an individual block read. | FLOAT
FilePres | count | Total number of times a logical data prefetch was performed (either as a cylinder read or individual block reads). | FLOAT

Data Segment Lock Requests Columns


These columns summarize the number of lock requests, blocks, and deadlocks on a disk
segment.
For more information, see Data Segment Lock Requests Columns in Chapter 13:
ResUsageSvpr Table.
Column Name | Mode | Description | Data Type
FileLockBlocks | count | Number of lock requests that were blocked. | FLOAT
FileLockDeadlocks | count | Number of deadlocks detected on lock requests. | FLOAT
FileLockEnters | count | Number of times a lock was requested. | FLOAT

Depot Columns
These columns summarize the physical writes to the Depot used to protect in-place
modifications.
Column Name | Mode | Description | Data Type
FileLargeDepotBlocks | count | Total number of blocks (either WAL or database) that have been protected by large Depot writes. | FLOAT
Because a large Depot write protects multiple blocks, the following calculation gives the average number of blocks protected by each large Depot write: FileLargeDepotBlocks / FileLargeDepotWrites
FileLargeDepotWrites | count | Number of large writes to the Depot performed to protect in-place modifications. Each large Depot write protects multiple in-place writes of either WAL data blocks or database data blocks. The large Depot is typically used when blocks age out of memory in the background. Large Depot writes are also counted against FileWrites; therefore, FileWrites still indicates the total writes regardless of whether it was a Depot write or a database write. | FLOAT
FileSmallDepotWrites | count | Number of small writes to the Depot performed to protect in-place modifications. Each small Depot write protects a single in-place write of either a write ahead logging (WAL) data block or a database data block. The small Depot is typically used when the in-place writes are initiated by a foreground task. Small Depot writes are also counted against FileWrites; therefore, FileWrites still indicates the total writes regardless of whether it was a Depot write or a database write. | FLOAT
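The average-blocks calculation given for FileLargeDepotBlocks can be sketched as follows (Python; the values are hypothetical):

```python
def avg_blocks_per_large_depot_write(file_large_depot_blocks,
                                     file_large_depot_writes):
    """FileLargeDepotBlocks / FileLargeDepotWrites, guarding against
    logging periods with no large Depot writes."""
    if file_large_depot_writes == 0:
        return 0.0
    return file_large_depot_blocks / file_large_depot_writes

print(avg_blocks_per_large_depot_write(1200.0, 300.0))  # 4.0
```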

General Concurrency Control Database Locks Columns


These columns identify database locking occurrences.
Column Name | Mode | Description | Data Type
DBLockBlocks | count | Number of times a database lock was blocked. | FLOAT
DBLockDeadlocks | count | Number of times a database lock was deadlocked. | FLOAT

Host Controller Channel and TCP/IP Network Traffic Columns


These columns identify the traffic between the host and the node channel and TCP/IP network connections at three levels of granularity:
• Blocks
• Messages
• KB
Blocks are made up of a number of variable-sized messages. ReadKB and WriteKB identify the KB involved in the traffic.
For host controller columns that provide overhead and management information, see Chapter 8: ResUsageShst Table for details.

Column Name | Mode | Description | Data Type
HostBlockReads | count | Number of blocks read in from the host. | FLOAT
HostBlockWrites | count | Number of blocks written out to the host. | FLOAT
HostMessageReads | count | Number of messages read in from the host. | FLOAT
HostMessageWrites | count | Number of messages written out to the host. | FLOAT
HostReadKB | count | KB transferred in from the host. | FLOAT
HostWriteKB | count | KB transferred out to the host. | FLOAT


Memory Columns
Memory Allocation Column
Column Name | Mode | Description | Data Type
MemVprAllocKB | track | Change in memory. | FLOAT
MemVprAllocKB represents a delta from the previous reporting period and will report negative values as less memory is used.
Note: The original meaning of this column was the total KB attributed to allocations and size-increasing alters for vproc memory types.

Memory Pages Column


Column Name | Mode | Description | Data Type
MemFreeKB | track | Approximate amount of memory that is available for use. The Linux operating system uses most free memory for buffers and caching to improve performance, but the operating system can reclaim that memory if it is needed by programs. | FLOAT
The RSS uses the following formula to calculate the MemFreeKB value:
MemFreeKB = MemFree + Buffers + Cached + SwapCached - (fsgavailpgs * kbperpage) - (active_slabs * pgsperslab * kbperpage)
where MemFree, Buffers, Cached, and SwapCached come from /proc/meminfo; fsgavailpgs comes from the PDE FSG subsystem; and active_slabs and pgsperslab come from /proc/slabinfo.
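A sketch of the MemFreeKB formula above (Python; the input values are hypothetical — on a live system they would come from /proc/meminfo, the PDE FSG subsystem, and /proc/slabinfo):

```python
def mem_free_kb(mem_free, buffers, cached, swap_cached,
                fsgavailpgs, active_slabs, pgsperslab, kbperpage=4):
    """MemFreeKB = MemFree + Buffers + Cached + SwapCached
                   - fsgavailpgs*kbperpage
                   - active_slabs*pgsperslab*kbperpage"""
    return (mem_free + buffers + cached + swap_cached
            - fsgavailpgs * kbperpage
            - active_slabs * pgsperslab * kbperpage)

# Hypothetical values (KB for the first four arguments, counts for the rest).
print(mem_free_kb(mem_free=2048, buffers=512, cached=4096, swap_cached=0,
                  fsgavailpgs=256, active_slabs=10, pgsperslab=8))  # 5312
```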

Memory Availability Management Columns


These columns identify the overhead of managing memory when memory availability is a problem.

Column Name | Mode | Description | Data Type
MemCtxtPageReads | count | Number of pages swapped in. | FLOAT
MemCtxtPageWrites | count | Number of pages swapped out. | FLOAT
MemTextPageReads | count | Number of pages paged minus the pages swapped in. | FLOAT


Net Columns
Point-to-Point Net Traffic Columns
These columns identify the number (Reads, Writes) and amount (ReadKB, WriteKB) of input
and output messages passing through the Teradata Database nets through point-to-point
(1:1) methods (PtP). It excludes TCP/IP traffic.
Column Name | Mode | Description | Data Type
MsgPtPReadKB | count | Total KB of net point-to-point messages input to processes on the node via the message subsystem. | FLOAT
MsgPtPReads | count | Number of net point-to-point messages input to processes on the node via the message subsystem. | FLOAT
MsgPtPWriteKB | count | Total KB of net point-to-point messages output from processes on the node via the message subsystem. | FLOAT
MsgPtPWrites | count | Number of net point-to-point messages output from processes on the node via the message subsystem. | FLOAT

Broadcast Net Traffic Columns


These columns identify the number (Reads, Writes) and amount (ReadKB, WriteKB) of input
and output messages passing through the Teradata Database nets through broadcast (1:many)
methods (Brd).
Note: If a single broadcast message is delivered to multiple processes in this node, the
NetBrdReads and NetBrdReadKB are only incremented once.
Column Name | Mode | Description | Data Type
MsgBrdReadKB | count | Total KB of net broadcast messages input to processes on the node via the message subsystem. | FLOAT
MsgBrdReads | count | Number of net broadcast messages input to processes on the node via the message subsystem. | FLOAT
MsgBrdWriteKB | count | Total KB of net broadcast messages output from processes on the node via the message subsystem. | FLOAT
MsgBrdWrites | count | Number of net broadcast messages output from processes on the node via the message subsystem. | FLOAT

Network Transport Data Columns


These columns identify the number (Reads, Writes) and amount of input and output passing
through the Teradata Database nets.
On a single-node (virtual network [vnet]) system, net-specific statistics are not meaningful
and are always zero.


Column Name | Mode | Description | Data Type
NetMsgPtpWriteKB | count | Amount of point-to-point message data in KB transmitted by both Bynets. | FLOAT
NetMsgPtpWrites | count | Number of point-to-point messages transmitted by both Bynets. | FLOAT
NetMsgBrdWriteKB | count | Amount of broadcast message data in KB transmitted by both Bynets. | FLOAT
NetMsgBrdWrites | count | Number of broadcast messages transmitted by both Bynets. | FLOAT
NetMsgPtpReadKB | count | Amount of point-to-point message data in KB received by both Bynets. | FLOAT
NetMsgPtpReads | count | Number of point-to-point messages received by both Bynets. | FLOAT
NetMsgBrdReadKB | count | Amount of broadcast message data in KB received by both Bynets. | FLOAT
NetMsgBrdReads | count | Number of broadcast messages received by both Bynets. | FLOAT
NetRxKBBrd | count | Total broadcast KB received over all Bynets. | FLOAT
NetRxKBPtP | count | Total point-to-point KB received over all Bynets. | FLOAT
NetTxKBBrd | count | Total broadcast KB transmitted over all Bynets. | FLOAT
NetTxKBPtP | count | Total point-to-point KB transmitted over all Bynets. | FLOAT

Net Controller Status and Miscellaneous Management


These columns provide utilization and other status information about the Teradata Database
net controllers.
On a single-node (VNET) system, net-specific statistics are not meaningful and always report
zero.
Column Name | Mode | Description | Data Type
NetTxConnected | count | Number of samples showing the transmitter connected on a Bynet. | FLOAT
NetRxConnected | count | Number of samples showing the receiver connected on a Bynet. | FLOAT
NetTxRouting | count | Number of samples showing the transmitter routing on a Bynet. | FLOAT
NetTxIdle | count | Number of samples showing the transmitter idle on a Bynet. | FLOAT
NetRxIdle | count | Number of samples showing the receiver idle on a Bynet. | FLOAT

Net Circuit Management Columns


The Net Circuit Management columns identify the management of Teradata net circuits
(Circ) and raw data traffic on the network (hardware) on all networks.


Column Name | Mode | Description | Data Type
NetBackoffs | count | Software backoffs, defined as BNS service blocked occurrences, without regard for which net was involved. | FLOAT
NetHWBackoffs | count | Hardware backoffs reported by the BLM for all Bynets. | FLOAT
NetRxCircBrd | count | Total number (both normal and high priority) of broadcast circuits received on all Bynets. | FLOAT
NetRxCircPtp | count | Total number (both normal and high priority) of point-to-point circuits received on all Bynets. | FLOAT
NetTxCircBrd | count | Total number (both normal and high priority) of broadcast circuits transmitted on all Bynets. | FLOAT
NetTxCircHPBrd | count | Number of high priority broadcast circuits transmitted on all Bynets. | FLOAT
NetTxCircHPPtP | count | Number of high priority point-to-point circuits transmitted on all Bynets. | FLOAT
NetTxCircPtp | count | Total number (both normal and high priority) of point-to-point circuits transmitted on all Bynets. | FLOAT

Group Coordination Messages Columns


These columns identify messages that are communicated through the Teradata Database net
for coordination of a process among a group of vprocs. Coordination is handled either
through semaphores, groups, or channels.
Column Name | Mode | Description | Data Type
NetChanInUse | track | Number of channels in use at the current time. | FLOAT
NetChanInUseMax | max | Maximum number of channels in use. | FLOAT
MsgChnLastDone | count | Number of last done events that occurred on this node. | FLOAT
Note: The last AMP to finish an operation may send a last done broadcast message indicating the work is done for this step. This is used in tracking down the slowest node or AMP in the system. A node or AMP that has more last done messages than the others could be a bottleneck in the system performance.
NetGroupInUse | track | Number of groups in use at the current time. This number should be the same across all nodes. | FLOAT
NetGroupInUseMax | max | Maximum number of groups in use during each log interval. | FLOAT
NetSemInUse | track | Number of semaphores in use at the current time. | FLOAT
NetSemInUseMax | max | Maximum number of semaphores in use during each log interval. | FLOAT


Merge Services Columns


These columns identify activity occurring through merge (many:1) methods (Mrg) on
Teradata Database net.
Column Name | Mode | Description | Data Type
NetMrgRxKB | count | Number of KB received, without regard to which net, by merge receive services for currently active merge operations. | FLOAT
NetMrgRxRows | count | Number of data rows received, without regard to which net, by merge receive services for currently active merge operations. | FLOAT
NetMrgTxKB | count | Number of KB transmitted, without regard to which net, by merge transmission services for currently active merge operations. | FLOAT
NetMrgTxRows | count | Number of data rows transmitted, without regard to which net, by merge transmission services for currently active merge operations. | FLOAT

Process Scheduling Columns


Process Allocation Columns
These columns represent all currently allocated processes, subdivided into the possible process states of running, ready, blocked, or suspended.

Column Name | Mode | Description | Data Type
ProcBlocked | track | Number of threads blocked waiting for I/O at the current time. | FLOAT
ProcReady | track | Number of runnable or ready tasks, also called threads, able to execute on CPUs when a CPU becomes available. | FLOAT
ProcReadyMax | max | Maximum number of ready tasks, also called threads, able to execute on CPUs when a CPU becomes available. | FLOAT

Process Pending Snapshot Columns


These columns identify how many processes are blocked for each possible reason. Excluding ProcPendDBLock, these columns total approximately ProcBlocked, because a process can be blocked on only one blocking type at a time.
Note: In analyzing resource usage, a distinction should be made between the following two kinds of process blocks:
• A block involving a process that is logically idle, waiting to receive work on its primary mailbox or for a timer to elapse. This block does not affect throughput.
• A block involving a process that has work to do but is being prevented from proceeding by some circumstance such as a segment lock or flow control. This kind of block does affect throughput.
The first kind of block is represented by the column ProcPendNetRead; the second kind is represented by the remaining columns described here.


Column Name | Mode | Description | Data Type
ProcPendDBLock | track | Number of processes blocked pending database locks. | FLOAT
ProcPendFsgLock | track | Number of processes blocked pending an FSG lock. | FLOAT
ProcPendFsgRead | track | Number of processes blocked pending a File Segment (FSG) read from disk. | FLOAT
ProcPendFsgWrite | track | Number of processes blocked pending an FSG write to disk. | FLOAT
ProcPendMemAlloc | track | Number of processes blocked pending memory allocations. | FLOAT
ProcPendMisc | track | Number of processes blocked pending miscellaneous events. | FLOAT
ProcPendMonitor | track | Number of processes blocked pending a user monitor. | FLOAT
ProcPendMonResume | track | Number of processes blocked pending a user monitor resume from a yield. | FLOAT
ProcPendNetRead | track | Number of processes blocked pending non-step work, that is, the number of processes blocked on any mailbox other than the work mailbox. | FLOAT
Note: Non-step work is anticipated work the process spawned off and is now waiting for some type of response from the spawned process or processes. Non-step work is not unanticipated work such as a new work request sent when a user initiates a request from the host.
ProcPendNetThrottle | track | Number of processes blocked pending delivery of outstanding outgoing messages. | FLOAT
ProcPendQnl | track | Number of processes blocked pending a TSKQNL lock. | FLOAT
ProcPendSegLock | track | Number of processes blocked pending a segment lock. | FLOAT

Process Block Count Columns


These columns identify how many times a process became blocked, by blocking type. The average time blocked can be approximated by dividing the corresponding process pending wait time column by the process block count column. For a list of these columns, see Process Pending Wait Time Columns on page 62.
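The approximation described above — pairing each ProcWait* column with its ProcBlks* counterpart — can be sketched as follows (Python; the values are hypothetical):

```python
def avg_blocked_centisecs(waits, blocks):
    """Approximate average time blocked (centiseconds per block event)
    per blocking type, e.g. ProcWaitFsgRead / ProcBlksFsgRead."""
    return {k: (waits[k] / blocks[k] if blocks[k] else 0.0) for k in waits}

avgs = avg_blocked_centisecs(
    waits={"FsgRead": 900.0, "DBLock": 50.0},   # ProcWait* totals
    blocks={"FsgRead": 300.0, "DBLock": 25.0},  # ProcBlks* counts
)
print(avgs)  # {'FsgRead': 3.0, 'DBLock': 2.0}
```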
Column Name | Mode | Description | Data Type
ProcBlksFsgRead | count | Number of process blocks for an FSG read from disk. | FLOAT
ProcBlksFsgWrite | count | Number of process blocks for an FSG write to disk. | FLOAT
ProcBlksDBLock | count | Number of process blocks for database locks. The AMP Worker Task can do other work while the lock is blocked. | FLOAT
ProcBlksFsgLock | count | Number of process blocks for an FSG lock. | FLOAT
ProcBlksTime | count | Number of process blocks waiting only for timer expiration. | FLOAT
ProcBlksMemAlloc | count | Number of process blocks for memory allocations. | FLOAT
ProcBlksMisc | count | Number of process blocks for miscellaneous events. | FLOAT
ProcBlksMonitor | count | Number of process blocks for a user monitor. | FLOAT
ProcBlksMonResume | count | Number of process blocks for a user monitor resume from a yield. | FLOAT
ProcBlksMsgRead | count | Number of process blocks for non-step work. | FLOAT
ProcBlksNetThrottle | count | Number of process blocks for delivery of outstanding outgoing messages. | FLOAT
ProcBlksQnl | count | Number of process blocks for a TSKQNL lock. | FLOAT
ProcBlksSegLock | count | Number of process blocks for a disk or task context (for example, scratch, stack, and so on) segment lock. | FLOAT

Process Pending Wait Time Columns


These columns identify how long processes were blocked for each possible reason listed below.
Note: Because this time is only accounted for when a blocked process leaves the blocked state, it is possible for this statistic to be much larger than the amount of time available to all processes in a single log period.
Column Name | Mode | Description | Data Type
ProcWaitDBLock | count | Total time in centiseconds processes were blocked pending database locks. | FLOAT
ProcWaitFsgLock | count | Total time in centiseconds processes were blocked pending an FSG lock. | FLOAT
ProcWaitFsgRead | count | Total time in centiseconds processes were blocked pending an FSG read from disk. | FLOAT
ProcWaitFsgWrite | count | Total time in centiseconds processes were blocked pending an FSG write to disk. | FLOAT
ProcWaitMemAlloc | count | Total time in centiseconds processes were blocked pending memory allocations. | FLOAT
ProcWaitMisc | count | Total time in centiseconds processes were blocked pending miscellaneous events. | FLOAT
ProcWaitMonitor | count | Total time in centiseconds processes were blocked pending a user monitor. | FLOAT
ProcWaitMonResume | count | Total time in centiseconds processes were blocked pending a user monitor resume from a yield. | FLOAT
ProcWaitMsgRead | count | Total time in centiseconds processes were blocked pending non-step work. | FLOAT
ProcWaitNetThrottle | count | Total time in centiseconds processes were blocked pending delivery of outstanding outgoing messages. | FLOAT
ProcWaitPageRead | count | Total time in centiseconds processes were blocked pending a page read from disk. | FLOAT
ProcWaitTime | count | Total time in centiseconds processes were blocked pending some amount of elapsed time only. | FLOAT
ProcWaitQnl | count | Total time in centiseconds processes were blocked pending a TSKQNL lock. | FLOAT
ProcWaitSegLock | count | Total time in centiseconds processes were blocked pending a disk or task context (for example, scratch, stack, and so on) segment lock. | FLOAT

CPU Utilization Columns


The CPU utilization columns count all CPU activities, including activities performed for virtual processors, and represent the sum of all CPUs on the node.
To obtain the average node CPU value for each column (CPUIdle, CPUIoWait, CPUUServ, CPUUExec), divide the column data by the number of CPUs per node (the value in the NCPUs column) and the number of centiseconds (CentiSecs column) in the logging interval.
• CPU idle time = CPUIdle + CPUIoWait
• CPU busy time = CPUUServ + CPUUExec
The NodeNormFactor is the per-node normalization factor; it is related to the NodeType value reported in this resource usage table. The normalization factor modifies the reported CPU times to the equivalent time of a specified virtual processor, so normalized values do not add up to the reported CPU time.
To calculate the non-normalized total CPU time, use the following formula:
CentiSecs x NCPUs = CPUIdle + CPUIoWait + CPUUServ + CPUUExec
Note: The CPU times returned in centiseconds are more accurate than those returned in seconds.
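The averaging rule above can be sketched as follows (Python; the interval values are hypothetical):

```python
def node_cpu_percent(cpu_idle, cpu_io_wait, cpu_userv, cpu_uexec,
                     ncpus, centisecs):
    """Average node CPU busy/idle percentages for a logging interval,
    using CentiSecs x NCPUs = CPUIdle + CPUIoWait + CPUUServ + CPUUExec."""
    total = ncpus * centisecs
    busy = cpu_userv + cpu_uexec
    idle = cpu_idle + cpu_io_wait
    return 100.0 * busy / total, 100.0 * idle / total

# Hypothetical 4-CPU node over a 600-centisecond logging interval.
busy_pct, idle_pct = node_cpu_percent(1000.0, 200.0, 400.0, 800.0,
                                      ncpus=4, centisecs=600)
print(busy_pct, idle_pct)  # 50.0 50.0
```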
Column Name | Mode | Description | Data Type
CPUIdle | count | Time in centiseconds CPUs are idle and not waiting for I/O. | FLOAT
CPUIoWait | count | Time in centiseconds CPUs are idle and waiting for I/O completion. | FLOAT
Note: This time represents another variety of idle time, since a CPU is only in this state if there are no processes eligible for execution. If a process were available, the CPU would be immediately dispatched for that process.
CPUUExec | count | Time in centiseconds CPUs are busy executing user execution code, that is, time spent in a user state on behalf of a process. CPUUExec reports the CPU time not spent in system calls or in the kernel. | FLOAT
CPUUServ | count | Time in centiseconds CPUs are busy executing user service code, that is, privileged work performing system services on behalf of user execution processes that do not have root access. CPUUServ reports the CPU time used while a task executing a step was in the kernel. | FLOAT

TASM Columns
AMP Worker Task Columns
These columns report statistics about the AMP Worker Tasks.
For more information about the ResUsageSawt table and columns, see Chapter 7:
ResUsageSawt Table.
Column Name | Mode | Description | Data Type
AwtFlowControlled | track | Number of AMPs currently in flow control on the work input mailbox. | FLOAT
AwtFlowCtlCnt | count | Number of times this log period that the node entered the flow control state from a non-flow controlled state. | FLOAT
AwtInuse | track | Number of AMP Worker Tasks currently in use for this node. | FLOAT
AwtInuseMax | max | Peak number of AWTs for any one of the AMPs on the node. | FLOAT
Note: AwtInuseMax is not the peak number in use on the node for all AMPs at any one point in time. AwtInuseMax represents the largest number of AWTs in use on any single AMP.

Priority Scheduler Columns


These columns provide data specific to the Priority Scheduler.
For more information about the ResUsageSps table and columns, see Chapter 11:
ResUsageSps.

Column Name | Mode | Description | Data Type
PSNumRequests | count | Number of work requests received for all Performance Groups on this node. | FLOAT
PSQWaitTime | count | Time in centiseconds that work requests waited on an input queue before being serviced. To get an approximate average QWaitTime per request during this period, divide QWaitTime by NumRequests. | FLOAT
PSServiceTime | count | Time in centiseconds that work requests required for service. To get an approximate average ServiceTime per request during this period, divide ServiceTime by NumRequests. | FLOAT
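The per-request averages described for PSQWaitTime and PSServiceTime can be sketched as follows (Python; the values are hypothetical):

```python
def ps_request_averages(ps_qwait_time, ps_service_time, ps_num_requests):
    """Approximate average queue-wait and service time per work request
    (centiseconds): PSQWaitTime / PSNumRequests and
    PSServiceTime / PSNumRequests."""
    if ps_num_requests == 0:
        return 0.0, 0.0
    return (ps_qwait_time / ps_num_requests,
            ps_service_time / ps_num_requests)

print(ps_request_averages(1500.0, 4500.0, 500.0))  # (3.0, 9.0)
```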

Teradata VS Columns
These columns identify pdisk I/O statistics that are reported by the Node Agent.
Column Name | Mode | Description | Data Type
TvsReadCnt | count | Number of logical device reads. The TvsReadCnt column is a summary of the ReadCnt column in the SPDSK table. | FLOAT
TvsReadRespMax | max | Maximum read response time value during the reporting period. | FLOAT
TvsReadRespTot | count | Total of individual read response times in centiseconds. The TvsReadRespTot column is a summary of the ReadRespTot column in the SPDSK table. | FLOAT
TvsWriteCnt | count | Number of logical device writes. The TvsWriteCnt column is a summary of the WriteCnt column in the SPDSK table. | FLOAT
TvsWriteRespMax | max | Maximum response time of logical device writes during the reporting period. | FLOAT
TvsWriteRespTot | count | Total of individual write response times in centiseconds. TvsWriteRespTot is a summary of the WriteRespTot column in the SPDSK table. | FLOAT

User Command Columns


These columns summarize the type of statements given to Teradata Database by the user.
For more information, see Chapter 8: ResUsageShst Table.
Column Name | Mode | Description | Data Type
CmdDDLStmts | count | Number of alter, modify, drop, create, replace, grant, or revoke commands. | FLOAT
CmdDeleteStmts | count | Number of delete commands. | FLOAT
CmdInsertStmts | count | Number of insert commands. | FLOAT
CmdSelectStmts | count | Number of select commands. | FLOAT
CmdUpdateStmts | count | Number of update commands. | FLOAT
CmdUtilityStmts | count | Number of utility commands. | FLOAT
CmdOtherStmts | count | Number of other commands. | FLOAT

User Command Arrival and Departure Columns


These columns summarize the arrival and departure of user statements.
For more information, see Chapter 8: ResUsageShst Table.
Column Name | Mode | Description | Data Type
CmdStmtErrors | count | Number of statements that departed in error. | FLOAT
CmdStmtFailures | count | Number of statements that departed in failure or were aborted. | FLOAT
CmdStmtSuccesses | count | Number of statements that departed normally. | FLOAT

Reserved Column

Column Name | Mode | Description | Data Type
Reserved | n/a | Note: This column is not used. | CHAR (2)

Spare Columns
The ResUsageSpma table spare fields are named Spare00 through Spare19, and SpareInt.
The SpareInt field has a 32-bit internal resolution, while all other spare fields have a 64-bit internal resolution. All spare fields default to the count data type but can be converted to min, max, or track data fields if needed when they are used.
The following table describes the Spare fields currently being used.

66

Resource Usage Macros and Tables

Chapter 6: ResUsageSpma Table


Spare Columns

Column Name

Description

Spare02

Number of I/Os completed on data blocks that were marked for contiguous
write.
You can get the average number of data blocks combined into a single I/O
by using the calculation:
FileContigWBlocks / FileContigWIOs
Note: This field will be converted to the FileContigWIOs column in
Teradata Database 15.0. You can access resource usage data for this field
using the FileContigWIOs column name in ResSpmaView view. For details,
see ResSpmaView on page 173.

Spare03

Number of data blocks marked for contiguous write.


Data blocks created using the contiguous write scheme are accumulated in
the cache until the optimal I/O size, the end of the cylinder, or the end of
the data created has been reached. At such a point, the blocks will be
written with a single I/O and removed from the cache.
Note: This field will be converted to the FileContigWBlocks column in
Teradata Database 15.0. You can access resource usage data for this field
using the FileContigWBlocks column name in the ResSpmaView view. For
details, see ResSpmaView on page 173.

Spare04

Total size, in KB, of the data blocks that were marked for contiguous write.
You can get the average size of the writes that were candidates for being
combined into one single I/O by using the calculation:
FileContigWKB /FileContigWBlocks
Note: This field will be converted to the FileContigWKB column in
Teradata Database 15.0. You can access resource usage data for this field
using the FileContigWKB column name in the ResSpmaView view. For
details, see ResSpmaView on page 173.

Spare05

Number of times the TDAT Control Group (cgroup) query is throttled due
to the WM CPU COD. A cgroup provides a mechanism for aggregating or
partitioning sets of tasks, and all its future children, into hierarchical
groups with a specialized behavior.
Note: This field will be converted to the CpuThrottleCount column in
Teradata Database 15.0. You can access resource usage data for this field
using the CpuThrottleCount column name in the ResSpmaView view. For
details, see ResSpmaView on page 173.

Spare06

Time, in centiseconds, that the TDAT cgroup query is throttled due to the
WM CPU COD.
The data is reported to RSS in nanoseconds. During the data gathering, the
RSS converts the data to centiseconds.
Note: This field will be converted to the CpuThrottleTime column in
Teradata Database 15.0. You can access resource usage data for this field
using the CpuThrottleTime column name in the ResSpmaView view. For
details, see ResSpmaView on page 173.

Resource Usage Macros and Tables

67

Chapter 6: ResUsageSpma Table


Spare Columns

Column Name

Description

Spare07

Sum of the full potential input or output token allocations from all devices
attached to the node.
The input or output token allocations are not limited by the IO_COD
setting.
The value is computed by summing the cod_full_potential_iota field for
each device in the /proc/tdmeter/disk_cod_stats file.
Note: This field will be converted to the FullPotentialIota column in
Teradata Database 15.0. You can access resource usage data for this field
using the FullPotentialIota column name in the ResSpmaView view. For
details, see ResSpmaView on page 173.

Spare08

Sum of potential input or output token allocations from all devices


attached to the node.
The input or output token allocations are accumulated only when an I/O is
pending on a device and are limited by the IO_COD setting.
The value is computed by summing the cod_allocated_iota field for each
device in the /proc/tdmeter/disk_cod_stats file.
Note: This field will be converted to the CodPotentialIota column in
Teradata Database 15.0. You can access resource usage data for this field
using the CodPotentialIota column name in the ResSpmaView view. For
details, see ResSpmaView on page 173.

Spare09

Sum of used input or output token allocations from all devices attached to
the node.
The value is computed by summing the cod_used_iota field for each device
in the /proc/tdmeter/disk_cod_stats file.
Note: This field will be converted to the UsedIota column in Teradata
Database 15.0. You can access resource usage data for this field using the
UsedIota column name in the ResSpmaView view. For details, see
ResSpmaView on page 173.

Spare10

Current size of the VH cache, in KB. This field is populated by the FSG
subsystem.
Note: This field will be converted to the VHCacheKB column in Teradata
Database 15.0. You can access resource usage data for this field using the
VHCacheKB column name in the ResSpmaView view. For details, see
ResSpmaView on page 173.
For more information about VH cache, see Glossary on page 261.

Spare11

WM CPU COD value in one tenths of a percent. For example, a value of


500 represents a WM CPU COD value of 50.0%.
The value is set to 1000 if the WM CPU COD is disabled.
Note: This field will be converted to the WM_CPU_COD column in
Teradata Database 15.0. You can access resource usage data for this field
using the WM_CPU_COD column name in the ResSpmaView view. For
details, see ResSpmaView on page 173.
Note: WM CPU COD is not supported on SLES 10. Its value is set to 1000
on SLES 10.
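The tenths-of-a-percent encoding used by the WM CPU COD columns (and the CODFactor housekeeping column) can be decoded as follows. This is an illustrative sketch; the function name is mine, and it treats the documented sentinel value 1000 as "disabled" (equivalent to 100%).

```python
def decode_wm_cpu_cod(raw):
    """Interpret a WM CPU COD value stored in tenths of a percent.
    Returns (percent, disabled): 500 -> (50.0, False);
    1000 means the COD limit is disabled -> (100.0, True)."""
    if raw == 1000:
        return (100.0, True)
    return (raw / 10.0, False)
```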


Spare12

WM I/O COD value in whole percent. For example, a value of 50 represents


a WM I/O COD value of 50.0%.
The value is set to 100 if the WM I/O COD is disabled.
Note: This field will be converted to the WM_IO_COD column in
Teradata Database 15.0. You can access resource usage data for this field
using the WM_IO_COD column name in the ResSpmaView view. For
details, see ResSpmaView on page 173.
Note: WM I/O COD is not supported on SLES 10. Its value is set to 100 on
SLES 10.

SpareInt

PM I/O COD value in whole percent values for the entire system. For
example, a value of 50 represents a PM I/O COD value of 50%.
The value is set to 100 if the PM I/O COD is disabled.
Note: This field will be converted to the PM_IO_COD column in Teradata
Database 15.0. You can access resource usage data for the SpareInt field
using the PM_IO_COD column name in the ResSpmaView view. For
details, see ResSpmaView on page 173.

Related Topics
For more information about the different types of data fields, see About the Mode Column
on page 42.


CHAPTER 7

ResUsageSawt Table

The ResUsageSawt table contains:

The current and maximum number of AMP Worker Tasks in use by work type

Flow control information

You can use the ResUsageSawt table to:

Report the pattern in AMP Worker Task usage for standard or expedited work types.

Monitor the length of each message queue of the AMP, the queue which holds work
messages from the Dispatcher that are waiting to get an AMP Worker Task.

Identify if one or more AMPs have entered the state of flow control, and how often, during
the logging interval.

If you enable table logging, the data is written to the database once for each log period.
To consolidate and summarize the total number of rows written to the database, you can
enable Summary Mode. For details, see Summary Mode on page 76.
Note: This table is created as a MULTISET table. For more information see Relational
Primary Index on page 37.

Housekeeping Columns
Relational Primary Index Columns
These columns taken together form the nonunique primary index.
Column Name

Mode

Description

Data Type

TheDate

n/a

Date of the log entry.

DATE

TheTime

n/a

Nominal time of the log entry.

FLOAT

Note: Under conditions of heavy system load, entries may be


logged late (typically, by no more than one or two seconds), but this
column will still contain the time value when the entry should have
been logged. For more information, see the Secs and NominalSecs
columns.


NodeID

n/a

Node ID on which the vproc resides. The Node ID is formatted as


CCC-MM, where CCC denotes the three-digit cabinet number and
MM denotes the two-digit chassis number of the node. For
example, a node in chassis 9 of cabinet 3 has a node ID of '003-09'.

INTEGER

Note: SMP nodes have a chassis and cabinet number of 1. For


example, the node ID of an SMP node is '001-01'.

Miscellaneous Housekeeping Columns


These columns provide statistics on current logging characteristics.
Column Name

Mode

Description

Data Type

GmtTime

n/a

Greenwich Mean Time is not affected by the Daylight Savings Time


adjustments that occur twice a year.

FLOAT

NodeType

n/a

Type of node, representing the per node system family type. For
example, 5600C or 5555H.

CHAR (8)

VprId

n/a

Identifies the vproc number. All vprocs in this table are AMPs, so
there is no VprType column provided. In Summary Mode, this
column is -1.

INTEGER

Secs

n/a

Actual number of seconds in the log period represented by this row.
Normally the same as NominalSecs, but can be different in three
cases:
The first interval after a log rate change
A sample logged late because of load on the system
System clock adjustments that affect the reported Secs
Useful for normalizing the count statistics contained in this row, for
example, to a per-second measurement.

SMALLINT
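The normalization to a per-second measurement mentioned above can be sketched as follows. This is an illustrative helper of my own, not a Teradata macro; it assumes the count and Secs values are taken from the same row.

```python
def per_second(count_value, secs):
    """Normalize an interval count column (for example, FlowCtlCnt)
    to a per-second rate using the row's actual Secs, since a log
    period can run longer or shorter than NominalSecs."""
    if secs <= 0:
        return 0.0
    return count_value / secs
```

For example, a count of 1200 over a 600-second log period is a rate of 2.0 per second.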
CentiSecs

n/a

Number of centiseconds in the logging period. This column is


useful when performing data calculations with small elapsed times
where the difference between centisecond-based data and whole
seconds results in a percentage error.

INTEGER
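The percentage error the CentiSecs description refers to can be seen with a worked example. The numbers below are illustrative, not from a real log row: a 1.49-second period rounds to 1 whole second in Secs, so a rate computed from whole seconds overstates the true rate by about 49%.

```python
# A 1.49-second log period: Secs = 1, CentiSecs = 149.
centisecs = 149
secs = 1
count = 100.0

rate_coarse = count / secs                # rate from whole seconds
rate_fine = count / (centisecs / 100.0)   # rate from centiseconds
error_pct = (rate_coarse - rate_fine) / rate_fine * 100
# error_pct is about 49% for this short interval
```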

NominalSecs

n/a

Specified or nominal number of seconds in the logging period.

SMALLINT

SummaryFlag

n/a

Summarization status of this row. If the value is 'N,' the row is a


non-summary row. If the value is 'S,' the row is a summary row.

CHAR (1)

Active

max

Controls whether or not the rows will be logged to the resource


usage tables if Active Row Filter Mode is enabled.

FLOAT

If Active is set to a non-zero value, the row contains data columns.


If Active is set to a zero value, none of the data columns in the row
have been updated during the logging period.
For example, if you enable Active Row Filter Mode, the rows that
have a zero Active column value will not be logged to the resource
usage tables.


TheTimestamp

n/a

Number of seconds since midnight, January 1, 1970.

BIGINT

This column is useful for aligning data with the DBQL log.
CODFactor

n/a

PM CPU COD value in one tenths of a percent. For example, a


value of 500 represents a PM CPU COD value of 50.0%.

SMALLINT

The value is set to 1000 if the PM CPU COD is disabled.

Statistics Columns
TASM Columns
AMP Worker Task Columns
These columns report statistics about the AMP Worker Tasks.
Column Name

Mode

Description

Data Type

Available

track

Number of unreserved AMP Worker Tasks from the pool that are
not being used at the end of the interval.

FLOAT

For example, if 12 of the normally 62 unreserved AMP Worker


Tasks are removed from the pool by reserving them for expedited
work, there could at most be 50 unreserved AMP Worker Tasks
available. If in this log period, 10 unreserved AMP Worker Tasks are
taken from the pool to service 10 queries that are still executing,
there would be only 40 available at the end of the log period.
AvailableMin

min

Minimum number of unreserved AMP Worker Tasks available in


the pool for each AMP for the logged period.

FLOAT

For example, a zero value means there were no unreserved AMP


Worker Tasks available in the pool at some point during the
reporting period.
AwtLimit

track

Current AMP Worker Task limit setting (for example, 80 or 100) in
the DBS Control MaxLoadAWT field. For more information,
see MaxLoadAWT in Utilities.

FLOAT

FlowControlled

track

Specifies if an AMP is in flow control. If the value is non-zero, the


AMP is in flow control.

FLOAT

Resource usage indicates flow control on work mailbox name 2-11


only. Work mailbox name 2-11 is the incoming mailbox for AMPs,
where 2 is the mailbox number and 11 is the AMP Worker Task
partition.
However, resource usage indicates flow control on work mailbox
name 2-11 for all work types. For example, if any work type is in
flow control, the system is in flow control.
FlowCtlCnt

count

Number of times during the log period that the system entered the
flow control state from a non-flow controlled state.

FLOAT


FlowCtlTime

count

Total time, in milliseconds, that an AMP is in flow control.

FLOAT
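Because FlowCtlTime is reported in milliseconds and the row's CentiSecs gives the interval length, the two can be combined to estimate what fraction of the log period an AMP spent in flow control. This is an illustrative sketch of my own; it assumes both values come from the same row.

```python
def flow_control_pct(flow_ctl_time_ms, centisecs):
    """Percent of the log period an AMP spent in flow control,
    from FlowCtlTime (milliseconds) and CentiSecs (centiseconds).
    One centisecond is 10 milliseconds."""
    interval_ms = centisecs * 10
    if interval_ms == 0:
        return 0.0
    return 100.0 * flow_ctl_time_ms / interval_ms
```

For example, 6000 ms of flow control in a 600-second (60000-centisecond) period is 1.0% of the interval.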

InuseMax

max

Maximum number of AMP Worker Tasks in use at any one time


during the log period.

FLOAT

MailBoxDepth

track

Current depth of the AMP work mailbox at the end of the period.

FLOAT

WorkTypeInuse00 WorkTypeInuse15

track

Current number of AMP Worker Tasks in use during the log period
for each work type for the VprId vproc.

FLOAT

Note: The WorkTypeInuse04 column value is always at least 1 due


to the internal database design for the Control AMP rows in the
ResUsageSawt table.
WorkTypeMax00 WorkTypeMax15

max

Maximum number of AMP Worker Tasks in use at one time during


the log period for each work type for the VprId vproc.

FLOAT

In Summary Mode, the WorkTypeMax column values are the maximum
of the values for all the AMPs.

Work Type Descriptions


The WorkTypeInuse and WorkTypeMax array data columns above each contain 16 Work Type
entries that are described here. For example, WorkTypeInuse00 contains the number of in-use
AMP Worker Tasks that are of Work Type MSGWORKNEW, and WorkTypeInuse01 contains
the values for MSGWORKONE.
These columns allow you to monitor the usage of the AMP Worker Tasks of each work type.
This can be used to determine:

If the usage is close to the maximum values defined.

What type of work the AMP Worker Tasks are doing.

Characteristics of the system during skew conditions or when there are AMP Worker Task
shortages.

Use the tdntune utility to determine the settings for Flow Control. For information on
Expedited Allocation Groups, see the Priority Scheduler (schmon) chapter of Utilities.
Column Name

Mode

Description

Data Type

MSGWORKNEW

n/a

Used for new work requests. This work type has the lowest number,
which means it is queued last. It also has the effect of honoring
secondary requests needed to complete existing work items before
any new ones are started.

n/a

A zero value is used for new work items.


MSGWORKONE

n/a

First level secondary work items. Numbered work types are used for
secondary work items. For example, work type one
(MSGWORKONE) is used for secondary work requests spawned by
new work items; work type two (MSGWORKTWO) requests are
spawned from work type one requests and queued for delivery
before work type one requests; and so on. Each numbered work
type is queued for delivery just before the one from which it is
spawned.

n/a

MSGWORKTWO

n/a

Second level secondary work items.

n/a

MSGWORKTHREE

n/a

Special types of database work.

n/a

MSGWORKFOUR

n/a

Start System Recover.

n/a

MSGWORKFIVE

n/a

Reports new work for utilities, such as FastLoad, MultiLoad, and


FastExport if utilities are configured to use a separate pool of work
types.

n/a

Note: This column is not normally used and the MSGWORKNEW,


MSGWORKONE, and MSGWORKTWO columns report work
requests for utilities.
MSGWORKSIX

n/a

First level secondary work spawned for utilities such as
FastLoad, MultiLoad, and FastExport. If the utilities are not
configured to use a separate pool of work types, they use
MSGWORKNEW, MSGWORKONE, and MSGWORKTWO.

n/a

MSGWORKSEVEN

n/a

Second level secondary work for utilities such as FastLoad,


MultiLoad, and FastExport. If the utilities are not configured to use
a separate pool of work types, they use MSGWORKNEW,
MSGWORKONE, and MSGWORKTWO.

n/a

MSGWORKEIGHT

n/a

New work for Expedited Allocation Groups.

n/a

MSGWORKNINE

n/a

First level spawned work for Expedited Allocation Groups.

n/a

MSGWORKTEN

n/a

Second level spawned work for Expedited Allocation Groups.

n/a

MSGWORKELEVEN

n/a

Not used.

n/a

MSGWORKABORT

n/a

Used for transaction abort requests. This work type has a higher
value than the numbered work types so that abort requests are
honored before beginning any additional work item for the
transactions being aborted.

n/a

The array number for MSGWORKABORT is 12.


MSGWORKSPAWN

n/a

Used for spawned abort requests and is delivered before normal


aborts.

n/a

The array number for MSGWORKSPAWN is 13.


MSGWORKNORMAL

n/a

Used for messages that do not fall within the standard work type
hierarchy. This work type is delivered before any of the work items
described above.

n/a

The array number for MSGWORKNORMAL is 14.


MSGWORKCONTROL

n/a

Used for system control messages. These are delivered before any
other kind of message.

n/a

The array number for MSGWORKCONTROL is 15.

Reserved Column
Column Name

Mode

Description

Data Type

Reserved

n/a

Note: This column is not used.

CHAR (1)

Summary Mode
When Summary Mode is active for the ResUsageSawt table, one row is written to the database
for each node in the system for each log interval. The AMP Worker Task data will be combined
for all the AMP vprocs on the node.
You can determine if a row is in Summary Mode by checking the SummaryFlag column for
that row.
If the SummaryFlag column value is 'S', the data for that row is being logged in Summary
Mode. If the value is 'N', the data is being logged normally.

Spare Columns
The ResUsageSawt table spare fields are named Spare00 through Spare19, and SpareInt.
The SpareInt field has a 32-bit internal resolution while all other spare fields have a 64-bit
internal resolution. All spare fields default to count data types but can be converted to min,
max, or track type data fields if needed when they are used.
The following table describes the Spare field currently being used.

Column Name

Description

Spare00

WM CPU COD value in one tenths of a percent. For example, a value


of 500 represents a WM CPU COD value of 50.0%.
The value is set to 1000 if the WM CPU COD is disabled.
Note: This field will be converted to the WM_CPU_COD column in
Teradata Database 15.0. You can access resource usage data for this
field using the WM_CPU_COD column name in the ResSawtView
view. For details, see ResSawtView on page 168.
Note: WM CPU COD is not supported on SLES 10. Its value is set to
1000 on SLES 10.

Related Topics
For details on the different types of data fields, see About the Mode Column on page 42.


CHAPTER 8

ResUsageShst Table

The ResUsageShst table:

Contains resource usage information specific to the host channels and TCP/IP networks
communicating with Teradata Database.

Includes resource usage data for system-wide, host information.

Note: This table is created as a MULTISET table. For more information see Relational
Primary Index on page 37.

Housekeeping Columns
Relational Primary Index Columns
These columns taken together form the nonunique primary index.
Column Name

Mode

Description

Data Type

TheDate

n/a

Date of the log entry.

DATE

TheTime

n/a

Nominal time of the log entry.

FLOAT

Note: Under conditions of heavy system load, entries may be


logged late (typically, by no more than one or two seconds), but this
column will still contain the time value when the entry should have
been logged. For more information, see the Secs and NominalSecs
columns.
NodeID

n/a

Node ID on which the vproc resides. The Node ID is formatted as


CCC-MM, where CCC denotes the three-digit cabinet number and
MM denotes the two-digit chassis number of the node. For
example, a node in chassis 9 of cabinet 3 has a node ID of '003-09'.

INTEGER

Note: SMP nodes have a chassis and cabinet number of 1. For


example, the node ID of an SMP node is '001-01'.

Miscellaneous Housekeeping Columns


These columns provide statistics on current logging characteristics.
Column Name

Mode

Description

Data Type

GmtTime

n/a

Greenwich Mean Time is not affected by the Daylight Savings Time


adjustments that occur twice a year.

FLOAT


NodeType

n/a

Type of node, representing the per node system family type. For
example, 5600C or 5555H.

CHAR(8)

VprId

n/a

Identifies the vproc number.

INTEGER

For channel-connected hosts, the VprId value for:


IBMECS is the second 4 bytes from the IP address of the ECS
node (see ResShstView on page 170).
IBMMUX is zero.
For TCP/IP network-connected hosts, the VprId value for:
NETWORK is the Gateway vproc ID.
IBMNET is zero.
In Summary Mode, VprId is -1.
HstId

n/a

Identifies the host. The HstId value for:

INTEGER

IBMECS is cua/lcu/lchan/laddr, where:


cua = Channel unit address
lcu = Logical control unit address
lchan = Logical channel
laddr = Link address
IBMNET is the host group ID.
NETWORK and IBMMUX is BBMMPPHHH, where:
BB = Bus
MM = Module Number (or chassis number)
Note: The chassis number is always zero for network-connected hosts.
PP = Port
HHH = Three digit host group ID
Note: Each of the fields above get two or three decimal digits of
the resulting nine digit value.
In Summary Mode, HstId is always 0.
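The BBMMPPHHH layout above can be decoded positionally once the nine-digit value is zero-padded. This is an illustrative sketch of my own, not a documented API; it assumes the two-or-three-digit field widths stated above (BB, MM, PP, HHH).

```python
def decode_hstid(hstid):
    """Split a NETWORK or IBMMUX HstId of the form BBMMPPHHH into
    its bus, module (chassis), port, and host group ID components."""
    s = f"{hstid:09d}"          # zero-pad to nine decimal digits
    return {
        "bus": int(s[0:2]),
        "module": int(s[2:4]),
        "port": int(s[4:6]),
        "host_group": int(s[6:9]),
    }
```

For example, a HstId of 102030045 decodes to bus 10, module 20, port 30, host group 45.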
HstType

n/a

Type of host connection:

CHAR(8)

IBMECS (Part TCP/IP and channel hardware connection)


IBMMUX (Channel hardware connection)
IBMNET (TCP/IP end-to-end, also called pure network,
connection)
NETWORK (Gateway channel connection)


IPaddr

n/a

Identifies the IP address.

INTEGER

For channel-connected hosts, the IPaddr value for:


IBMECS is the first 4 bytes from the IP address of the ECS node.
For more information, see the HostId column.
IBMMUX is zero.
For TCP/IP network-connected hosts, the IPaddr value for:
NETWORK is zero.
IBMNET is zero.
Note: The IP address values (that is, the IPaddr and VprId
columns) for the IBMNET connection are displayed as a 32-bit
integer, but to evaluate them as an IP address they need to be
considered one byte at a time. You can do this by using the SQL
commands listed in the ResShstView view. For details, see
ResShstView on page 170.
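The byte-at-a-time evaluation described above can be sketched outside SQL as well. This is an illustrative helper of my own (the ResShstView SQL is the authoritative method); it assumes the high-order byte of the 32-bit value is the first octet.

```python
def int_to_dotted_quad(addr):
    """Render a 32-bit IPaddr column value one byte at a time,
    high-order byte first, as a dotted-quad string."""
    addr &= 0xFFFFFFFF
    return ".".join(str((addr >> shift) & 0xFF) for shift in (24, 16, 8, 0))
```

For example, the integer 0xC0A80001 renders as '192.168.0.1'.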
Secs

n/a

Actual number of seconds in the log period represented by this row.


This value is useful for normalizing the statistics contained in this
row, for example, to a per-second measurement.

SMALLINT

CentiSecs

n/a

Number of centiseconds in the logging period. This column is


useful when performing data calculations with small elapsed times
where the difference between centisecond-based data and whole
seconds results in a percentage error.

INTEGER

NominalSecs

n/a

Specified or nominal number of seconds in the logging period.

SMALLINT

SummaryFlag

n/a

Summarization status of this row. If the value is 'N,' the row is a


non-summary row. If the value is 'S,' the row is a summary row.

CHAR (1)

Active

max

Controls whether or not the rows will be logged to the resource


usage tables if Active Row Filter Mode is enabled.

FLOAT

If Active is set to a non-zero value, the row contains data columns.


If Active is set to a zero value, none of the data columns in the row
have been updated during the logging period.
For example, if you enable Active Row Filter Mode, the rows that
have a zero Active column value will not be logged to the resource
usage tables.
TheTimestamp

n/a

Number of seconds since midnight, January 1, 1970.

BIGINT

This column is useful for aligning data with the DBQL log.
CODFactor

n/a

PM CPU COD value in one tenths of a percent. For example, a


value of 500 represents a PM CPU COD value of 50.0%.

SMALLINT

The value is set to 1000 if the PM CPU COD is disabled.

Statistics Columns
Host Controller Columns
Channel Management Columns
These columns identify overhead of channel management.
Column Name

Mode

Description

Data Type

HostReadFails

count

Number of failures transmitting from the host.

FLOAT

Note: This column is populated for Teradata Channel software


(TCHN) only.
HostWriteFails

count

Number of failures transmitting to the host.

FLOAT

Note: This column is populated for TCHN only.

Channel Traffic Columns


These columns identify the traffic between the host and the node in three levels of granularity:

Blocks

Messages

KB

Blocks are made up of some amount of variable sized messages. ReadKB and WriteKB identify
the KB involved in the traffic.
For information on the ResUsageSpma table channel and TCP/IP network traffic columns, see
Host Controller Channel and TCP/IP Network Traffic Columns on page 55.
Column Name

Mode

Description

Data Type

HostBlockReads

count

Number of blocks read in from the host.

FLOAT

HostBlockWrites

count

Number of blocks written out to the host.

FLOAT

HostMessageReads

count

Number of messages read in from the host.

FLOAT

HostMessageWrites

count

Number of messages written out to the host.

FLOAT

HostReadKB

count

KB transferred in from the host.

FLOAT

HostWriteKB

count

KB transferred out to the host.

FLOAT

User Command Columns


These columns identify the type of commands given to Teradata Database by the user. Three
levels of granularity are given:

Transactions. These columns consist of one or more requests.

Requests. These consist of one or more statements.


Statements. These columns are subdivided into the various statement types.

Column Name

Mode

Description

Data Type

CmdAlterStmts

count

Number of alter, modify, or drop statement commands.

FLOAT

CmdArchUtilityStmts

count

Number of archival utility commands (for example, restore, archive


and recovery).

FLOAT

CmdCreateStmts

count

Number of create or replace statement commands.

FLOAT

CmdDeleteStmts

count

Number of delete commands.

FLOAT

CmdGrantStmts

count

Number of grant or revoke commands.

FLOAT

CmdInsertStmts

count

Number of insert commands.

FLOAT

CmdLoadUtilityStmts

count

Number of FastLoad and MultiLoad utility commands. (TPump
commands cannot be distinguished, and are therefore counted by
the INSERT, UPDATE, and DELETE statements.)

FLOAT

CmdOtherStmts

count

Number of other commands.

FLOAT

CmdRequests

count

Number of request commands.

FLOAT

CmdSelectStmts

count

Number of select commands.

FLOAT

CmdTransactions

count

Number of transaction commands.

FLOAT

CmdUpdateStmts

count

Number of update commands.

FLOAT

User Command Arrival and Departure Columns


These columns identify the arrival and departure times and status of user commands.
Column Name

Mode

Description

Data Type

CmdStmtErrors

count

Number of statements that departed in error.

FLOAT

CmdStmtFailures

count

Number of statements that departed in failure or were aborted.

FLOAT

CmdStmtSuccesses

count

Number of statements that departed normally.

FLOAT

Reserved Column
Column Name

Mode

Description

Data Type

Reserved

n/a

Note: This column is not used.

CHAR (5)


Summary Mode
When Summary Mode is active for the ResUsageShst table, one row is written to the database
for each type of host (network or channel-connected) on each node in the system,
summarizing the hosts of that type on that node.
You can determine if a row is in Summary Mode by checking the SummaryFlag column for
that row.
If the SummaryFlag column value is 'S', the data for that row is being logged in Summary
Mode. If the value is 'N', the data is being logged normally.

Spare Columns
The ResUsageShst table spare fields are named Spare00 through Spare09, and SpareInt.
The SpareInt field has a 32-bit internal resolution while all other spare fields have a 64-bit
internal resolution. All spare fields default to count data types but can be converted to min,
max, or track type data fields if needed when they are used.
The following table describes the Spare field currently being used.
Column Name

Description

Spare00

WM CPU COD value in one tenths of a percent. For example, a value of 500
represents a WM CPU COD value of 50.0%.
The value is set to 1000 if the WM CPU COD is disabled.
Note: This field will be converted to the WM_CPU_COD column in Teradata
Database 15.0. You can access resource usage data for this field using the
WM_CPU_COD column name in the ResShstView view. For details, see
ResShstView on page 170.
Note: WM CPU COD is not supported on SLES 10. Its value is set to 1000 on
SLES 10.

Related Topics
For details on the different types of data fields, see About the Mode Column on page 42.


CHAPTER 9

ResUsageSldv Table

The ResUsageSldv table contains resource usage information for system-wide, logical device
information. Statistics from this table are collected from the storage devices.
Note: This table is created as a MULTISET table. For more information see Relational
Primary Index on page 37.

Housekeeping Columns
Relational Primary Index Columns
These columns taken together form the nonunique primary index.
Column Name

Mode

Description

Data Type

TheDate

n/a

Date of the log entry.

DATE

TheTime

n/a

Nominal time of the log entry.

FLOAT

Note: Under conditions of heavy system load, entries may be


logged late (typically, by no more than one or two seconds), but this
column will still contain the time value when the entry should have
been logged. For more information, see the Secs and NominalSecs
columns.
NodeID

n/a

Node ID on which the vproc resides. The Node ID is formatted as


CCC-MM, where CCC denotes the three-digit cabinet number and
MM denotes the two-digit chassis number of the node. For
example, a node in chassis 9 of cabinet 3 has a node ID of '003-09'.

INTEGER

Note: SMP nodes have a chassis and cabinet number of 1. For


example, the node ID of an SMP node is '001-01'.

Miscellaneous Housekeeping Columns


These columns provide statistics on current logging characteristics and storage device
information.
Column Name

Mode

Description

Data Type

GmtTime

n/a

Greenwich Mean Time is not affected by the Daylight Savings Time


adjustments that occur twice a year.

FLOAT

NodeType

n/a

Type of node, representing the per node system family type. For
example, 5600C or 5555H.

CHAR(8)


CtlId

n/a

Controller number.

INTEGER

The value is the decimal equivalent of the three digit controller ID


in the LdvId (the HHC digits).
If the controller information is not available, its value is set to -1.
In Summary Mode, the CtlId is set to -1.
LdvId

n/a

Identifies the storage device and the bus system where it resides.

BYTE(4)

If the device address information is available, the LdvId is derived


from the Host, Channel, Id, and Lun information of the device
(00HHCTLL).
If the device address information is not available, this column
contains the null ID which is 0xFFFFFFFF.
In Summary Mode, the value is set to 0xFFFFFFFF.
LdvType

n/a

Type of logical device. The value is either DISK for database disk or
SDSK for system disk.

CHAR(4)

Secs

n/a

Actual number of seconds in the log period represented by this row.
Normally the same as NominalSecs, but can be different in three
cases:
The first interval after a log rate change
A sample logged late because of load on the system
System clock adjustments that affect the reported Secs
Useful for normalizing the count statistics contained in this row, for
example, to a per-second measurement.

SMALLINT
CentiSecs

n/a

Number of centiseconds in the logging period. This column is


useful when performing data calculations with small elapsed times
where the difference between centisecond-based data and whole
seconds results in a percentage error.

INTEGER

NominalSecs

n/a

Specified or nominal number of seconds in the logging period.

SMALLINT

SummaryFlag

n/a

Summarization status of this row. If the value is 'N,' the row is a


non-summary row. If the value is 'S,' the row is a summary row.

CHAR (1)

Active

max

Controls whether or not the rows will be logged to the resource


usage tables if Active Row Filter Mode is enabled.

FLOAT

If Active is set to a non-zero value, the row contains data columns.


If Active is set to a zero value, none of the data columns in the row
have been updated during the logging period.
For example, if you enable Active Row Filter Mode, the rows that
have a zero Active column value will not be logged to the resource
usage tables.
TheTimestamp

n/a

Number of seconds since midnight, January 1, 1970.

BIGINT

This column is useful for aligning data with the DBQL log.


CODFactor

n/a

PM CPU COD value in one tenths of a percent. For example, a value
of 500 represents a PM CPU COD value of 50.0%.
The value is set to 1000 if the PM CPU COD is disabled.

SMALLINT

Statistics Columns
Logical Device Columns
These columns identify individual logical device activities for storage components connected
through the buses.
The storage device statistics are calculated only from statistics
collected by the operating system, because the disk array controllers do not provide any
useful resource usage data.
The logical device columns are grouped into several subcategories as shown below.
Input and Output Traffic Columns
These columns represent the number and amount, in KB, of data read from or written to the
logical device.
Column Name

Mode

Description

Data Type

LdvReadKB

count

Number of KB read from the logical device.

FLOAT

LdvReads

count

Number of reads issued.

FLOAT

LdvWriteKB

count

Number of KB written to the logical device.

FLOAT

LdvWrites

count

Number of writes issued.

FLOAT

Response Time Columns


These columns represent the response time to requests given to the logical device.
Column Name

Mode

Description

Data Type

LdvReadRespMax

max

WM CPU COD value in one-tenths of a percent. For example, a
value of 500 represents a WM CPU COD value of 50.0%.

FLOAT

The value is set to 1000 if the WM CPU COD is disabled.


Note: The LdvReadRespMax field is being used as a spare field to
represent the WM CPU COD and will be converted to the
WM_CPU_COD column in Teradata Database 15.0. For details, see
ResSldvView on page 171.
Note: WM CPU COD is not supported on SLES 10. Its value is set
to 1000 on SLES 10.


LdvWriteRespMax

max

WM I/O COD value in whole percent. For example, a value of 50


represents a WM I/O COD value of 50.0%.

FLOAT

The value is set to 100 if the WM I/O COD is disabled.


Note: The LdvWriteRespMax field is being used as a spare field to
represent the WM I/O COD and will be converted to the
WM_IO_COD column in Teradata Database 15.0. For details, see
ResSldvView on page 171.
Note: WM I/O COD is not supported on SLES 10. Its value is set to
100 on SLES 10.
LdvReadRespTot

count

Total of individual read response times in centiseconds.

FLOAT

LdvWriteRespTot

count

Total of individual write response times in centiseconds.

FLOAT

ReadActiveTotal

count

Total of read I/O active time, in centiseconds.

FLOAT

Note: This column is not currently valid. It should not be used.


WriteActiveTotal

count

Total of write I/O active time, in centiseconds.

FLOAT

Note: This column is not currently valid. It should not be used.

Outstanding Requests Columns


These columns represent the number of outstanding operation requests and the amount of
time with outstanding requests for the logical device.
Column Name

Mode

Description

Data Type

QReadLength

track

Current number of read operations in queue.

FLOAT

Note: This column is not currently valid. It should not be used.


QWriteLength

track

Current number of write operations in queue.

FLOAT

Note: This column is not currently valid. It should not be used.


LdvOutReqDiv

count

Average value calculated by the ResSldvView view as shown below.

FLOAT

LdvOutReqAvg = LdvOutReqSum / LdvOutReqDiv


LdvOutReqSum

count

Time-weighted total of samples in centiseconds spent doing I/O.


LdvOutReqDiv contains the divisor, which is the delta time in
centiseconds for the sample.

FLOAT

LdvOutReqTime

count

Total time in centiseconds with (any) outstanding requests. The


values in this column should be less than or equal to the reported
logging period.

FLOAT
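The LdvOutReqAvg calculation performed by the ResSldvView view (LdvOutReqAvg = LdvOutReqSum / LdvOutReqDiv) can be sketched in Python; the function name is hypothetical, and the zero-divisor guard is an assumption for samples with no elapsed time:

```python
def ldv_out_req_avg(ldv_out_req_sum, ldv_out_req_div):
    """Average number of outstanding requests over a sample,
    mirroring the ResSldvView calculation
    LdvOutReqAvg = LdvOutReqSum / LdvOutReqDiv.
    LdvOutReqSum is a time-weighted total in centiseconds;
    LdvOutReqDiv is the delta time of the sample in centiseconds."""
    if ldv_out_req_div == 0:
        return 0.0  # no elapsed time in the sample (assumed guard)
    return ldv_out_req_sum / ldv_out_req_div

# 3000 centiseconds of time-weighted outstanding-request time over a
# 1000-centisecond interval averages 3 outstanding requests.
print(ldv_out_req_avg(3000.0, 1000.0))
```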

Reserved Column
Column Name

Mode

Description

Data Type

Reserved

n/a

Type of storage device.

CHAR (1)

If the value is 'S', this indicates the storage device is a database


disk and a solid state storage device (SSD).
If the value is 'H,' this indicates the storage device is a hard disk
drive (HDD).
In Summary Mode, the Reserved column is set to an en dash
character ('-'), which indicates that information about the
storage device is not available.
Note: The Reserved field will be converted to the LdvKind
column in Teradata Database 15.0. If using the ResSldvView
view, the resource usage data can be accessed using the LdvKind
column name instead of the Reserved name. For details, see
ResSldvView on page 171.

Summary Mode
When Summary Mode is active for the ResUsageSldv table, the following rows are written to
the database for each node in the system for each log interval:

One row summarizes the system logical devices

One row summarizes Teradata Database logical devices

You can determine if a row is in Summary Mode by checking the SummaryFlag column for
that row.
IF the SummaryFlag column value is

THEN the data for that row is being logged

'S'

in Summary Mode.

'N'

normally.

Spare Columns
The ResUsageSldv table spare fields are named Spare00 through Spare04, and SpareInt.
The SpareInt field has a 32-bit internal resolution while all other spare fields have a 64-bit
internal resolution. All spare fields default to count data types but can be converted to min,
max, or track type data fields if needed when they are used.
The following table describes the Spare fields currently being used.

Column Name

Description

Spare00

Major number of the device.


In Summary mode, this field contains the max of the major numbers of the
devices being summed. It is not paired with the minor number reported in
the Spare01 column.
Note: This field will be converted to the Major column in Teradata Database
15.0. You can access the resource usage data for this field using the Major
column name in the ResSldvView view. For details, see ResSldvView on
page 171.

Spare01

Minor number of the device.


In Summary mode, this field contains the max of the minor numbers of the
devices being summed. It is not paired with the major number reported in
the Spare00 column.
Note: This field will be converted to the Minor column in Teradata Database
15.0. You can access the resource usage data for this field using the Minor
column name in the ResSldvView view. For details, see ResSldvView on
page 171.

Spare02

Full potential input or output token allocations for a device.


The total input or output token allocations are not limited by the IO_COD
field setting.
Note: This field will be converted to the FullPotentialIota column in
Teradata Database 15.0. You can access the resource usage data for this field
using the FullPotentialIota column name in the ResSldvView view. For
details, see ResSldvView on page 171.

Spare03

Potential input or output token allocations for a device.


The potential input or output token allocations are accumulated only when
an I/O is pending on a device and are limited by the IO_COD column
setting.
Note: This field will be converted to the CodPotentialIota column in
Teradata Database 15.0. You can access the resource usage data for this field
using the CodPotentialIota column in the ResSldvView view. For details, see
ResSldvView on page 171.

Spare04

Used input or output token allocations for a device.


Note: This field will be converted to the UsedIota column in Teradata
Database 15.0. You can access the resource usage data for this field using the
UsedIota column name in the ResSldvView view. For details, see
ResSldvView on page 171.

SpareInt

PM I/O COD value in whole percent. For example, a value of 50 represents a
PM I/O COD value of 50.0%.
The value is set to 100 if the PM I/O COD is disabled.
Note: This field will be converted to the PM_IO_COD column in Teradata
Database 15.0. You can access resource usage data for this field using the
PM_IO_COD column name in the ResSldvView view. For details, see
ResSldvView on page 171.

Related Topics
For details on the different type of data fields, see About the Mode Column on page 42.

CHAPTER 10

ResUsageSpdsk Table

The ResUsageSpdsk table:

Provides pdisk level statistics.

Includes resource usage logs on cylinder I/O, allocation, and migration.

Note: This table is created as a MULTISET table. For more information, see Relational
Primary Index on page 37.

Housekeeping Columns
Relational Primary Index Columns
These columns taken together form the nonunique primary index.
Column Name

Mode

Description

Data Type

TheDate

n/a

Date of the log entry.

DATE

TheTime

n/a

Nominal time of the log entry.

FLOAT

Note: Under conditions of heavy system load, entries may be


logged late (typically, by no more than one or two seconds), but this
field will still contain the time value when the entry should have
been logged. For more information, see the Secs and NominalSecs
columns.
NodeID

n/a

Node ID on which the pdisk is connected. The Node ID is


formatted as CCC-MM, where CCC denotes the three-digit cabinet
number and MM denotes the two-digit chassis number of the node.
For example, a node in chassis 9 of cabinet 3 has a node ID of '003-09'.

INTEGER

Note: SMP nodes have a chassis and cabinet number of 1. For


example, the node ID of an SMP node is '001-01'.

Miscellaneous Housekeeping Columns


These columns provide statistics on current logging characteristics.
Column Name

Mode

Description

Data Type

GmtTime

n/a

Greenwich Mean Time is not affected by the Daylight Savings Time


adjustments that occur twice a year.

FLOAT

PdiskGlobalId

n/a

Identifies the pdisk in the system. Each pdisk in the system has a
global ID which uniquely identifies the pdisk in the system. If a
pdisk is connected to the nodes in a clique, all the nodes in that
clique see the same pdisk global ID associated with that pdisk.

INTEGER

In Summary Mode, the pdisk global ID is -1.


PdiskType

n/a

Type of pdisk. The pdisk can be one of the following:

CHAR(4)

DISK: A storage device.


FILE: A file.
SSD: A solid-state device.
PdiskDeviceId

n/a

Identifies the local pdisk device.

BYTE(4)

For both DISK and SSD pdisks, the pdisk device ID can be a
major/minor number. The major number bit positions are
20-31 and the minor number is in bits 0-19. The format is
similar to the one shown below.
(MMMM MMMM MMMM mmmm mmmm mmmm mmmm mmmm)
For FILE pdisk, the pdisk device ID is 0xFFFFFFFF.
In Summary Mode, the pdisk device ID is 0xFFFFFFFF.
NodeType

n/a

Type of node, representing the per node system family type. For
example, 5600C or 5555H.

CHAR(8)

Secs

n/a

Actual number of seconds in the log period represented by this row.


Normally the same as NominalSecs, but can be different in three
cases:

SMALLINT

The first interval after a log rate change


A sample logged late because of load on the system
System clock adjustments affect reported Secs
Useful for normalizing the count statistics contained in this row, for
example, to a per-second measurement.
CentiSecs

n/a

Number of centiseconds in the logging period. This field is useful


when performing data calculations with small elapsed times where
the difference between centisecond-based data and whole seconds
results in a percentage error.

INTEGER

NominalSecs

n/a

Specified or nominal number of seconds in the logging period.

SMALLINT

SummaryFlag

n/a

Summarization status of this row. If the value is 'N,' the row is a


non-summary row. If the value is 'S,' the row is a summary row.

CHAR (1)

Active

max

Controls whether or not the rows will be logged to the resource


usage tables if Active Row Filter Mode is enabled.

FLOAT

If Active is set to a non-zero value, the row contains data columns.


If Active is set to a zero value, none of the data columns in the row
have been updated during the logging period.
For example, if you enable Active Row Filter Mode, the rows that
have a zero Active column value will not be logged to the resource
usage tables.


TheTimestamp

n/a

Number of seconds since midnight, January 1, 1970.

BIGINT

This column is useful for aligning data with the DBQL log.
CODFactor

n/a

PM CPU COD value in one-tenths of a percent. For example, a value
of 500 represents a PM CPU COD value of 50.0%.

SMALLINT

The value is set to 1000 if the PM CPU COD is disabled.
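The PdiskDeviceId major/minor packing described above (major number in bits 20-31, minor number in bits 0-19) can be decoded with simple bit operations. This Python sketch is illustrative only; the helper name is hypothetical:

```python
def decode_pdisk_device_id(device_id):
    """Split a DISK or SSD PdiskDeviceId into its (major, minor) pair.
    Bits 20-31 hold the major number and bits 0-19 the minor number,
    per the (MMMM MMMM MMMM mmmm ...) layout. FILE pdisks and
    Summary Mode rows use the sentinel 0xFFFFFFFF instead."""
    if device_id == 0xFFFFFFFF:
        return None  # not a device-backed pdisk
    major = (device_id >> 20) & 0xFFF   # 12 major-number bits
    minor = device_id & 0xFFFFF         # 20 minor-number bits
    return major, minor

# Major 8, minor 17 packs to (8 << 20) | 17 = 0x800011.
print(decode_pdisk_device_id(0x800011))
```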

Statistics Columns
Teradata VS Columns
Allocation Columns
These columns identify the allocation statistics reported by the Allocator process.
Column Name

Mode

Description

Data Type

ExtAllocHot

count

Number of hot allocations made in the current log period. A hot


allocation is an allocation whose estimated temperature falls within
the pre-defined hot temperature range. Each allocation is for a
cylinder size worth of data. The cylinder resides in some disk storage
location and holds some data. Temperature is the frequency of access
to the data by I/O independent of where the data resides.

FLOAT

ExtAllocNonPacing

count

Number of non-pacing allocations made in the current log period. A


non-pacing allocation is an allocation whose data access affects
neither system performance nor individual query performance.

FLOAT

ExtAllocStatic

count

Number of static allocations made in the current log period. A static


allocation is an allocation whose requested temperature is used and
the measured temperature is ignored during migration.

FLOAT

ExtAllocSystemPacing

count

Number of system pacing allocations made in the current log period.


A system pacing allocation is an allocation whose data access affects
system performance.

FLOAT

ExtAllocTotal

count

Total number of allocations made in the current log period. A number


of computations can be derived. For example:

FLOAT

Cold Allocation = ExtAllocTotal - ExtAllocHot - ExtAllocWarm

Query Pacing Allocation = ExtAllocTotal - ExtAllocNonPacing - ExtAllocSystemPacing
Dynamic Allocation = ExtAllocTotal - ExtAllocStatic
ExtAllocWarm

count

Number of warm allocations made in the current log period. A warm
allocation is an allocation whose estimated temperature falls within
the pre-defined warm temperature range. Each allocation is for a
cylinder size worth of data. The cylinder resides in some disk storage
location and holds some data. Temperature is the frequency of access
to the data by I/O independent of where the data resides.

FLOAT
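The derived allocation counts described for ExtAllocTotal can be computed directly from the allocation columns. A minimal Python sketch (the function name and return shape are illustrative assumptions):

```python
def derived_allocations(ext_alloc_total, ext_alloc_hot, ext_alloc_warm,
                        ext_alloc_non_pacing, ext_alloc_system_pacing,
                        ext_alloc_static):
    """Derive cold, query-pacing, and dynamic allocation counts for
    one log period from the ResUsageSpdsk allocation columns."""
    cold = ext_alloc_total - ext_alloc_hot - ext_alloc_warm
    query_pacing = (ext_alloc_total - ext_alloc_non_pacing
                    - ext_alloc_system_pacing)
    dynamic = ext_alloc_total - ext_alloc_static
    return {"cold": cold, "query_pacing": query_pacing, "dynamic": dynamic}

# With 100 total allocations: 20 hot, 30 warm, 10 non-pacing,
# 5 system pacing, and 40 static:
# cold = 100 - 20 - 30, query_pacing = 100 - 10 - 5, dynamic = 100 - 40
print(derived_allocations(100, 20, 30, 10, 5, 40))
```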


I/O Statistics Columns


Note: You can populate the ResUsageSpdsk table I/O statistics columns without having a
Teradata VS license.
These columns identify the I/O statistics reported by the Extent Driver.
Column Name

Mode

Description

Data Type

ConcurrentMax

count

Maximum number of concurrent I/O requests.

FLOAT

ConcurrentReadMax

max

Maximum number of concurrent read I/O requests.

FLOAT

ConcurrentWriteMax

max

Maximum number of concurrent write I/O requests.

FLOAT

MigrationBlockedIos

count

Number of inputs and outputs that are blocked due to migration


request.

FLOAT

ReadCnt

count

Number of logical device reads.

FLOAT

ReadKB

count

Number of KB read from the logical device.

FLOAT

ReadRespMax

max

Maximum of individual read response time in centiseconds.

FLOAT

ReadRespSq

count

Total of squares of the individual read response time in


centiseconds.

FLOAT

ReadRespTot

count

Total of individual read response time in centiseconds.

FLOAT

WriteCnt

count

Number of logical device writes.

FLOAT

WriteKB

count

Number of KB written to the logical device.

FLOAT

WriteRespMax

max

Maximum of individual write response time in centiseconds.

FLOAT

WriteRespSq

count

Total of squares of the individual write response time in


centiseconds.

FLOAT

WriteRespTot

count

Total of individual write response time in centiseconds.

FLOAT
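The count, total, and sum-of-squares columns above support per-period response time statistics. The following Python sketch derives the mean and standard deviation for reads; using E[x^2] - E[x]^2 for the variance is an assumption about how the sum-of-squares column is intended to be used, and the helper name is hypothetical:

```python
def read_response_stats(read_cnt, read_resp_tot, read_resp_sq):
    """Average and standard deviation of read response time
    (centiseconds) for one log period, from the ReadCnt,
    ReadRespTot, and ReadRespSq columns."""
    if read_cnt == 0:
        return 0.0, 0.0  # no reads in the period
    mean = read_resp_tot / read_cnt
    # Clamp at zero to absorb floating-point rounding.
    variance = max(read_resp_sq / read_cnt - mean * mean, 0.0)
    return mean, variance ** 0.5

# Four reads of 1, 2, 3, and 4 centiseconds:
# ReadRespTot = 10, ReadRespSq = 1 + 4 + 9 + 16 = 30,
# so the mean is 2.5 and the variance is 30/4 - 6.25 = 1.25.
mean, stddev = read_response_stats(4, 10.0, 30.0)
print(mean, stddev)
```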

Migration Columns
Note: The ResUsageSpdsk table migration columns are populated only in Teradata VS. For
more information, see Teradata Virtual Storage.
The migration columns identify the number of cylinders that migrated to a different location
on a device as well as the time, in centiseconds, of all migration I/Os used, incurred, or saved
during the log period.
Note: Each allocation is for a cylinder size worth of data, also known internally in the
allocator as an extent. Thus the column names begin with Ext for extent.


Column Name

Mode

Description

Data Type

ExtMigrateFaster

count

Number of cylinders migrated to a faster location on a device. This


count is for cylinders that were allocated on this device and
migrated to a different location within the same device or migrated
to a completely different device.

FLOAT

The following formula calculates an ExtMigrateSlower value, which
is the number of cylinders migrated to slower locations:
ExtMigrateSlower = ExtMigrateTotal - ExtMigrateFaster.
ExtMigrateIOTimeBenefit

count

Estimates the total I/O time savings achieved by migrations


completing in the log period. The I/O time savings include the
improvement in response time caused by the new data arrangement
up to the time horizon. ExtMigrateIOTimeBenefit does not include
the cost of the migration I/Os and is a gross benefit, not a net
benefit.

FLOAT

ExtMigrateIOTimeCost

count

Estimates the total cost, in centiseconds, incurred by migration


I/Os completing during the log period, where cost is the extra time
waited by all non-migration I/Os as a result of the migration
I/O.

FLOAT

ExtMigrateIOTimeImprove

count

Estimates the percent improvement in average I/O response time


due to migrations completing in the log interval. For example, if,
right before a particular log interval, the average I/O response time
was 10 milliseconds (ms), then the migration logs an
ExtMigrateIOTimeImprove value of 10% in this interval. The
average I/O response time after the log interval should be
(100% - 10%) * 10 ms = 9 ms. Migration then logs an
ExtMigrateIOTimeImprove of 1% in the next interval. The average
I/O response time in the new log interval is (100% - 1%) * 9 ms =
8.91 ms.

FLOAT

ExtMigrateIOTimeImprove is only an estimate. Its permanent


improvement remains in effect as long as the workload does not
change and newer migrations do not significantly alter the data
arrangement.
When the workload changes or new migrations affect data
arrangement, response time changes in an unquantifiable way.
Despite this, ExtMigrateIOTimeImprove is useful because it
predicts actual system performance at least for short periods of
time and can be used to understand why the migration algorithm is
doing what it is doing.
ExtMigrateReadRespTot

count

Migration read I/O response time.

FLOAT

ExtMigrateWriteRespTot

count

Migration write I/O response time.

FLOAT

ExtMigrateTotal

count

Total number of cylinders migrated to a different physical location.


For more information, see the ExtMigrateFaster field.

FLOAT
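The ExtMigrateSlower formula and the chained ExtMigrateIOTimeImprove example above can be sketched in Python. Both function names are hypothetical; the chaining logic simply applies each interval's percent improvement to the running average response time:

```python
def ext_migrate_slower(ext_migrate_total, ext_migrate_faster):
    """Cylinders migrated to slower locations, per
    ExtMigrateSlower = ExtMigrateTotal - ExtMigrateFaster."""
    return ext_migrate_total - ext_migrate_faster

def apply_improvements(initial_resp_ms, improve_pcts):
    """Chain successive ExtMigrateIOTimeImprove percentages onto an
    average I/O response time, as in the worked example:
    10 ms improved by 10% and then by 1% gives about 8.91 ms."""
    resp = initial_resp_ms
    for pct in improve_pcts:
        resp *= (100.0 - pct) / 100.0
    return resp

print(ext_migrate_slower(120, 75))        # 120 - 75 = 45 cylinders
print(apply_improvements(10.0, [10, 1]))  # about 8.91 ms
```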


Reserved Column
Column Name

Mode

Description

Data Type

Reserved

n/a

Note: This column is not used.

CHAR (1)

Summary Mode
When Summary Mode is active for the ResUsageSpdsk table, rows are summarized into a
single row for each pdisk type (for example, DISK, FILE, or SSD) for each node in the system
per log interval.
You can determine if a row is in Summary Mode by checking the SummaryFlag column for
that row.
IF the SummaryFlag column value is

THEN the data for that row is being logged

'S'

in Summary Mode.

'N'

normally.

Spare Columns
The ResUsageSpdsk table spare fields are named Spare00 through Spare19, and SpareInt.
The SpareInt field has a 32-bit internal resolution while all other spare fields have a 64-bit
internal resolution. All spare fields default to count data types but can be converted to min,
max, or track type data fields if needed when they are used.
The following table describes the Spare fields currently being used.
Column Name

Description

Spare00

WM CPU COD value in one tenths of a percent. For example, a value


of 500 represents a WM CPU COD value of 50.0%.
The value is set to 1000 if the WM CPU COD is disabled.
Note: This field will be converted to the WM_CPU_COD column in
Teradata Database 15.0. You can access resource usage data for this
field using the WM_CPU_COD column name in the ResSpdskView
view. For details, see ResSpdskView on page 172.
Note: WM CPU COD is not supported on SLES 10. Its value is set to
1000 on SLES 10.


Spare01

WM I/O COD value in whole percent. For example, a value of 50


represents a WM I/O COD value of 50.0%.
The value is set to 100 if the WM I/O COD is disabled.
Note: This field will be converted to the WM_IO_COD column in
Teradata Database 15.0. You can access resource usage data for this
field using the WM_IO_COD column name in the ResSpdskView
view. For details, see ResSpdskView on page 172.
Note: WM I/O COD is not supported on SLES 10. Its value is set to
100 on SLES 10.

SpareInt

PM I/O COD value in whole percent. For example, a value of 50


represents a PM I/O COD value of 50.0%.
The value is set to 100 if the PM I/O COD is disabled.
Note: This field will be converted to the PM_IO_COD column in
Teradata Database 15.0. You can access resource usage data for this
field using the PM_IO_COD column name in the ResSpdskView
view. For details, see ResSpdskView on page 172.

Related Topics
For details on the different type of data fields, see About the Mode Column on page 42.


CHAPTER 11

ResUsageSps

You can use the ResUsageSps table to:

Get a historical view of workload behavior for utilities and SQL operations.

Determine the number of workload requests that are using AMP Worker Task (see the
NumRequests column for details).

Examine queue wait and service time numbers to find backed up queries and allocation
groups.

Determine which workload is responsible for I/O skew.

Monitor CPU usage managed by the Priority Scheduler.

Identify the percent of CPU being used by different workloads.

On SLES 10, you can also use the ResUsageSps table to validate the relative
weights given to workloads.
For a complete description of the Priority Scheduler and its components, see the
Priority Scheduler (schmon) chapter in Utilities.
If you are running SLES 10 or earlier and using TASM, each WD is the
equivalent of one PG ID (PGId). If table logging is enabled on ResUsageSps, a
row is written to the database once for every triplet of Vproc Type (VprType),
PG ID, and Performance Period ID (VprType, PGId, PPId) on each node in the
system for each log interval.
If you are running SLES 11 or later and using TASM, each WD is the
equivalent of one Priority Scheduler workload definition ID (pWDid). If table
logging is enabled on ResUsageSps, a row is written to the database once for
every pWDid and VprType in the system for each log interval.

For more information on the PGId and pWDid columns, see the RowIndex1 column.
Note: This table is created as a MULTISET table.

Housekeeping Columns
Relational Primary Index Columns
These columns taken together form the nonunique primary index.


Column Name

Mode

Description

Data Type

TheDate

n/a

Date of the log entry.

DATE

TheTime

n/a

Nominal time of the log entry.

FLOAT

Note: Under conditions of heavy system load, entries may be


logged late (typically, by no more than one or two seconds), but this
column will still contain the time value when the entry should have
been logged. For more information, see the Secs and NominalSecs
columns.
NodeID

n/a

Node ID on which the vproc resides. The Node ID is formatted as


CCC-MM, where CCC denotes the three-digit cabinet number and
MM denotes the two-digit chassis number of the node. For
example, a node in chassis 9 of cabinet 3 has a node ID of '003-09'.

INTEGER

Note: SMP nodes have a chassis and cabinet number of 1. For


example, the node ID of an SMP node is '001-01'.

Miscellaneous Housekeeping Columns


These columns provide statistics on current logging characteristics.
Column Name

Mode

Description

Data Type

GmtTime

n/a

Greenwich Mean Time is not affected by the Daylight Savings Time


adjustments that occur twice a year.

FLOAT

NodeType

n/a

Type of node, representing the per node system family type. For
example, 5600C or 5555H.

CHAR(8)

PPId

n/a

On SLES 10 or earlier systems, this column identifies the


performance period. The PPId is a mapping of the internal
performance period value (which ranges from 0 to 7) to an RSS value
(which ranges from 0 to 1). A PPId of 0 maps to the value 0, and a
PPId of 1 maps to the values 1 through 7.
On SLES 11 or later systems, this column is not valid and returns
a zero value.

BYTEINT

VprType

n/a

Type of vproc (for example, AMP, PE, and MISC). Rows reported as
vproc type of MISC contain data for all vproc types other than the
AMP and PE vproc types.

CHAR(4)

AMPCount

n/a

Number of AMPs on the Node. AMPCount is used to divide


columns that are reporting data from all the AMPs. This allows the
ResSpsView view to report the data columns on a per AMP basis.
For an example of the view, see ResSpsView on page 175.

SMALLINT


RowIndex1

n/a

On SLES 10 or earlier systems, this column contains the PGId.


The PGId identifies the PG. There is a one to one mapping
between a PG ID and a WD ID at any point in time.

SMALLINT

The mapping between PG ID and WD ID can be determined by


looking at the Teradata Viewpoint Workload Designer portlet.
The PG ID value ranges from 0 to 250, while the value of a
WD ID is not in a specific range (that is, the value is
incremented and not reused).
On SLES 11 or later systems, this column contains the pWDid.
The pWDid value ranges from 0 to 255.
Note: When running SLES 11 or later systems, the RowIndex1
column displays as PGid, instead of pWDid, in the ResPs macro
outputs. For sample outputs of the ResPs macros, see
Chapter 15: Resource Usage Macros.
WDId

track

WD ID.

FLOAT

The default PGs (System, L, M, H, R) have no associated WD and


will have a WDID of zero in the table. Only PGs with a name that
has PGWL in it have a nonzero WDID.
Note: This column is valid on SLES 11 or later systems only.
Secs

n/a

Actual number of seconds in the log period represented by this row.


Normally, this value is the same as NominalSecs, but can be
different in three cases:

SMALLINT

The first interval after a log rate change


A sample logged late because of load on the system
System clock adjustments affect reported Secs
Useful for normalizing the count statistics contained in this row, for
example, to a per-second measurement.
CentiSecs

n/a

Number of centiseconds in the logging period. This column is


useful when performing data calculations with small elapsed times
where the difference between centisecond-based data and whole
seconds results in a percentage error.

INTEGER

NominalSecs

n/a

Specified or nominal number of seconds in the logging period.

SMALLINT
NCPUs

n/a

Number of CPUs on this node.

SMALLINT

This column is useful for normalizing the CPU utilization column


values for the number of CPUs on the node. This is especially
important in coexistence systems where the number of CPUs can
vary across system nodes.
Active

max

Controls whether or not the rows will be logged to the resource


usage tables when Active Row Filter Mode is enabled.

FLOAT

If Active is set to a non-zero value, the row contains data columns.


If Active is set to a zero value, none of the data columns in the row
have been updated during the logging period.


TheTimestamp

n/a

Number of seconds since midnight, January 1, 1970.

BIGINT

This column is useful for aligning data with the DBQL log.
CODFactor

n/a

PM CPU COD value in one-tenths of a percent. For example, a value
of 500 represents a PM CPU COD value of 50.0%.

SMALLINT

The value is set to 1000 if the PM CPU COD is disabled.


VPId

track

Virtual partition ID.

FLOAT

Only one VpId is associated with a pWDid and VprType row at any
point in time. There can be multiple pWDid values associated with
a VPId.
Note: This column is valid on SLES 11 or later systems only.
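As noted for the NCPUs column, CPU-time columns are often normalized by the number of CPUs on the node, which matters on coexistence systems where node sizes differ. This Python sketch is illustrative only; the helper name is hypothetical, and the choice of which CPU column to normalize is left to the caller:

```python
def normalized_cpu_pct(cpu_centisecs, n_cpus, centi_secs):
    """Normalize a CPU-time column (CPU centiseconds consumed over
    the logging period) to a percent-of-node utilization, using the
    NCPUs and CentiSecs housekeeping columns."""
    if n_cpus == 0 or centi_secs == 0:
        return 0.0  # avoid division by zero on malformed rows
    return 100.0 * cpu_centisecs / (n_cpus * centi_secs)

# 24,000 CPU centiseconds over a 600-second (60,000-centisecond)
# period on a 40-CPU node is 1% of the node's CPU capacity.
print(normalized_cpu_pct(24000.0, 40, 60000))
```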

Statistics Columns
TASM Columns
Monitor WD Columns
The Monitor WD columns provide RSS data to the following System PMPC APIs:

MONITOR WD request

MonitorWD function

For more information on these APIs, see Application Programming Reference.


The Control AMP WorkTimeInuse and WorkTimeInuseMax columns described in the
following table exclude uncompleted work time for the Control GDO AMP during recovery
work. This will occur for rows with the RowIndex1 column equal to zero, but will typically not
be noticed. Work time for recovery work that completes during the reporting period is
reported for the Control AMP.
For more information on the RowIndex1 column, see Miscellaneous Housekeeping
Columns on page 102.
Column Name

Mode

Description

Data Type

ActiveSessions

track

On SLES 10 or earlier systems, this is the number of Scheduling


Sets.
On SLES 11 or later systems, this is the number of request-level
workload management objects created. There is one object
created per user request. For non-user work request, there is one
object created per WD.

FLOAT

AGId

track

The current Allocation Group (AG) for the PG ID that is being


reported. This value can be any number from 0 to 200.

FLOAT

Note: This column is valid on SLES 10 or earlier systems only.


RelWgt

track

The Active Relative Weight. That is, the dynamically assigned


relative weight that considers, in its calculation, the activity of all
other Allocation Groups present on the system. The RelWgt
column constantly changes, unlike the relative weight assignment
the Database Administrator assigns in the Teradata Viewpoint
Workload Designer portlet.

FLOAT

RelWgt is the average relative weight of active online nodes (that is,
divide the sum of the non-zero RelWgt by the count of online
nodes with the non-zero RelWgt).
Note: This column is valid on SLES 10 or earlier systems only.
CPURunDelay

count

Number of milliseconds tasks in the WD sat in the CPU runqueue


waiting to run over the reporting period.

FLOAT

This data can be used in determining demand for the virtual


partition and WD Share workload management method. If the
CPU and I/O percentages for a virtual partition or WD are below
their relative share values and the CPURunDelay values are low,
there was insufficient demand to meet the share percentage. If the
CPURunDelay values are high, higher tier SQL requests were
allocated more resources so that there were insufficient resources
remaining to allocate to SQL requests in this WD to meet its relative
share.
For descriptions of the virtual partition and WD share workload
management method, see Glossary.
Note: This column is valid on SLES 11 or later systems only.
DecayLevel1IO

count

Number of times SQL requests in the WD hit decay level 1 due to


I/O.

FLOAT

The values are reported under VprType MISC because the summed
usage from PE and AMP go toward the usage that trips the decay.
The values reported from either PE or AMP usage alone would not
be accurate.
Note: DecayLevel1IO is used for Timeshare WDs only. For a
description of this workload management method, see Glossary.
Note: This column is valid on SLES 11 or later systems only.
DecayLevel2IO

count

Number of times SQL requests in the WD hit decay level 2 due to I/O.

FLOAT

The values are reported under VprType MISC because the summed
usage from PE and AMP go toward the usage that trips the decay.
The values reported from either PE or AMP usage alone would not
be accurate.
Note: DecayLevel2IO is used for Timeshare WDs only. For a
description of this workload management method, see Glossary.
Note: This column is valid on SLES 11 or later systems only.

Resource Usage Macros and Tables

105

Chapter 11: ResUsageSps


Statistics Columns


DecayLevel1CPU (count, FLOAT)
Number of times SQL requests in the WD hit decay level 1 due to CPU.
The values are reported under VprType MISC because the summed usage from PE and AMP go toward the usage that trips the decay. The values reported from either PE or AMP usage alone would not be accurate.
Note: DecayLevel1CPU is used for Timeshare WDs only. For a description of this workload management method, see Glossary.
Note: This column is valid on SLES 11 or later systems only.

DecayLevel2CPU (count, FLOAT)
Number of times SQL requests in the WD hit decay level 2 due to CPU.
The values are reported under VprType MISC because the summed usage from PE and AMP go toward the usage that trips the decay. The values reported from either PE or AMP usage alone would not be accurate.
Note: DecayLevel2CPU is used for Timeshare WDs. For a description of this workload management method, see Glossary.
Note: This column is valid on SLES 11 or later systems only.

IOBlks (count, FLOAT)
Number of logical data blocks read or written.
Note: This column is valid on SLES 10 or earlier systems only.


IOCompleted (count, FLOAT)
Number of I/Os completed on behalf of this WD.
Note: This column is valid on SLES 11 or later systems only.

IOCompletedKB (count, FLOAT)
KB of I/O completed on behalf of this WD.
Note: This column is valid on SLES 11 or later systems only.

IOCriticalSubmitted (count, FLOAT)
Number of I/Os submitted with an MI Lock Priority. These I/Os go into a Teradata Scheduler queue and are not prioritized based on the type of management method.
Note: This column is valid on SLES 11 or later systems only.

IOCriticalSubmittedKB (count, FLOAT)
KB of I/O submitted with an MI Lock Priority. These I/Os go into a Teradata Scheduler queue and are not prioritized based on the type of management method.
Note: This column is valid on SLES 11 or later systems only.

IOSubmitted (count, FLOAT)
Number of I/Os submitted on behalf of this WD.
Note: This column is valid on SLES 11 or later systems only.

IOSubmittedKB (count, FLOAT)
KB of I/O submitted on behalf of this WD.
Note: This column is valid on SLES 11 or later systems only.


NumRequests (count, FLOAT)
Number of AMP Worker Task messages or requests that were assigned AMP Worker Tasks.


NumTasks (track, FLOAT)
Average number of tasks of online nodes. The column is the result of:

NumTasks = SUM of (NumTasks-i) / N

where:
NumTasks-i is the number of tasks:
  On SLES 10 or earlier systems, assigned to the PG at the end of the reporting period.
  On SLES 11 or later systems, assigned to the WD at the end of the reporting period.
i varies from 1 to N.
N is the number of online nodes.
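The averaging above is a plain arithmetic mean over the online nodes. The sketch below (Python, with made-up sample values) is not part of the database; it only restates the formula:

```python
def num_tasks_average(tasks_per_node):
    """NumTasks = SUM of (NumTasks-i) / N, where tasks_per_node holds
    NumTasks-i for each of the N online nodes."""
    if not tasks_per_node:  # no online nodes reported
        return 0.0
    return sum(tasks_per_node) / len(tasks_per_node)

# Three online nodes with 4, 6, and 5 tasks assigned to the WD:
print(num_tasks_average([4, 6, 5]))  # 5.0
```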
QWaitTime (count, FLOAT)
Total time in milliseconds that work requests waited on an input queue before being serviced. If the work requests are not delivered, they are not counted.
To calculate the average QWaitTime per request, divide by NumRequests. This average is reported in DBC.ResSpsView as QWaitTimeRequestAvg.
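The per-request average can be computed directly from the logged QWaitTime and NumRequests values. This Python sketch mirrors what DBC.ResSpsView reports as QWaitTimeRequestAvg; the guard against zero requests is an assumption, not documented behavior:

```python
def qwait_time_request_avg(qwait_time_ms, num_requests):
    """Average input-queue wait per serviced request: QWaitTime / NumRequests."""
    if num_requests == 0:  # assumed: report 0.0 when nothing was serviced
        return 0.0
    return qwait_time_ms / num_requests

print(qwait_time_request_avg(1200.0, 48))  # 25.0 ms per request
```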
QWaitTimeMax (max, FLOAT)
Maximum time in milliseconds that work requests waited on an input queue before being serviced.

ServiceTime (count, FLOAT)
Time in milliseconds that work requests required for service.
To calculate an approximate average ServiceTime for each request during this period, divide the ServiceTime value by the NumRequests value. This average is reported in DBC.ResSpsView as ServiceTimeRequestAvg.
The service time is the elapsed time from the time the message was received to the time the AMP Worker Task was released. This is the amount of time the AMP Worker Task was held, through sleeps, CPU, I/O, and so on, until it is released.

ServiceTimeMax (max, FLOAT)
Maximum time in milliseconds that work requests required for service.

TacticalExceptionIO (count, FLOAT)
Number of times SQL requests in the WD hit a tactical per-node exception due to I/O.
The values are reported under VprType MISC because the summed usage from PE and AMP go toward the usage that trips the exception. The value reported from either PE or AMP usage alone would not be accurate.
An exception, used only for Tactical WDs, is created for each Tactical WD. For a description of this workload management method, see Glossary.
Note: This column is valid on SLES 11 or later systems only.


TacticalExceptionCPU (count, FLOAT)
Number of times SQL requests in the WD hit a tactical per-node exception due to CPU.
The values are reported under VprType MISC because the summed usage from PE and AMP go toward the usage that trips the exception. The value reported from either PE or AMP usage alone would not be accurate.
Note: TacticalExceptionCPU is used for Tactical WDs only. For a description of this workload management method, see Glossary.
Note: This column is valid on SLES 11 or later systems only.

WaitIO (count, FLOAT)
Number of milliseconds tasks in the WD waited for I/O over the reporting period.
WaitIO is updated when the wait for I/O is completed.
Note: This column is valid on SLES 11 or later systems only.
WorkMsgReceiveDelay (count, FLOAT)
Time for all messages not yet delivered at the end of each reporting period. This column is related to the QWaitTime column and represents a running total of delays attributed to the tasks that still have not been assigned an AMP Worker Task within this interval. When the task does receive an AMP Worker Task in a later interval, the time attributed here is counted again within QWaitTime of the interval where it was assigned an AMP Worker Task.

WorkMsgReceiveDelayCnt (count, FLOAT)
Number of messages that are still waiting for AMP Worker Tasks at the end of each reporting period.

WorkMsgReceiveDelayMax (max, FLOAT)
Maximum delay time in milliseconds for messages that are still in the work box.

WorkMsgReceiveDelayCntMax (max, FLOAT)
Maximum number of messages on the work mailbox waiting to be picked up by the AMP Worker Tasks at the end of each reporting period.

WorkMsgSendDelay (count, FLOAT)
Total time in milliseconds it takes to deliver work messages.

WorkMsgSendDelayCnt (count, FLOAT)
Number of messages that are delivered to the work box.

WorkMsgSendDelayMax (max, FLOAT)
Maximum time in milliseconds that it takes to deliver any single work message.

WaitOther (count, FLOAT)
Number of milliseconds tasks in the WD waited for reasons other than I/O over the reporting period (for example, a task waiting for a message).
WaitOther is updated when the wait is completed.

WorkTimeInuse (count, FLOAT)
Service time consumed by a WD during the current reporting period.
Note: This is not the running sum of a WD that exists over multiple intervals.

WorkTimeInuseMax (max, FLOAT)
Maximum service time of a single task in a WD that is running or has finished in the current reporting period. This includes time used during previous intervals for that task.


AMP Worker Task Columns

These columns report statistics about the AMP Worker Tasks. For more information about the ResUsageSawt table and columns, see Chapter 7: ResUsageSawt Table.
The data reports the contribution of the respective WD to each column, so the values are not the same as the values reported in the ResUsageSawt table. For columns like WorkTypeInuse, the ResUsageSps table values should add up to the corresponding ResUsageSawt table values. The Max columns cannot be correlated to the ResUsageSawt table Max values in such a direct way, since the ResUsageSps Max columns report the maximum of the ResUsageSps InUse column for the WD, not the maximum of the ResUsageSawt table for all the WDs combined.

AwtReleases (count, FLOAT)
Number of AMP Worker Tasks released (that is, completed requests), while the NumRequests column reports the number of AMP Worker Task requests that arrived. For details, see NumRequests.

WorkTypeInuse00 through WorkTypeInuse15 (track, FLOAT)
Current number of AMP Worker Tasks in use for each work type.

WorkTypeMax00 through WorkTypeMax15 (max, FLOAT)
Maximum of the WorkTypeInuse values reported during the logging period. When multiple data samples occur during the reporting period, the value is the maximum of the sampled values.
The true maximum number of in-use AMP Worker Tasks of a WorkType may occur at a different time during the reporting period and not be seen and, therefore, not be reported.
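To cross-check ResUsageSps against ResUsageSawt as described above, the WorkTypeInuse values can be summed across the WD rows of one node and interval. A minimal sketch; the row dictionaries and their values are illustrative, not real logged data:

```python
def sum_worktype_inuse(sps_rows, work_type):
    """Sum one WorkTypeInuseNN column over all WD rows; the total should
    line up with the corresponding ResUsageSawt value for that work type.
    (The Max columns do not correlate this way.)"""
    col = "WorkTypeInuse%02d" % work_type
    return sum(row.get(col, 0.0) for row in sps_rows)

rows = [  # illustrative WD rows for one node and interval
    {"WDName": "Tactical", "WorkTypeInuse00": 3.0},
    {"WDName": "Standard", "WorkTypeInuse00": 5.0},
]
print(sum_worktype_inuse(rows, 0))  # 8.0
```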

Process Scheduling Columns


CPU Utilization Columns
These columns represent CPU activities on the node associated with the AMP Worker Task,
Dispatcher, Parser, or miscellaneous activities.
Note: When an external routine (such as C, C++, Java UDF, or external stored procedure)
forks a child process or thread, the CPU time is not reported to these fields. As a result, the
resource usage table shows a lower CPU usage than shown in the ResUsageSpma table even if
the external routine consumes a large amount of CPU time. If there are child processes or
threads running on your system, this may account for the larger CPU times reported in the
ResUsageSpma table compared to this table. To confirm that the difference in CPU usage
reported by the ResUsageSpma table is caused by child processes or threads, contact your
Teradata Support Center personnel.

CPUUServAWT (count, FLOAT)
Time in milliseconds CPUs are busy in the AMP Worker Task executing user service code. This is the system level time spent on a process.


CPUUServDisp (count, FLOAT)
Time in milliseconds CPUs are busy in the Dispatcher or Parser executing user service code. This is the system level time spent on a process.

CPUUServMisc (count, FLOAT)
Time in milliseconds CPUs are busy executing miscellaneous activities for user service code. This is the system level time spent on a process.

CPUUExecAWT (count, FLOAT)
Time in milliseconds CPUs are busy in the AMP Worker Task executing user execution code. This is the user level time spent on a process.

CPUUExecDisp (count, FLOAT)
Time in milliseconds CPUs are busy in the Dispatcher or Parser executing user execution code. This is the user level time spent on a process.

CPUUExecMisc (count, FLOAT)
Time in milliseconds CPUs are busy executing miscellaneous activities for user execution code. This is the user level time spent on a process.

Process Block Counts Columns


These columns identify how many times a process became blocked.
For more information about process block counts columns, see Chapter 6: ResUsageSpma
Table.

ProcBlksCPULimit (count, FLOAT)
Number of times processes were blocked by delays due to the Priority Scheduler CPU Limit (for example, System Limit, AG Limit, or RP).
For information about the Priority Scheduler CPU Limit, see Utilities.

ProcBlksDBLock (count, FLOAT)
Number of times processes were blocked for a database lock.

ProcBlksFsgLock (count, FLOAT)
Number of times processes were blocked for an FSG lock.

ProcBlksSegLock (count, FLOAT)
Number of times processes were blocked for a disk or task context (for example, scratch, stack, and so on) lock.

ProcBlksTime (count, FLOAT)
Number of times processes were blocked for a timer expiration.

ProcBlksFlowControl (count, FLOAT)
Number of times processes were blocked by delays caused by the flow control conditions.

ProcBlksFsgNIOs (count, FLOAT)
Number of times processes were blocked waiting for task I/Os to complete.

ProcBlksFsgRead (count, FLOAT)
Number of times processes were blocked for an FSG read.

ProcBlksFsgWrite (count, FLOAT)
Number of times processes were blocked for an FSG write from disk.

ProcBlksMisc (count, FLOAT)
Number of times processes were blocked for miscellaneous events.


ProcBlksMonitor (count, FLOAT)
Number of times processes were blocked for a user monitor.

ProcBlksMonResume (count, FLOAT)
Number of times processes were blocked for a user monitor resume from a yield.

ProcBlksNetThrottle (count, FLOAT)
Number of times processes were blocked for delivery of outstanding outgoing messages.

ProcBlksSegMDL (count, FLOAT)
Number of times processes were blocked waiting for an MDL resource to become available. An MDL is an internal PDE data structure needed by the operation of the segment subsystem.

ProcBlksSegNoVirtual (count, FLOAT)
Number of times processes were blocked waiting for virtual memory for a segment.

ProcBlksQnl (count, FLOAT)
Number of times processes were blocked for a TSKQNL lock.

Process Pending Wait Time Columns

These columns identify how long processes were blocked for each possible reason listed below.
The following column definition descriptions can also be found in Process Pending Wait Time Columns of Chapter 6: ResUsageSpma Table.

ProcWaitCPULimit (count, FLOAT)
Total time in milliseconds processes were blocked pending delays due to the Priority Scheduler CPU Limits (for example, System Limit, AG Limit, or RP).
For information about the Priority Scheduler CPU Limit, see Utilities.

ProcWaitFlowControl (count, FLOAT)
Total time in milliseconds processes were blocked pending the delays caused by flow control conditions.

ProcWaitFsgLock (count, FLOAT)
Total time in milliseconds processes were blocked pending an FSG lock.

ProcWaitDBLock (count, FLOAT)
Total time in milliseconds processes were blocked pending a database lock.

ProcWaitSegLock (count, FLOAT)
Total time in milliseconds processes were blocked pending a disk or task context (for example, scratch, stack, and so on) lock.

ProcWaitFsgRead (count, FLOAT)
Total time in milliseconds processes were blocked pending an FSG read from disk.

ProcWaitFsgWrite (count, FLOAT)
Total time in milliseconds processes were blocked pending an FSG write from disk.

ProcWaitFsgNIOs (count, FLOAT)
Total time in milliseconds processes were blocked waiting for task I/Os to complete.


ProcWaitMisc (count, FLOAT)
Total time in milliseconds processes were blocked pending miscellaneous events.

ProcWaitMonitor (count, FLOAT)
Total time in milliseconds processes were blocked pending a user monitor.

ProcWaitMonResume (count, FLOAT)
Total time in milliseconds processes were blocked pending a user monitor resume from a yield.

ProcWaitNetThrottle (count, FLOAT)
Total time in milliseconds processes were blocked pending delivery of outstanding outgoing messages.

ProcWaitSegMDL (count, FLOAT)
Total time in milliseconds processes were blocked waiting for an MDL resource to become available. An MDL is an internal PDE data structure needed by the operation of the segment subsystem.

ProcWaitSegNoVirtual (count, FLOAT)
Total time in milliseconds processes were blocked waiting for virtual memory for a segment.

ProcWaitTime (count, FLOAT)
Total time in milliseconds processes were blocked pending some amount of elapsed time only.

ProcWaitQnl (count, FLOAT)
Total time in milliseconds processes were blocked pending a TSKQNL lock.

File System Columns

Segments Acquired Columns
These columns identify the total disk memory segments acquired by the File System during the log period. Logical acquires (Acqs) and the logical amount acquired (AcqKB) are identified. Acquires causing physical reads (AcqReads) and the amount read (AcqReadKB) are identified as a subset of logical acquires.

FileAPtAcqs (count, FLOAT)
Total number of append table or permanent journal table data block or cylinder index disk segments acquired.

FilePCiAcqs (count, FLOAT)
Total number of permanent cylinder index disk segments acquired.

FilePDbAcqs (count, FLOAT)
Total number of permanent data block disk segments acquired.

FileSCiAcqs (count, FLOAT)
Total number of regular or restartable spool index disk segments acquired.

FileSDbAcqs (count, FLOAT)
Total number of regular or restartable spool data block disk segments acquired.

FileTJtAcqs (count, FLOAT)
Total number of transient journal table disk segments acquired.

FileAPtAcqKB (count, FLOAT)
Total KB acquired by FileAPtAcqs.

FilePCiAcqKB (count, FLOAT)
Total KB acquired by FilePCiAcqs.


FilePDbAcqKB (count, FLOAT)
Total KB acquired by FilePDbAcqs.

FileSCiAcqKB (count, FLOAT)
Total KB acquired by FileSCiAcqs.

FileSDbAcqKB (count, FLOAT)
Total KB acquired by FileSDbAcqs.

FileTJtAcqKB (count, FLOAT)
Total KB acquired by FileTJtAcqs.

FileAPtAcqReads (count, FLOAT)
Number of append table or permanent journal table data block or cylinder index disk segment acquires that caused a physical read.

FilePCiAcqReads (count, FLOAT)
Number of permanent cylinder index disk segment acquires that caused a physical read.

FilePDbAcqReads (count, FLOAT)
Number of permanent data block disk segment acquires that caused a physical read.

FileSCiAcqReads (count, FLOAT)
Number of regular or restartable spool index disk segment acquires that caused a physical read.

FileSDbAcqReads (count, FLOAT)
Number of regular or restartable spool data block disk segment acquires that caused a physical read.

FileTJtAcqReads (count, FLOAT)
Number of transient journal table disk segment acquires that caused a physical read.

FileAPtAcqReadKB (count, FLOAT)
KB physically read by FileAPtAcqReads.

FilePCiAcqReadKB (count, FLOAT)
KB physically read by FilePCiAcqReads.

FilePDbAcqReadKB (count, FLOAT)
KB physically read by FilePDbAcqReads.

FileSCiAcqReadKB (count, FLOAT)
KB physically read by FileSCiAcqReads.

FileSDbAcqReadKB (count, FLOAT)
KB physically read by FileSDbAcqReads.

FileTJtAcqReadKB (count, FLOAT)
KB physically read by FileTJtAcqReads.

Data Block Prefetch Columns

These columns identify File Segment Prefetch activities.
Note: A prefetch is either a cylinder read operation or an individual block read operation. Either of these operations is generically called a prefetch.

FileAPtPres (count, FLOAT)
Total number of append table or permanent journal table data block or cylinder index disk segments prefetched.

FilePCiPres (count, FLOAT)
Total number of permanent cylinder index disk segments prefetched.

FilePDbPres (count, FLOAT)
Total number of permanent data block disk segments prefetched.


FileSCiPres (count, FLOAT)
Total number of regular or restartable spool index disk segments prefetched.

FileSDbPres (count, FLOAT)
Total number of regular or restartable spool data block disk segments prefetched.

FileTJtPres (count, FLOAT)
Total number of transient journal table disk segments prefetched.

FileAPtPresKB (count, FLOAT)
Total number of KB prefetched by FileAPtPres.

FilePCiPresKB (count, FLOAT)
Total number of KB prefetched by FilePCiPres.

FilePDbPresKB (count, FLOAT)
Total number of KB prefetched by FilePDbPres.

FileSCiPresKB (count, FLOAT)
Total number of KB prefetched by FileSCiPres.

FileSDbPresKB (count, FLOAT)
Total number of KB prefetched by FileSDbPres.

FileTJtPresKB (count, FLOAT)
Total number of KB prefetched by FileTJtPres.

FileAPtPreReads (count, FLOAT)
Total number of append table or permanent journal table data block or cylinder index disk segment prefetches that caused a physical read.

FilePCiPreReads (count, FLOAT)
Total number of permanent cylinder index disk segment prefetches that caused a physical read.

FilePDbPreReads (count, FLOAT)
Total number of permanent data block disk segment prefetches that caused a physical read.

FileSCiPreReads (count, FLOAT)
Total number of regular or restartable spool index disk segment prefetches that caused a physical read.

FileSDbPreReads (count, FLOAT)
Total number of regular or restartable spool data block disk segment prefetches that caused a physical read.

FileTJtPreReads (count, FLOAT)
Total number of transient journal table disk segment prefetches that caused a physical read.

FileAPtPreReadKB (count, FLOAT)
Total number of KB physically read by FileAPtPreReads.

FilePCiPreReadKB (count, FLOAT)
Total number of KB physically read by FilePCiPreReads.

FilePDbPreReadKB (count, FLOAT)
Total number of KB physically read by FilePDbPreReads.

FileSCiPreReadKB (count, FLOAT)
Total number of KB physically read by FileSCiPreReads.

FileSDbPreReadKB (count, FLOAT)
Total number of KB physically read by FileSDbPreReads.

FileTJtPreReadKB (count, FLOAT)
Total number of KB physically read by FileTJtPreReads.

Segments Released Columns

These columns identify the total disk memory segments released by the File System, as well as those segments that are dropped from memory during the log period. When a segment is released, the segment is either:
• Forced out of memory (F)
• Remains resident in memory (R)
• Aged out of memory (A), from segments that remain resident

Both the number of segments (Rels, Writes, Drps) and the size of the segments (RelKB, WriteKB, DrpKB) are counted. When a segment leaves memory, it must be written to disk only if the segment is dirty, that is, modified (Dy). Otherwise, the clean or unmodified (Cn) segment is simply dropped.
Most spool blocks are simply dropped from a task and put on the age queue. This may happen multiple times. Each of these will be counted as a resident release. If the system is low on memory and the age queue must be processed, this may also result in an age write or age drop. Forced writes are always also counted as either clean resident releases or forced drops, depending on whether age normal or age out now was specified.

FileAPtDyRRels (count, FLOAT)
Number of dirty append table and permanent journal data block or cylinder index disk segment resident releases.

FilePCiDyRRels (count, FLOAT)
Number of dirty permanent cylinder index disk segment resident releases.

FilePDbDyRRels (count, FLOAT)
Number of dirty permanent data block disk segment resident releases.

FileSCiDyRRels (count, FLOAT)
Number of dirty regular or restartable spool cylinder index disk segment resident releases.

FileSDbDyRRels (count, FLOAT)
Number of dirty regular or restartable spool data block disk segment resident releases.

FileTJtDyRRels (count, FLOAT)
Number of dirty transient journal table or WAL data block or WAL cylinder index disk segment resident releases.

FileAPtDyRRelKB (count, FLOAT)
KB released by FileAPtDyRRels.

FilePCiDyRRelKB (count, FLOAT)
KB released by FilePCiDyRRels.

FilePDbDyRRelKB (count, FLOAT)
KB released by FilePDbDyRRels.

FileSCiDyRRelKB (count, FLOAT)
KB released by FileSCiDyRRels.

FileSDbDyRRelKB (count, FLOAT)
KB released by FileSDbDyRRels.

FileTJtDyRRelKB (count, FLOAT)
KB released by FileTJtDyRRels.

FileAPtFWrites (count, FLOAT)
Number of append table and permanent journal data block or cylinder index disk segment forced releases or specific I/O requests causing an immediate physical write.

FilePCiFWrites (count, FLOAT)
Number of permanent cylinder index disk segment forced releases or specific I/O requests causing an immediate physical write.

FilePDbFWrites (count, FLOAT)
Number of permanent data block disk segment forced releases or specific I/O requests causing an immediate physical write.


FileSCiFWrites (count, FLOAT)
Number of regular or restartable spool cylinder index disk segment forced releases or specific I/O requests causing an immediate physical write.

FileSDbFWrites (count, FLOAT)
Number of regular or restartable spool data block disk segment forced releases or specific I/O requests causing an immediate physical write.

FileTJtFWrites (count, FLOAT)
Number of transient journal table or WAL data block or WAL cylinder index disk segment forced releases or specific I/O requests causing an immediate physical write.

FileAPtFWriteKB (count, FLOAT)
KB written by FileAPtFWrites.

FilePCiFWriteKB (count, FLOAT)
KB written by FilePCiFWrites.

FilePDbFWriteKB (count, FLOAT)
KB written by FilePDbFWrites.

FileSCiFWriteKB (count, FLOAT)
KB written by FileSCiFWrites.

FileSDbFWriteKB (count, FLOAT)
KB written by FileSDbFWrites.

FileTJtFWriteKB (count, FLOAT)
KB written by FileTJtFWrites.

Memory Allocation Columns

These columns represent the number and amount of memory allocations, subdivided into the only applicable memory type for this table, generic node memory.

MemAllocs (count, FLOAT)
Number of successful SEG memory allocations.

MemAllocKB (count, FLOAT)
Total KB attributed to SEG memory allocations.

Net Columns

Broadcast Net Traffic Columns
These columns identify the number (Reads, Writes) and amount (ReadKB, WriteKB) of input and output messages passing through the Teradata Database nets through broadcast (1:many) methods (Brd) per net.

NetBrdReads (count, FLOAT)
Number of broadcast messages input to the vproc.

NetBrdWrites (count, FLOAT)
Number of broadcast messages output from the vproc.


Point-to-Point Net Traffic Columns

These columns identify the number (Reads, Writes) and amount (ReadKB, WriteKB) of input and output messages passing through either Teradata Database net through point-to-point (1:1) methods (PtP).

NetPtPReads (count, FLOAT)
Number of point-to-point messages input to the vproc on behalf of the WD.

NetPtPWrites (count, FLOAT)
Number of point-to-point messages output from the vproc on behalf of the WD.

NetPtPReadKB (count, FLOAT)
Total KB of point-to-point messages input to the vproc on behalf of the WD.

NetPtPWriteKB (count, FLOAT)
Total KB of point-to-point messages output from the vproc on behalf of the WD.

Reserved Column

Reserved (n/a, CHAR (3))
Note: This column is not used.

Spare Columns

The ResUsageSps table spare fields are named Spare00 through Spare19, and SpareInt. Several of those fields are in use, as described below.
The SpareInt field has a 32-bit internal resolution, while all other spare fields have a 64-bit internal resolution. All spare fields default to count data types but can be converted to min, max, or track type data fields if needed when they are used.
The following entries describe the Spare fields currently being used.

Spare00
WM CPU COD value in tenths of a percent. For example, a value of 500 represents a WM CPU COD value of 50.0%.
The value is set to 1000 if the WM CPU COD is disabled.
Note: This field will be converted to the WM_CPU_COD column in Teradata Database 15.0. You can access the resource usage data for this field using the WM_CPU_COD column name in the ResSpsView view. For details, see ResSpsView on page 175.
Note: WM CPU COD is not supported on SLES 10. Its value is set to 1000 on SLES 10.


Spare01
WM I/O COD value in whole percent. For example, a value of 50 represents a WM I/O COD value of 50.0%.
The value is set to 100 if the WM I/O COD is disabled.
Note: This field will be converted to the WM_IO_COD column in Teradata Database 15.0. You can access the resource usage data for this field using the WM_IO_COD column name in the ResSpsView view. For details, see ResSpsView on page 175.
Note: WM I/O COD is not supported on SLES 10. Its value is set to 100 on SLES 10.
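Since Spare00 and Spare01 use different scales (tenths of a percent versus whole percent), a small conversion helper avoids mixing them up. This is a sketch only; treating the disabled sentinels (1000 and 100) as "no COD limit" is an interpretation, not documented output:

```python
def wm_cpu_cod_percent(spare00):
    """Spare00 is in tenths of a percent: 500 -> 50.0. 1000 means disabled."""
    return None if spare00 == 1000 else spare00 / 10.0

def wm_io_cod_percent(spare01):
    """Spare01 is in whole percent: 50 -> 50.0. 100 means disabled."""
    return None if spare01 == 100 else float(spare01)

print(wm_cpu_cod_percent(500))   # 50.0
print(wm_io_cod_percent(50))     # 50.0
print(wm_cpu_cod_percent(1000))  # None (WM CPU COD disabled)
```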

Spare03
Number of times the parent Virtual Partition query is throttled due to a set VP-level CPU hard limit. The hard limit at a Virtual Partition level is a node-level maximum percent.
Note: This field will be converted to the CpuVpThrottleCount column in Teradata Database 15.0. You can access the resource usage data for this field using the CpuVpThrottleCount column name in the ResSpsView view. For details, see ResSpsView on page 175.

Spare04
Time, in milliseconds, that the parent Virtual Partition query is throttled due to a set Virtual Partition-level CPU hard limit.
Note: This field will be converted to the CpuVpThrottleTime column in Teradata Database 15.0. You can access the resource usage data for this field using the CpuVpThrottleTime column name in the ResSpsView view. For details, see ResSpsView on page 175.

Spare05
Number of times the workload query is throttled due to a set workload-level CPU hard limit. The hard limit at a workload level is a node-level maximum percent.
Note: This field will be converted to the CpuThrottleCount column in Teradata Database 15.0. You can access the resource usage data for this field using the CpuThrottleCount column name in the ResSpsView view. For details, see ResSpsView on page 175.

Spare06
Time, in milliseconds, that the workload query is throttled due to a set workload-level CPU hard limit. The hard limit at a workload level is a node-level maximum percent.
Note: This field will be converted to the CpuThrottleTime column in Teradata Database 15.0. You can access the resource usage data for this field using the CpuThrottleTime column name in the ResSpsView view. For details, see ResSpsView on page 175.


Spare07
Sum of the full potential input or output token allocations from all devices attached to the node. This field is reported only to the AMP vproc rows.
The input or output token allocations are not limited by the IO_COD setting.
The data source for this field is the all_wd_full_potential_iota field in the /proc/tdmeter/limit_stats file.
Note: This field will be converted to the FullPotentialIota column in Teradata Database 15.0. You can access the resource usage data for this field using the FullPotentialIota column name in the ResSpsView view. For details, see ResSpsView on page 175.
Note: This field is available on SLES 11 only.

Spare08
Sum of the potential input or output token allocations from all devices attached to the node for this workload. This field is reported only to the AMP vproc rows.
The input or output token allocations are accumulated only when an I/O is pending on a device and are limited by the IO_COD setting.
The data source for this field is the wd_potential_iota field in the /proc/tdmeter/limit_stats file.
Note: This field will be converted to the CodPotentialIota column in Teradata Database 15.0. You can access the resource usage data for this field using the CodPotentialIota column name in the ResSpsView view. For details, see ResSpsView on page 175.
Note: This field is available on SLES 11 only.

Spare09
Sum of the used input or output token allocations from all devices attached to the node for this workload. This field is reported only to AMP vproc rows.
The data source for this field is the wd_used_iota field in the /proc/tdmeter/limit_stats file.
Note: This field will be converted to the UsedIota column in Teradata Database 15.0. You can access the resource usage data for this field using the UsedIota column name in the ResSpsView view. For details, see ResSpsView on page 175.
Note: This field is available on SLES 11 only.

Resource Usage Macros and Tables

119

Chapter 11: ResUsageSps


Spare Columns

Column Name

Description

Spare10

Number of times that an I/O was in a releasable position to disk. This


field is reported only to AMP vproc rows.
An I/O may not be in a releasable position because there are higher
priority I/Os.
The data source for this field is:
((tdsched_io_info *)TvsaIoBuf_p->io_info)->req_pct)
Note: This field will be converted to the IoThrottleCount column in
Teradata Database 15.0. You can access the resource usage data for
this field using the IoThrottleCount column name in the ResSpsView
view. For details, see ResSpsView on page 175.
Note: This field is available on SLES 11 only.

Spare11

Number of logical reads from VH cache. This field is populated by the


FSG subsystem.
Note: This field will be converted to the VHLogicalDBRead column
in Teradata Database 15.0. You can access the resource usage data for
this field using the VHLogicalDBRead column name in the
ResSpsView view. For details, see ResSpsView on page 175.
For information about VH cache, see Glossary.

Spare12

Volume of logical reads in KB from VH cache. This field is populated


by the FSG system.
Note: This field will be converted to the VHLogicalDBReadKB
column in Teradata Database 15.0. You can access the resource usage
data for this field using the VHLogicalDBReadKB column name in
the ResSpsView view. For details, see ResSpsView on page 175.
For information about VH cache, see Glossary.

Spare13

Number of very hot reads that were handled by physical disk I/O due
to a VH cache miss (that is, data not found in the VH cache). This
field is populated by the FSG system.
Note: This field will be converted to the VHPhysicalDBRead column
in Teradata Database 15.0. You can access the resource usage data for
this field using the VHPhysicalDBRead column name in the
ResSpsView view. For details, see ResSpsView on page 175.
For information about VH cache, see Glossary.

Spare14

Volume of very hot reads in KB that were handled by physical disk


I/O due to a VH cache miss. This field is populated by the FSG
system.
Note: This field will be converted to the VHPhysicalDBReadKB
column in Teradata Database 15.0. You can access the resource usage
data for this field using the VHPhysicalDBReadKB column name in
the ResSpsView view. For details, see ResSpsView on page 175.
For information about VH cache, see Glossary.

120

Resource Usage Macros and Tables

Chapter 11: ResUsageSps


Spare Columns

Column Name

Description

SpareInt

PM I/O COD value in whole percent values for the entire system. For
example, a SpareInt value of 50 represents a
PM I/O COD value of 50%.
This field value is 100 if the PM I/O COD is disabled.
Note: This field will be converted to the PM_IO_COD column in
Teradata Database 15.0. You can access resource usage data for this
field using the PM_IO_COD column name in the ResSpsView view.
For details, see ResSpsView on page 175.

Related Topics
For details on the different type of data fields, see About the Mode Column on page 42.


CHAPTER 12

ResUsageSvdsk Table

The ResUsageSvdsk table:
- Provides AMP-level storage statistics.
- Includes resource usage logs on cylinder allocation, migration, and I/O statistics.

If you enable table logging on ResUsageSvdsk, a row is written to the database once for every
AMP vproc in the system for each log interval. To consolidate and summarize the total
number of rows written to the database, you can enable Summary Mode. For details, see
Summary Mode on page 128.
Note: This table is created as a MULTISET table.
The following table describes the ResUsageSvdsk table columns.

Housekeeping Columns

Relational Primary Index Columns

These columns taken together form the nonunique primary index.

TheDate (n/a, DATE)
Date of the log entry.

TheTime (n/a, FLOAT)
Nominal time of the log entry.
Note: Under conditions of heavy system load, entries may be logged late (typically, by no more than one or two seconds), but this column will still contain the time value when the entry should have been logged. For more information, see the Secs and NominalSecs columns.

NodeID (n/a, INTEGER)
Node ID on which the vproc resides. The Node ID is formatted as CCC-MM, where CCC denotes the three-digit cabinet number and MM denotes the two-digit chassis number of the node. For example, a node in chassis 9 of cabinet 3 has a node ID of '003-09'.
Note: SMP nodes have a chassis and cabinet number of 1. For example, the node ID of an SMP node is '001-01'.

Miscellaneous Housekeeping Columns

These columns provide the general characteristics of the housekeeping columns.

GmtTime (n/a, FLOAT)
Greenwich Mean Time, which is not affected by the Daylight Saving Time adjustments that occur twice a year.

VprId (n/a, INTEGER)
Identifies the AMP vproc.
In Summary Mode, the value of the AMP vproc ID is -1.

NodeType (n/a, CHAR(8))
Type of node, representing the per node system family type. For example, 5600C or 5555H.

Secs (n/a, SMALLINT)
Actual number of seconds in the log period represented by this row. Normally the same as NominalSecs, but can be different in three cases:
- The first interval after a log rate change
- A sample logged late because of load on the system
- System clock adjustments that affect the reported Secs
Useful for normalizing the count statistics contained in this row, for example, to a per-second measurement.

CentiSecs (n/a, INTEGER)
Number of centiseconds in the logging period. This column is useful when performing data calculations with small elapsed times, where the difference between centisecond-based data and whole seconds results in a percentage error.

NominalSecs (n/a, SMALLINT)
Specified or nominal number of seconds in the logging period.

SummaryFlag (n/a, CHAR(1))
Summarization status of this row. If the value is 'N', the row is a non-summary row. If the value is 'S', the row is a summary row.

Active (max, FLOAT)
Controls whether the rows are logged to the resource usage tables if Active Row Filter Mode is enabled.
If Active is set to a nonzero value, the row contains data columns. If Active is set to zero, none of the data columns in the row have been updated during the logging period. For example, if you enable Active Row Filter Mode, rows that have a zero Active column value are not logged to the resource usage tables.

TheTimestamp (n/a, BIGINT)
Number of seconds since midnight, January 1, 1970. This column is useful for aligning data with the DBQL log.

CODFactor (n/a, SMALLINT)
PM CPU COD value in tenths of a percent. For example, a value of 500 represents a PM CPU COD value of 50.0%. The value is set to 1000 if the PM CPU COD is disabled.
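As the Secs column description suggests, count-mode statistics can be normalized to per-second rates by dividing by Secs. The following Python sketch illustrates the arithmetic only; the row values are invented sample data, not real ResUsageSvdsk output.

```python
# Normalize count-mode statistics to per-second rates using the Secs
# column of a logged row. Row values are invented sample data.

def per_second(row, columns):
    """Return {column: value / Secs} for the given count columns."""
    secs = row["Secs"]
    return {col: row[col] / secs for col in columns}

row = {"Secs": 60, "ReadCnt": 12000.0, "WriteCnt": 3000.0}
rates = per_second(row, ["ReadCnt", "WriteCnt"])
# 12000 reads over a 60-second interval -> 200.0 reads per second
```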


Statistics Columns
Teradata VS Columns
Allocation Columns
These columns identify the allocation statistics reported by the Allocator process.
ExtAllocHot (count, FLOAT)
Number of hot allocations made in the current log period. A hot allocation is an allocation whose estimated temperature falls within the predefined hot temperature range.
The cylinder resides in some disk storage location and holds some data. Temperature is the frequency of access to the data by I/O, independent of where the data resides.

ExtAllocNonPacing (count, FLOAT)
Number of non-pacing allocations made in the current log period. A non-pacing allocation is an allocation whose data access affects neither system performance nor individual query performance.

ExtAllocStatic (count, FLOAT)
Number of static allocations made in the current log period. A static allocation is an allocation whose requested temperature is used and whose measured temperature is ignored during migration.

ExtAllocSystemPacing (count, FLOAT)
Number of system pacing allocations made in the current log period. A system pacing allocation is an allocation whose data access affects system performance.

ExtAllocTotal (count, FLOAT)
Total number of allocations made in the current log period. A number of values can be derived from this column. For example:
Cold Allocation = ExtAllocTotal - ExtAllocHot - ExtAllocWarm
Query Pacing Allocation = ExtAllocTotal - ExtAllocNonPacing - ExtAllocSystemPacing
Dynamic Allocation = ExtAllocTotal - ExtAllocStatic

ExtAllocWarm (count, FLOAT)
Number of warm allocations made in the current log period. A warm allocation is an allocation whose estimated temperature falls within the predefined warm temperature range.
The cylinder resides in some disk storage location and holds some data. Temperature is the frequency of access to the data by I/O, independent of where the data resides.
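The derived allocation values listed under ExtAllocTotal can be computed directly from the logged columns. A minimal Python sketch of that arithmetic; the input values are invented sample data.

```python
# Derive the allocation breakdowns described under ExtAllocTotal:
#   Cold        = Total - Hot - Warm
#   QueryPacing = Total - NonPacing - SystemPacing
#   Dynamic     = Total - Static
# The row values below are invented sample data.

def derived_allocations(r):
    return {
        "cold": r["ExtAllocTotal"] - r["ExtAllocHot"] - r["ExtAllocWarm"],
        "query_pacing": r["ExtAllocTotal"] - r["ExtAllocNonPacing"]
                        - r["ExtAllocSystemPacing"],
        "dynamic": r["ExtAllocTotal"] - r["ExtAllocStatic"],
    }

row = {"ExtAllocTotal": 100.0, "ExtAllocHot": 20.0, "ExtAllocWarm": 30.0,
       "ExtAllocNonPacing": 10.0, "ExtAllocSystemPacing": 15.0,
       "ExtAllocStatic": 5.0}
d = derived_allocations(row)
# cold = 100 - 20 - 30 = 50.0
```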

I/O Statistics Columns

These columns identify the I/O statistics reported by FSG for each AMP.

ConcurrentMax (max, FLOAT)
Maximum number of concurrent I/O requests.

ConcurrentReadMax (max, FLOAT)
Maximum number of concurrent read I/O requests.

ConcurrentWriteMax (max, FLOAT)
Maximum number of concurrent write I/O requests.

ReadCnt (count, FLOAT)
Number of logical device reads.

WriteCnt (count, FLOAT)
Number of logical device writes.

ReadKB (count, FLOAT)
Number of KB read from the logical device.

WriteKB (count, FLOAT)
Number of KB written to the logical device.

ReadRespTot (count, FLOAT)
Total of the individual read response times, in centiseconds.

WriteRespTot (count, FLOAT)
Total of the individual write response times, in centiseconds.

ReadRespMax (max, FLOAT)
Maximum individual read response time, in centiseconds.

WriteRespMax (max, FLOAT)
Maximum individual write response time, in centiseconds.

ReadRespSq (count, FLOAT)
Total of the squares of the individual read response times, in centiseconds.

WriteRespSq (count, FLOAT)
Total of the squares of the individual write response times, in centiseconds.

OutReqTime (count, FLOAT)
Time with outstanding requests (busy time), in centiseconds.
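Because ReadRespTot is a sum and ReadRespSq is a sum of squares, the mean and standard deviation of the individual read response times can be derived from them together with ReadCnt (the usual E[X^2] - E[X]^2 form of the variance). A Python sketch of the arithmetic; the sample values are invented.

```python
import math

# Derive mean and standard deviation of individual read response times
# from the ReadCnt (count), ReadRespTot (sum), and ReadRespSq (sum of
# squares) columns. Sample values are invented.

def response_stats(cnt, resp_tot, resp_sq):
    mean = resp_tot / cnt
    variance = resp_sq / cnt - mean * mean  # E[X^2] - E[X]^2
    return mean, math.sqrt(max(variance, 0.0))

# 4 reads of 1, 2, 3, 4 centiseconds: Tot = 10, Sq = 1 + 4 + 9 + 16 = 30
mean, std = response_stats(4, 10.0, 30.0)
# mean = 2.5 cs, variance = 30/4 - 2.5^2 = 1.25
```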

Migration Columns
Note: The ResUsageSvdsk table migration columns are populated only in Teradata VS. For
more information, see Teradata Virtual Storage.
The migration columns identify the number of cylinders that migrated to a different location
on a device as well as the time, in centiseconds, of all migration I/Os used, incurred, or saved
during the log period.
Note: Each allocation is for a cylinder size worth of data, also known internally in the
allocator as an extent. Therefore, the column names begin with Ext for extent.
ExtMigrateFaster (count, FLOAT)
Number of cylinders migrated to faster locations (that is, migrations whose gross benefits are positive) for the associated AMP.
The following formula calculates a Slower Migration value, which is the number of cylinders migrated to slower locations:
SlowerMigration = ExtMigrateTotal - ExtMigrateFaster
Cylinders are migrated to slower locations to make room for hotter cylinders to replace them.

ExtMigrateIOTimeCost (count, FLOAT)
Estimates the total cost, in centiseconds, incurred by migration I/Os completing during the log period, where cost is the extra time waited by all non-migration I/Os as a result of the migration I/O.

ExtMigrateIOTimeBenefit (count, FLOAT)
Estimates the total I/O time savings achieved by migrations completing in the log period. The I/O time savings include the improvement in response time caused by the new data arrangement up to the time horizon.
This value does not include the cost of the migration I/Os and is a gross benefit, not a net benefit.

ExtMigrateIOTimeImprove (count, FLOAT)
Estimates the percent improvement in average I/O response time due to migrations completing in the log interval.
For example, suppose that right before a particular log interval the average I/O response time was 10 milliseconds (ms), and migration logs an ExtMigrateIOTimeImprove value of 10% in this interval. The average I/O response time after the log interval should be (100% - 10%) * 10 ms = 9 ms. If migration then logs an ExtMigrateIOTimeImprove of 1% in the next interval, the average I/O response time in the new log interval is (100% - 1%) * 9 ms = 8.91 ms.
ExtMigrateIOTimeImprove is only an estimate. The improvement remains in effect as long as the workload does not change and newer migrations do not significantly alter the data arrangement. When the workload changes or new migrations affect the data arrangement, response time changes in a nonquantifiable way.
You can use this field to predict the actual system performance for short periods of time and to understand why the migration algorithm is doing what it is doing.

ExtMigrateReadRespTot (count, FLOAT)
Migration read I/O response time.

ExtMigrateWriteRespTot (count, FLOAT)
Migration write I/O response time.

ExtMigrateTotal (count, FLOAT)
Total number of cylinders migrated to a different physical location. For more information, see the ExtMigrateFaster column.
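The interval-by-interval compounding in the ExtMigrateIOTimeImprove example (10 ms to 9 ms to 8.91 ms) can be sketched in a few lines of Python:

```python
# Chain ExtMigrateIOTimeImprove percentages across successive log
# intervals, reproducing the 10 ms -> 9 ms -> 8.91 ms example from the
# column description.

def apply_improvements(avg_resp_ms, improve_percents):
    for pct in improve_percents:
        avg_resp_ms *= (100.0 - pct) / 100.0
    return avg_resp_ms

result = apply_improvements(10.0, [10.0, 1.0])
# (100% - 10%) * 10 ms = 9 ms, then (100% - 1%) * 9 ms = 8.91 ms
```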

Reserved Column

Reserved (n/a, CHAR(1))
Note: This column is not used.


Summary Mode
When Summary Mode is active for the ResUsageSvdsk table, one row is written to the database
for each node in the system. This row summarizes all AMP vdisk data in each node per log
interval.
You can determine whether a row is in Summary Mode by checking the SummaryFlag column for that row. If the SummaryFlag column value is 'S', the data for that row is being logged in Summary Mode. If the value is 'N', the data is being logged normally.
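When Summary Mode is off, per-AMP rows can still be consolidated after the fact by summing count-mode columns over NodeID, which mirrors what a node-level summary row contains for count data (max-mode columns would take a maximum instead, and that distinction is glossed over here). A hedged Python sketch; the rows are invented sample data, not real ResUsageSvdsk output.

```python
# Consolidate per-AMP rows into one total per node by summing
# count-mode columns, approximating what a Summary Mode row logs for
# count data. Rows below are invented sample data.
from collections import defaultdict

def summarize_by_node(rows, count_cols):
    totals = defaultdict(lambda: dict.fromkeys(count_cols, 0.0))
    for r in rows:
        for col in count_cols:
            totals[r["NodeID"]][col] += r[col]
    return dict(totals)

rows = [
    {"NodeID": "001-01", "ReadCnt": 100.0, "WriteCnt": 40.0},
    {"NodeID": "001-01", "ReadCnt": 150.0, "WriteCnt": 60.0},
    {"NodeID": "001-02", "ReadCnt": 80.0, "WriteCnt": 20.0},
]
by_node = summarize_by_node(rows, ["ReadCnt", "WriteCnt"])
# node 001-01 totals: 250.0 reads, 100.0 writes
```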

Spare Columns
The ResUsageSvdsk table spare fields are named Spare00 through Spare19, and SpareInt.
The SpareInt field has a 32-bit internal resolution while all other spare fields have a 64-bit
internal resolution. All spare fields default to count data types but can be converted to min,
max, or track type data fields if needed when they are used.
The following table describes the Spare fields currently being used.

Spare00
WM CPU COD value in tenths of a percent. For example, a value of 500 represents a WM CPU COD value of 50.0%.
The value is set to 1000 if the WM CPU COD is disabled.
Note: This field will be converted to the WM_CPU_COD column in Teradata Database 15.0. You can access resource usage data for this field using the WM_CPU_COD column name in the ResSvdskView view. For details, see ResSvdskView on page 178.
Note: WM CPU COD is not supported on SLES 10. Its value is set to 1000 on SLES 10.

Spare01
WM I/O COD value in whole percent. For example, a value of 50 represents a WM I/O COD value of 50%.
The value is set to 100 if the WM I/O COD is disabled.
Note: This field will be converted to the WM_IO_COD column in Teradata Database 15.0. You can access resource usage data for this field using the WM_IO_COD column name in the ResSvdskView view. For details, see ResSvdskView on page 178.
Note: WM I/O COD is not supported on SLES 10. Its value is set to 100 on SLES 10.

SpareInt
PM I/O COD value in whole percent. For example, a value of 50 represents a PM I/O COD value of 50%.
The value is set to 100 if the PM I/O COD is disabled.
Note: This field will be converted to the PM_IO_COD column in Teradata Database 15.0. You can access resource usage data for this field using the PM_IO_COD column name in the ResSvdskView view. For details, see ResSvdskView on page 178.

Related Topics
For details on the different type of data fields, see About the Mode Column on page 42.


CHAPTER 13

ResUsageSvpr Table

The ResUsageSvpr logical table includes available system-wide resource usage data at the virtual processor level.
Note: This table is created as a MULTISET table. For more information, see Relational Primary Index on page 37.
Teradata recommends that you use the ResSvprView view (see ResSvprView on page 179) to access the data rather than accessing the ResUsageSvpr table directly.

Housekeeping Columns

Relational Primary Index Columns

These columns taken together form the nonunique primary index.

TheDate (n/a, DATE)
Date of the log entry.

TheTime (n/a, FLOAT)
Nominal time of the log entry.
Note: Under conditions of heavy system load, entries might be logged late (typically, by no more than one or two seconds), but this column will contain the time value when the entry should have been logged. For more information, see the Secs and NominalSecs columns.

NodeID (n/a, INTEGER)
Node ID on which the vproc resides. The Node ID is formatted as CCC-MM, where CCC denotes the three-digit cabinet number and MM denotes the two-digit chassis number of the node. For example, a node in chassis 9 of cabinet 3 has a node ID of '003-09'.
Note: SMP nodes have a chassis and cabinet number of 1. For example, the node ID of an SMP node is '001-01'.

Miscellaneous Housekeeping Columns

These columns provide the general characteristics of the housekeeping columns.

GmtTime (n/a, FLOAT)
Greenwich Mean Time, which is not affected by the Daylight Saving Time adjustments that occur twice a year.

NodeType (n/a, CHAR(8))
Type of node, representing the per node system family type. For example, 5600C or 5555H.

VprId (n/a, INTEGER)
Identifies the vproc number (non-Summary Mode) or the vproc type (Summary Mode; 0 = NODE, 1 = AMP, 2 = PE, 3 = GTW, 4 = RSG, 5 = TVS).

VprType (n/a, CHAR(4))
Type of vproc. The values can be NODE, AMP, PE, GTW, RSG, or TVS (see Teradata Virtual Storage).

Secs (n/a, SMALLINT)
Actual number of seconds in the log period represented by this row. Normally the same as NominalSecs, but can be different in three cases:
- The first interval after a log rate change
- A sample logged late because of load on the system
- System clock adjustments that affect the reported Secs
Useful for normalizing the count statistics contained in this row, for example, to a per-second measurement.

CentiSecs (n/a, INTEGER)
Number of centiseconds in the logging period. This column is useful when performing data calculations with small elapsed times, where the difference between centisecond-based data and whole seconds results in a percentage error.

NominalSecs (n/a, SMALLINT)
Specified or nominal number of seconds in the logging period.

NCPUs (n/a, SMALLINT)
Number of CPUs on this node. This column is useful for normalizing the CPU utilization column values for the number of CPUs on the node. This is especially important in coexistence systems, where the number of CPUs can vary across system nodes.

SummaryFlag (n/a, CHAR(1))
Summarization status of this row. Possible values are 'N' if the row is a non-summary row and 'S' if it is a summary row.

Active (max, FLOAT)
Controls whether the rows are logged to the resource usage tables if Active Row Filter Mode is enabled.
If Active is set to a nonzero value, the row contains data columns. If Active is set to zero, none of the data columns in the row have been updated during the logging period. For example, if you enable Active Row Filter Mode, rows that have a zero Active column value are not logged to the resource usage tables.

TheTimestamp (n/a, BIGINT)
Number of seconds since midnight, January 1, 1970. This column is useful for aligning data with the DBQL log.

CODFactor (n/a, SMALLINT)
PM CPU COD value in tenths of a percent. For example, a value of 500 represents a PM CPU COD value of 50.0%. The value is set to 1000 if the PM CPU COD is disabled.


Statistics Columns
File System Columns
Synchronized Full Table Scans Columns
These columns contain statistics relating to synchronized full table scans.
Note: The following columns were moved from the ResUsageIvpr table to the ResUsageSvpr table to avoid costly joins.

FileSyncGroups (track, FLOAT)
Number of groups of scanners involved in full table scans. A group consists of scanners that are able to use the same read I/O to obtain data from disk.

FileSyncScanners (track, FLOAT)
Number of tasks involved in full table scans that are willing to synchronize with other scanners.

FileSyncScans (count, FLOAT)
Number of attempts to synchronize a full table scan.

FileSyncSubtables (track, FLOAT)
Number of subtables scanned by one or more full table scanners that are willing to synchronize scans.

Segments Acquired Columns

These columns identify the total disk memory segments acquired by the File System during the log period.
Note: The FileXXxAcqs columns are the only columns counted as cache hits, where XXx represents one of the following entries: APt, PCi, PDb, SCi, SDb, TJt.

FileAPtAcqs (count, FLOAT)
Total number of append table or permanent journal table data block or cylinder index disk segments that were logically acquired.

FilePCiAcqs (count, FLOAT)
Total number of permanent cylinder index disk segments that were logically acquired.

FilePDbAcqs (count, FLOAT)
Total number of permanent data block disk segments that were logically acquired.

FileSCiAcqs (count, FLOAT)
Total number of regular or restartable spool cylinder index disk segments that were logically acquired.

FileSDbAcqs (count, FLOAT)
Total number of regular or restartable spool data block disk segments that were logically acquired.

FileTJtAcqs (count, FLOAT)
Total number of transient journal or WAL data block or WAL cylinder index disk segments that were logically acquired.

FileAPtAcqKB (count, FLOAT)
Total KB logically acquired by FileAPtAcqs.

FilePCiAcqKB (count, FLOAT)
Total KB logically acquired by FilePCiAcqs.

FilePDbAcqKB (count, FLOAT)
Total KB logically acquired by FilePDbAcqs.

FileSCiAcqKB (count, FLOAT)
Total KB logically acquired by FileSCiAcqs.

FileSDbAcqKB (count, FLOAT)
Total KB logically acquired by FileSDbAcqs.

FileTJtAcqKB (count, FLOAT)
Total KB logically acquired by FileTJtAcqs.

FileAPtAcqReads (count, FLOAT)
Number of append table or permanent journal table data block or cylinder index disk segment acquires that caused a physical read.

FilePCiAcqReads (count, FLOAT)
Number of permanent cylinder index disk segment acquires that caused a physical read.

FilePDbAcqReads (count, FLOAT)
Number of permanent data block disk segment acquires that caused a physical read.

FileSCiAcqReads (count, FLOAT)
Number of regular or restartable spool cylinder index disk segment acquires that caused a physical read.

FileSDbAcqReads (count, FLOAT)
Number of regular or restartable spool data block disk segment acquires that caused a physical read.

FileTJtAcqReads (count, FLOAT)
Number of transient journal or WAL data block or WAL cylinder index disk segment acquires that caused a physical read.

FileAPtAcqReadKB (count, FLOAT)
Total KB physically read by FileAPtAcqReads.

FilePCiAcqReadKB (count, FLOAT)
Total KB physically read by FilePCiAcqReads.

FilePDbAcqReadKB (count, FLOAT)
Total KB physically read by FilePDbAcqReads.

FileSCiAcqReadKB (count, FLOAT)
Total KB physically read by FileSCiAcqReads.

FileSDbAcqReadKB (count, FLOAT)
Total KB physically read by FileSDbAcqReads.

FileTJtAcqReadKB (count, FLOAT)
Total KB physically read by FileTJtAcqReads.

FileAPtAcqsOther (count, FLOAT)
Total number of append table or permanent journal table data block or cylinder index scratch disk segments that were logically acquired.

FilePCiAcqsOther (count, FLOAT)
Total number of permanent cylinder index scratch disk segments that were logically acquired.

FilePDbAcqsOther (count, FLOAT)
Total number of permanent data block scratch disk segments that were logically acquired.

FileSCiAcqsOther (count, FLOAT)
Total number of regular or restartable spool cylinder index scratch disk segments that were logically acquired.

FileSDbAcqsOther (count, FLOAT)
Total number of regular or restartable spool data block scratch disk segments that were logically acquired.

FileTJtAcqsOther (count, FLOAT)
Total number of transient journal or WAL data block or WAL cylinder index scratch disk segments that were logically acquired.

FileAPtAcqOtherKB (count, FLOAT)
Total KB of append table or permanent journal table data block or cylinder index scratch disk segments acquired.

FilePCiAcqOtherKB (count, FLOAT)
Total KB of permanent cylinder index scratch disk segments acquired.

FilePDbAcqOtherKB (count, FLOAT)
Total KB of permanent data block scratch disk segments acquired.

FileSCiAcqOtherKB (count, FLOAT)
Total KB of regular or restartable spool cylinder index scratch disk segments acquired.

FileSDbAcqOtherKB (count, FLOAT)
Total KB of regular or restartable spool data block scratch disk segments acquired.

FileTJtAcqOtherKB (count, FLOAT)
Total KB of transient journal or WAL data block or WAL cylinder index scratch disk segments acquired.
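Because the FileXXxAcqs columns count logical acquires (the note above identifies them as the cache-hit counters) and the FileXXxAcqReads columns count only those acquires that required a physical read, an FSG cache hit ratio can be derived from the difference. A Python sketch of the arithmetic; the values are invented sample data.

```python
# Derive a cache hit ratio for one segment class (here, permanent data
# blocks) from FilePDbAcqs (logical acquires) and FilePDbAcqReads
# (acquires that caused a physical read). Values are invented.

def cache_hit_ratio(acqs, acq_reads):
    if acqs == 0:
        return 0.0
    return (acqs - acq_reads) / acqs

ratio = cache_hit_ratio(acqs=1000.0, acq_reads=250.0)
# (1000 - 250) / 1000 = 0.75, that is, a 75% hit rate
```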

Data Block Prefetch Columns

These columns identify File Segment Prefetch activities.
Note: A prefetch is either a cylinder read operation or an individual block read operation. Either of these operations is generically called a prefetch.

FileAPtPres (count, FLOAT)
Total number of append table or permanent journal table data block or cylinder index disk segments prefetched.

FilePCiPres (count, FLOAT)
Total number of permanent cylinder index disk segments prefetched.

FilePDbPres (count, FLOAT)
Total number of permanent data block disk segments prefetched.

FileSCiPres (count, FLOAT)
Total number of regular or restartable spool cylinder index disk segments prefetched.

FileSDbPres (count, FLOAT)
Total number of regular or restartable spool data block disk segments prefetched.

FileTJtPres (count, FLOAT)
Total number of transient journal table disk segments prefetched.

FileAPtPresKB (count, FLOAT)
Total number of KB prefetched by FileAPtPres.

FilePCiPresKB (count, FLOAT)
Total number of KB prefetched by FilePCiPres.

FilePDbPresKB (count, FLOAT)
Total number of KB prefetched by FilePDbPres.

FileSCiPresKB (count, FLOAT)
Total number of KB prefetched by FileSCiPres.

FileSDbPresKB (count, FLOAT)
Total number of KB prefetched by FileSDbPres.

FileTJtPresKB (count, FLOAT)
Total number of KB prefetched by FileTJtPres.

FileAPtPreReads (count, FLOAT)
Total number of append table or permanent journal table data block or cylinder index disk segment prefetches that caused a physical read.

FilePCiPreReads (count, FLOAT)
Total number of permanent cylinder index disk segment prefetches that caused a physical read.

FilePDbPreReads (count, FLOAT)
Total number of permanent data block disk segment prefetches that caused a physical read.

FileSCiPreReads (count, FLOAT)
Total number of regular or restartable spool cylinder index disk segment prefetches that caused a physical read.

FileSDbPreReads (count, FLOAT)
Total number of regular or restartable spool data block disk segment prefetches that caused a physical read.

FileTJtPreReads (count, FLOAT)
Total number of transient journal table disk segment prefetches that caused a physical read.

FileAPtPreReadKB (count, FLOAT)
Total number of KB physically read by FileAPtPreReads.

FilePCiPreReadKB (count, FLOAT)
Total number of KB physically read by FilePCiPreReads.

FilePDbPreReadKB (count, FLOAT)
Total number of KB physically read by FilePDbPreReads.

FileSCiPreReadKB (count, FLOAT)
Total number of KB physically read by FileSCiPreReads.

FileSDbPreReadKB (count, FLOAT)
Total number of KB physically read by FileSDbPreReads.

FileTJtPreReadKB (count, FLOAT)
Total number of KB physically read by FileTJtPreReads.

Segments Released Columns

These columns identify the total disk memory segments released by the File System, as well as those segments that are dropped from memory during the log period. When a segment is released, the segment either:
- Is forced (F)
- Remains resident in memory (R)
- Is aged out of memory (A), from segments that are currently resident

Both the number of segments (Rels, Writes, Drps) and the size of the segments (RelKB, WriteKB, DrpKB) are counted. When a segment leaves memory, it must be written to disk only if the segment is dirty (Dy), that is, modified. Otherwise, the clean (Cn), that is, unmodified, segment is simply dropped.

Most spool blocks for a small table remain resident when they are created and age there. Each of these is counted as a dirty resident release (DyRRel columns). If a block survives in the cache, it is reacquired (whenever the system creates spool data, a subsequent step will read it) and released again. The release is still counted as a dirty resident release, since the block survived in a modified state. On the other hand, if there is contention for room in the FSG cache, the segment might be removed from memory. Because it is a modified segment, it must be written out first. This is counted as a dirty age write (DyAWrite columns). When it is reacquired, it is no longer modified, so the subsequent release is counted as a clean resident release (CnRRel columns).

If the segments are modified, only the DyAWrites, DyAWriteKB, CnADrps, and CnADrpKB columns are incremented. If the segments are unmodified, only the CnADrps and CnADrpKB columns are incremented.

Note: To determine the clean segments that aged out of memory for the CnADrps column, subtract the DyAWrites value from the CnADrps value. To determine the clean KB that aged out of memory for the CnADrpKB column, subtract the DyAWriteKB value from the CnADrpKB value.

Full table modification operations make one pass on the table and modify each block only once. Since these operations do not access a block multiple times, there is no point in keeping the blocks in the cache. If a block that was examined did not contain any rows that qualify for the modification, it is dropped from memory immediately when it is released (FDrp columns). However, if the block was modified, the system issues the write as part of the release, so it is counted as a forced write (FWrite columns). Since the system also drops the block from memory as soon as the write is complete, this release is also counted as a forced drop (FDrp columns).
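The note above derives clean age-outs by subtracting DyAWrites from CnADrps (and DyAWriteKB from CnADrpKB). A small Python sketch of that subtraction; the values are invented sample data.

```python
# Derive the clean (unmodified) segments that aged out of memory, per
# the note: CnADrps counts all aged-out segments, while DyAWrites
# counts the dirty ones that had to be written first. Values are
# invented sample data.

def clean_age_outs(cn_adrps, dy_awrites, cn_adrp_kb, dy_awrite_kb):
    return {
        "clean_drops": cn_adrps - dy_awrites,
        "clean_drop_kb": cn_adrp_kb - dy_awrite_kb,
    }

out = clean_age_outs(cn_adrps=500.0, dy_awrites=120.0,
                     cn_adrp_kb=64000.0, dy_awrite_kb=15360.0)
# 500 - 120 = 380 clean aged-out segments; 64000 - 15360 = 48640 KB
```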
In the column lists that follow, each entry is shown as: Column Name (mode, data type): description.

FileAPtDyRRels (count, FLOAT): Number of dirty append table or permanent journal table data block or cylinder index disk segment resident releases.
FilePCiDyRRels (count, FLOAT): Number of dirty permanent cylinder index disk segment resident releases.
FilePDbDyRRels (count, FLOAT): Number of dirty permanent data block disk segment resident releases.
FileSCiDyRRels (count, FLOAT): Number of dirty regular or restartable spool cylinder index disk segment resident releases.
FileSDbDyRRels (count, FLOAT): Number of dirty regular or restartable spool data block disk segment resident releases.
FileTJtDyRRels (count, FLOAT): Number of dirty transient journal table or WAL data block or WAL cylinder index disk segment resident releases.
FileAPtDyRRelsKB (count, FLOAT): KB released by FileAPtDyRRels.
FilePCiDyRRelsKB (count, FLOAT): KB released by FilePCiDyRRels.
FilePDbDyRRelsKB (count, FLOAT): KB released by FilePDbDyRRels.
FileSCiDyRRelsKB (count, FLOAT): KB released by FileSCiDyRRels.
FileSDbDyRRelsKB (count, FLOAT): KB released by FileSDbDyRRels.


FileTJtDyRRelsKB (count, FLOAT): KB released by FileTJtDyRRels.
FileAPtFWrites (count, FLOAT): Number of append table or permanent journal table data block or cylinder index disk segment forced releases or specific I/O requests causing an immediate physical write.
FilePCiFWrites (count, FLOAT): Number of permanent cylinder index disk segment forced releases or specific I/O requests causing an immediate physical write.
FilePDbFWrites (count, FLOAT): Number of permanent data block disk segment forced releases or specific I/O requests causing an immediate physical write.
FileSCiFWrites (count, FLOAT): Number of regular or restartable spool cylinder index disk segment forced releases or specific I/O requests causing an immediate physical write.
FileSDbFWrites (count, FLOAT): Number of regular or restartable spool data block disk segment forced releases or specific I/O requests causing an immediate physical write.
FileTJtFWrites (count, FLOAT): Number of transient journal table or WAL data block or WAL cylinder index disk segment forced releases or specific I/O requests causing an immediate physical write.
FileAPtFWriteKB (count, FLOAT): KB written by FileAPtFWrites.
FilePCiFWriteKB (count, FLOAT): KB written by FilePCiFWrites.
FilePDbFWriteKB (count, FLOAT): KB written by FilePDbFWrites.
FileSCiFWriteKB (count, FLOAT): KB written by FileSCiFWrites.
FileSDbFWriteKB (count, FLOAT): KB written by FileSDbFWrites.
FileTJtFWriteKB (count, FLOAT): KB written by FileTJtFWrites.

FileAPtDyAWrites (count, FLOAT): Number of dirty append table or permanent journal table data block or cylinder index disk segments aged out of memory causing a delayed physical write.
FilePCiDyAWrites (count, FLOAT): Number of dirty permanent cylinder index disk segments aged out of memory causing a delayed physical write.
FilePDbDyAWrites (count, FLOAT): Number of dirty permanent data block disk segments aged out of memory causing a delayed physical write.
FileSCiDyAWrites (count, FLOAT): Number of dirty regular or restartable spool cylinder index disk segments aged out of memory causing a delayed physical write.
FileSDbDyAWrites (count, FLOAT): Number of dirty regular or restartable spool data block disk segments aged out of memory causing a delayed physical write.
FileTJtDyAWrites (count, FLOAT): Number of dirty transient journal table or WAL data block or WAL cylinder index disk segments aged out of memory causing a delayed physical write.
FileAPtDyAWriteKB (count, FLOAT): KB written by FileAPtDyAWrites.


FilePCiDyAWriteKB (count, FLOAT): KB written by FilePCiDyAWrites.
FilePDbDyAWriteKB (count, FLOAT): KB written by FilePDbDyAWrites.
FileSCiDyAWriteKB (count, FLOAT): KB written by FileSCiDyAWrites.
FileSDbDyAWriteKB (count, FLOAT): KB written by FileSDbDyAWrites.
FileTJtDyAWriteKB (count, FLOAT): KB written by FileTJtDyAWrites.
FileAPtCnRRels (count, FLOAT): Number of clean append table or permanent journal table data block or cylinder index disk segment resident releases.
FilePCiCnRRels (count, FLOAT): Number of clean permanent cylinder index disk segment resident releases.
FilePDbCnRRels (count, FLOAT): Number of clean permanent data block disk segment resident releases.
FileSCiCnRRels (count, FLOAT): Number of clean regular or restartable spool cylinder index disk segment resident releases.
FileSDbCnRRels (count, FLOAT): Number of clean regular or restartable spool data block disk segment resident releases.
FileTJtCnRRels (count, FLOAT): Number of clean transient journal table or WAL data block or WAL cylinder index disk segment resident releases.
FileAPtCnRRelKB (count, FLOAT): KB released by FileAPtCnRRels.
FilePCiCnRRelKB (count, FLOAT): KB released by FilePCiCnRRels.
FilePDbCnRRelKB (count, FLOAT): KB released by FilePDbCnRRels.
FileSCiCnRRelKB (count, FLOAT): KB released by FileSCiCnRRels.
FileSDbCnRRelKB (count, FLOAT): KB released by FileSDbCnRRels.
FileTJtCnRRelKB (count, FLOAT): KB released by FileTJtCnRRels.

FileAPtFDrps (count, FLOAT): Number of append table or permanent journal table data block or cylinder index disk segment forced releases causing an immediate memory drop.
FilePCiFDrps (count, FLOAT): Number of permanent cylinder index disk segment forced releases causing an immediate memory drop.
FilePDbFDrps (count, FLOAT): Number of permanent data block disk segment forced releases causing an immediate memory drop.
FileSCiFDrps (count, FLOAT): Number of regular or restartable spool cylinder index disk segment forced releases causing an immediate memory drop.
FileSDbFDrps (count, FLOAT): Number of regular or restartable spool data block disk segment forced releases causing an immediate memory drop.
FileTJtFDrps (count, FLOAT): Number of transient journal table or WAL data block or WAL cylinder index disk segment forced releases causing an immediate memory drop.


FileAPtFDrpKB (count, FLOAT): KB dropped by FileAPtFDrps.
FilePCiFDrpKB (count, FLOAT): KB dropped by FilePCiFDrps.
FilePDbFDrpKB (count, FLOAT): KB dropped by FilePDbFDrps.
FileSCiFDrpKB (count, FLOAT): KB dropped by FileSCiFDrps.
FileSDbFDrpKB (count, FLOAT): KB dropped by FileSDbFDrps.
FileTJtFDrpKB (count, FLOAT): KB dropped by FileTJtFDrps.
FileAPtCnADrps (count, FLOAT): Number of clean append table or permanent journal table data block or cylinder index disk segments aged out of memory.
FilePCiCnADrps (count, FLOAT): Number of clean permanent cylinder index disk segments aged out of memory.
FilePDbCnADrps (count, FLOAT): Number of clean permanent data block disk segments aged out of memory.
FileSCiCnADrps (count, FLOAT): Number of clean regular or restartable spool cylinder index disk segments aged out of memory.
FileSDbCnADrps (count, FLOAT): Number of clean regular or restartable spool data block disk segments aged out of memory.
FileTJtCnADrps (count, FLOAT): Number of clean transient journal table or WAL data block or WAL cylinder index disk segments aged out of memory.
FileAPtCnADrpKB (count, FLOAT): KB dropped by FileAPtCnADrps.
FilePCiCnADrpKB (count, FLOAT): KB dropped by FilePCiCnADrps.
FilePDbCnADrpKB (count, FLOAT): KB dropped by FilePDbCnADrps.
FileSCiCnADrpKB (count, FLOAT): KB dropped by FileSCiCnADrps.
FileSDbCnADrpKB (count, FLOAT): KB dropped by FileSDbCnADrps.
FileTJtCnADrpKB (count, FLOAT): KB dropped by FileTJtCnADrps.
FileAPtRelsOther (count, FLOAT): Total number of scratch append table or permanent journal table data block or cylinder index disk segments released as part of their deletion.
FilePCiRelsOther (count, FLOAT): Total number of scratch permanent cylinder index disk segments released as part of their deletion.
FilePDbRelsOther (count, FLOAT): Total number of scratch permanent data block disk segments released as part of their deletion.
FileSCiRelsOther (count, FLOAT): Total number of scratch regular or restartable spool cylinder index disk segments released as part of their deletion.
FileSDbRelsOther (count, FLOAT): Total number of scratch regular or restartable spool data block disk segments released as part of their deletion.


FileTJtRelsOther (count, FLOAT): Total number of scratch transient journal table or WAL data block or WAL cylinder index disk segments released as part of their deletion.
FileAPtRelOtherKB (count, FLOAT): Total KB of scratch append table or permanent journal table data block or cylinder index disk segments released.
FilePCiRelOtherKB (count, FLOAT): Total KB of scratch permanent cylinder index disk segments released.
FilePDbRelOtherKB (count, FLOAT): Total KB of scratch permanent data block disk segments released.
FileSCiRelOtherKB (count, FLOAT): Total KB of scratch regular or restartable spool cylinder index disk segments released.
FileSDbRelOtherKB (count, FLOAT): Total KB of scratch regular or restartable spool data block disk segments released.
FileTJtRelOtherKB (count, FLOAT): Total KB of scratch transient journal table or WAL data block or WAL cylinder index disk segments released.

Data Segment Lock Requests Columns

These columns identify the number of lock requests, blocks, and deadlocks on a disk segment, including those implied for segment acquires.

FileLockBlocks (count, FLOAT): Number of lock requests that were blocked. Subtracting this value from the total number of lock requests gives the number of locks granted immediately: Total locks - Locks blocked = Locks with immediate grants.
FileLockDeadlocks (count, FLOAT): Number of deadlocks detected on lock requests.
FileLockEnters (count, FLOAT): Number of lock requests on disk segments.
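Taking FileLockEnters as the total number of lock requests, the formula in the FileLockBlocks description can be evaluated as in this small sketch (sample values are illustrative):

```python
def immediate_grants(file_lock_enters: float, file_lock_blocks: float) -> float:
    """Locks granted immediately: total lock requests minus blocked requests."""
    return file_lock_enters - file_lock_blocks


print(immediate_grants(1000.0, 37.0))  # 963.0
```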

Cylinder Management Overhead Events Columns

These columns identify the number of times the File System software performed a cylinder management event. The ResUsageIvpr table further breaks down the I/Os associated with these events. For more information, see Appendix C: ResUsageIvpr Table.

FileCylAllocs (count, FLOAT): Number of new cylinders allocated.
FileCylDefrags (count, FLOAT): Number of cylinder defragments performed.
FileCylFrees (count, FLOAT): Number of logical or physical cylinders freed.
FileCylMigrs (count, FLOAT): Number of cylinder migrations.

FileMCylPacks (count, FLOAT): Number of MiniCylPack operations performed.

Write Ahead Logging Columns

These columns identify the number of times the File System software performed a cylinder management event associated with the WAL log.

FileWCylAllocs (count, FLOAT): Number of new WAL cylinders allocated.
FileWCylFrees (count, FLOAT): Number of times the File System logically frees a cylinder.

FSG I/O Column

This column identifies the I/O statistics reported from the FSG.

IoRespMax (max, FLOAT): Maximum I/O response time in milliseconds on an AMP.

FSG Cache Wait Columns

FSGCacheWaits (count, FLOAT): Number of times the File System waits for memory to become available in the file segment cache when trying to read data from disk.
FSGCacheWaitTime (count, FLOAT): Total amount of time the File System waits for memory to become available in the file segment cache when trying to read data from disk.
FSGCacheWaitTimeMax (max, FLOAT): Maximum amount of time the File System waits for memory to become available in the file segment cache when trying to read data from disk.
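A common derived metric is the average wait per occurrence, dividing FSGCacheWaitTime by FSGCacheWaits. This is a sketch, not something the manual prescribes; the result is in whatever unit FSGCacheWaitTime is logged in, which this section does not specify:

```python
def avg_fsg_cache_wait(wait_time_total: float, waits: float) -> float:
    """Average File System wait per FSG cache wait event; returns 0.0
    when no waits occurred in the logging interval."""
    if waits == 0:
        return 0.0
    return wait_time_total / waits


# Illustrative values only.
print(avg_fsg_cache_wait(500.0, 25.0))  # 20.0
print(avg_fsg_cache_wait(0.0, 0.0))     # 0.0
```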

MI Columns

These columns identify lock activity on the master index (MI).

MIWriteLocks (count, FLOAT): Number of write locks acquired on an MI.
MIWriteLockTime (count, FLOAT): Total MI write lock hold time in milliseconds.
MIWriteLockTimeMax (max, FLOAT): Maximum MI write lock hold time in milliseconds.
MIWrites (count, FLOAT): Number of no modification (NOMOD) write locks acquired on an MI.
MIWriteTime (count, FLOAT): Total MI NOMOD write lock hold time in milliseconds.
MIWriteTimeMax (max, FLOAT): Maximum MI NOMOD write lock hold time in milliseconds.
MISleeps (count, FLOAT): Number of waits to get a lock on the MI.
MISleepTime (count, FLOAT): Total amount of time waited to get a lock on the MI.
MISleepTimeMax (max, FLOAT): Maximum amount of time waited to get a lock on the MI.

Data Block Merge Columns

These columns track data block merge activity, in which small data blocks are merged into one large block so that subsequent operations require fewer I/Os.

DBMergeDone (count, FLOAT): Number of successful data block merges done; that is, the number of times the data block being modified successfully merged with some number of adjacent data blocks as part of the modification. Subtracting DBMergeDone from DBMergeTried gives the number of data block merge operations that were tried and failed.
DBMergeElim (count, FLOAT): Number of data blocks eliminated due to data block merges. If n data blocks are merged into one large block, this number is incremented by n-1.
DBMergeExtraIO (count, FLOAT): Number of additional physical I/Os performed in the data block merge process over and above those done if no data block merges were attempted. This includes any extra physical I/Os that were performed regardless of whether a particular merge attempt succeeded.
DBMergeTried (count, FLOAT): Number of times the data block being modified tried to merge with adjacent data blocks as part of the modification.
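The two derived figures mentioned in the descriptions can be computed directly; a small sketch with illustrative numbers:

```python
def failed_merges(db_merge_tried: float, db_merge_done: float) -> float:
    """Merge operations that were tried and failed (DBMergeTried - DBMergeDone)."""
    return db_merge_tried - db_merge_done


def elim_increment(n_blocks_merged: int) -> int:
    """DBMergeElim increment when n data blocks are merged into one block."""
    return n_blocks_merged - 1


print(failed_merges(50.0, 42.0))  # 8.0
print(elim_increment(4))          # 3
```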

AutoCylPack Columns

These columns track the number of cylinders processed by AutoCylPack, a background task that runs periodically to maintain the set levels of free space percent (FSP) on table cylinders. For more information about AutoCylPack, see Database Administration.

FileACPCylsMigr (count, FLOAT): Number of successful migrations performed by AutoCylPack.
FileACPCylsPostponed (count, FLOAT): Number of cylinders AutoCylPack selected for migration, but could not process at the time. This can happen due to conflicts with foreground tasks modifying the cylinder at around the same time as AutoCylPack. AutoCylPack therefore postpones the work until the next time it scans the MI from the beginning. When AutoCylPack comes across the cylinder again, and it still qualifies, the cylinder is selected again for processing.
FileACPCylsSkipped (count, FLOAT): Number of cylinders AutoCylPack skipped when scanning the MI because nothing needed to be done.
FileACPCylsUnFSEOnly (count, FLOAT): Number of cylinders AutoCylPack selected for migration, but could not process at the time because of a locking conflict or a cylinder being modified or marked down. Instead, AutoCylPack cleans up the unfree sector entries (UnFSEs) on all cylinders, except for those that are down.

BLC Columns

These columns collect various BLC statistics for use in performance analysis and debugging. BLC enables data compression at the data block (DB) level of the Teradata Database file system. Compression reduces the amount of storage required for a given amount of data. The BlockLevelCompression field of DBS Control enables and disables BLC.
For more information on BLC and compression-related DBS Control settings, see Utilities.
Note: You must enable BLC to collect statistics from the columns below or zero will be returned for each column.

FileCompDBs (count, FLOAT): Total number of data blocks compressed.
FileCompCPU (count, FLOAT): Total compression time, including any overhead. The column measures, in nanoseconds, the time from the beginning of the compression operation to the end. Note: This column is valid on SLES 11 or later systems only.
FilePostCompMB (count, FLOAT): Total number of MBs of compressed data that result from uncompressed blocks being compressed. This column is used together with the FilePreCompMB column to calculate the compression ratio.
FilePreCompMB (count, FLOAT): Total number of MBs of the data blocks to be compressed, measured before any compression starts. This column is used together with the FilePostCompMB column to calculate the compression ratio.


FileCompCylMigrs (count, FLOAT): Number of cylinder migrates done as part of a Ferret COMPRESS command or by the AutoTempComp background task. The AutoTempComp background task is responsible for finding the cylinders that should be compressed or decompressed based on their temperature. For more information on this background task, see Database Design. For more information on the Ferret utility COMPRESS command, see Utilities.
FileCompFerretDBs (count, FLOAT): Number of blocks compressed as a result of the compression operation called from the Ferret utility (see Utilities).
FileCompTempDBs (count, FLOAT): Number of data blocks compressed by the AutoTempComp background task because they were colder than the DBS Control TempBLCThresh field setting. See Database Design for more information on the DBS Control TempBLCThresh field and the AutoTempComp background task. Note: You must enable the temperature-based block-level compression (TBBLC) feature when using this column or it will return zero. To determine if TBBLC is enabled, see EnableTempBLC in Utilities.
FilePreUncompMB (count, FLOAT): Total number of MBs of the compressed data blocks to be uncompressed, measured before uncompression starts. This column is used together with the FilePostUncompMB column to calculate the uncompression ratio.
FilePostUncompMB (count, FLOAT): Total number of MBs of uncompressed data that result from compressed blocks being uncompressed. This column is used together with the FilePreUncompMB column to calculate the uncompression ratio.
FileTempCPU (count, FLOAT): CPU time spent by the AutoTempComp background task either compressing or uncompressing data, including any overhead. For more information on the AutoTempComp background task, see Utilities. Note: You must enable the TBBLC feature when using this column or it will return zero. To determine if TBBLC is enabled, see EnableTempBLC in Utilities. Any time counted against this column is not counted in either the FileCompCPU or FileUncompCPU column. This column is valid on SLES 11 or later systems only.
FileUncompCPU (count, FLOAT): Total uncompression time, including any overhead. The column measures, in nanoseconds, the time from the beginning of the uncompression operation to the end. Note: This column is valid on SLES 11 or later systems only.
FileUnCompCylMigrs (count, FLOAT): Number of cylinder migrates done as part of a Ferret UNCOMPRESS command or by the AutoTempComp background task. For more information on the Ferret utility UNCOMPRESS command, or the AutoTempComp background task, see Utilities.
FileUncompDBs (count, FLOAT): Total number of data blocks uncompressed.
FileUncompFerretDBs (count, FLOAT): Number of blocks uncompressed as a result of the uncompression operation called from the Ferret utility (see Utilities).
FileUnCompTempDBs (count, FLOAT): Number of data blocks uncompressed by the AutoTempComp background task because they were warmer than the DBS Control TempBLCThresh field setting. See Database Design for more information on the DBS Control TempBLCThresh field and the AutoTempComp background task. Note: You must enable the TBBLC feature when using this column or it will return zero. To determine if TBBLC is enabled, see EnableTempBLC in Utilities.
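The manual says the pre/post MB columns are used together to calculate the compression and uncompression ratios, without fixing an exact formula; the pre/post quotient below is one plausible convention and should be treated as an assumption, not the documented definition:

```python
def size_ratio(pre_mb: float, post_mb: float) -> float:
    """Pre-operation size divided by post-operation size. For compression
    (FilePreCompMB / FilePostCompMB), a value above 1.0 means the data
    shrank. Returns 0.0 when nothing was processed in the interval."""
    if post_mb == 0:
        return 0.0
    return pre_mb / post_mb


# Illustrative: 100 MB compressed down to 40 MB gives a 2.5x ratio.
print(size_ratio(100.0, 40.0))  # 2.5
```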

General Concurrency Control Database Locks Columns

These columns identify database locking activities.

DBLockBlocks (count, FLOAT): Number of times a database lock was blocked.
DBLockEnters (count, FLOAT): Number of times a database lock was entered for holding.
DBLockDeadlocks (count, FLOAT): Number of times a database lock was deadlocked.

Memory Columns

Memory Resident Columns

These columns represent the amount of memory resident specific to virtual processor activities, subdivided into memory types. The columns do not include any memory allocations specific to the node the vproc is running under.

Disk memory segments can be in one of four states:
- Clean (unmodified) and Unaccessed by any process (CU)
- Dirty (modified) and Unaccessed (DU)
- Clean and Accessed (CA)
- Dirty and Accessed (DA)

Permanent segments for an entire table can be user-locked-in to memory. These are called frozen segments (Frz), and no state subdivision is necessary because they cannot be aged or forced out of memory.

Note: None of the memory resident columns below is currently valid. They should not be used.

MemAPtKBResCA (track, FLOAT): Current KB resident in memory for append table or permanent journal table data block or cylinder index disk segments that are currently clean and accessed.
MemAPtKBResCU (track, FLOAT): Current KB resident in memory for append table or permanent journal table data block or cylinder index disk segments that are currently clean and not accessed.
MemAPtKBResDA (track, FLOAT): Current KB resident in memory for append table or permanent journal table data block or cylinder index disk segments that are currently dirty and accessed.
MemAPtKBResDU (track, FLOAT): Current KB resident in memory for append table or permanent journal table data block or cylinder index disk segments that are currently dirty and unaccessed.
MemPCiKBResCA (track, FLOAT): Current KB resident in memory for permanent cylinder index disk segments that are currently clean and accessed.
MemPCiKBResCU (track, FLOAT): Current KB resident in memory for permanent cylinder index disk segments that are currently clean and not accessed.
MemPCiKBResDA (track, FLOAT): Current KB resident in memory for permanent cylinder index disk segments that are currently dirty and accessed.
MemPCiKBResDU (track, FLOAT): Current KB resident in memory for permanent cylinder index disk segments that are currently dirty and unaccessed.
MemPDbKBResCA (track, FLOAT): Current KB resident in memory for permanent data block disk segments that are currently clean and accessed.
MemPDbKBResCU (track, FLOAT): Current KB resident in memory for permanent data block disk segments that are currently clean and not accessed.
MemPDbKBResDA (track, FLOAT): Current KB resident in memory for permanent data block disk segments that are currently dirty and accessed.
MemPDbKBResDU (track, FLOAT): Current KB resident in memory for permanent data block disk segments that are currently dirty and unaccessed.
MemSCiKBResCA (track, FLOAT): Current KB resident in memory for regular or restartable spool cylinder index disk segments that are currently clean and accessed.
MemSCiKBResCU (track, FLOAT): Current KB resident in memory for regular or restartable spool cylinder index disk segments that are currently clean and not accessed.
MemSCiKBResDA (track, FLOAT): Current KB resident in memory for regular or restartable spool cylinder index disk segments that are currently dirty and accessed.
MemSCiKBResDU (track, FLOAT): Current KB resident in memory for regular or restartable spool cylinder index disk segments that are currently dirty and unaccessed.
MemSDbKBResCA (track, FLOAT): Current KB resident in memory for regular or restartable spool data block disk segments that are currently clean and accessed.
MemSDbKBResCU (track, FLOAT): Current KB resident in memory for regular or restartable spool data block disk segments that are currently clean and not accessed.
MemSDbKBResDA (track, FLOAT): Current KB resident in memory for regular or restartable spool data block disk segments that are currently dirty and accessed.
MemSDbKBResDU (track, FLOAT): Current KB resident in memory for regular or restartable spool data block disk segments that are currently dirty and unaccessed.
MemTJtKBResCA (track, FLOAT): Current KB resident in memory for transient journal table or WAL data block or WAL cylinder index disk segments that are currently clean and accessed.
MemTJtKBResCU (track, FLOAT): Current KB resident in memory for transient journal table or WAL data block or WAL cylinder index disk segments that are currently clean and not accessed.
MemTJtKBResDA (track, FLOAT): Current KB resident in memory for transient journal table or WAL data block or WAL cylinder index disk segments that are currently dirty and accessed.
MemTJtKBResDU (track, FLOAT): Current KB resident in memory for transient journal table or WAL data block or WAL cylinder index disk segments that are currently dirty and unaccessed.

Memory Allocation Column

MemCtxtAllocs (count, FLOAT): Number of successful memory allocations and size-increasing memory alters on task context pages. Note: Only scratch pages are allocated. All other task context pages appear resident at some point soon after component restart.

Task Context Segment Usage Columns

These columns identify the usage of task context segments and how they leave memory.

MemCtxtAccesses (count, FLOAT): Number of segments accessed.
MemCtxtAccessKB (count, FLOAT): KB of segments accessed.
MemCtxtDeaccesses (count, FLOAT): Number of segments deaccessed. Deaccessed segments remain in memory until paged out through aging.
MemCtxtDeaccessKB (count, FLOAT): KB of segments deaccessed.
MemCtxtDestroyKB (count, FLOAT): KB of segments destroyed.
MemCtxtDestroys (count, FLOAT): Number of segments destroyed. Destroyed segments are immediately dropped from memory.


Net Columns

Point-to-Point Net Traffic Columns

These columns identify the number (Reads, Writes) and amount (ReadKB, WriteKB) of input and output messages passing through either Teradata Database net using point-to-point (1:1) methods (PtP).

NetPtPReadKB (count, FLOAT): Total KB of point-to-point messages input to the vproc.
NetPtPReads (count, FLOAT): Number of point-to-point messages input to the vproc.
NetPtPWriteKB (count, FLOAT): Total KB of point-to-point messages output from the vproc.
NetPtPWrites (count, FLOAT): Number of point-to-point messages output from the vproc.

Broadcast Net Traffic Columns

These columns identify the number (Reads, Writes) and amount (ReadKB, WriteKB) of input and output messages passing through the Teradata Database nets using broadcast (1:many) methods (Brd) per net.

NetBrdReadKB (count, FLOAT): Total KB of broadcast messages input to the vproc.
NetBrdReads (count, FLOAT): Number of broadcast messages input to the vproc.
NetBrdWriteKB (count, FLOAT): Total KB of broadcast messages output from the vproc.
NetBrdWrites (count, FLOAT): Number of broadcast messages output from the vproc.

Work Mailbox Queue Columns

These columns identify the virtual processor work mailbox queue length, where requested work awaits the allocation of a process to perform the work.

MsgWorkQLen (track, FLOAT): Total number of work requests waiting at the current time.
MsgWorkQLenMax (max, FLOAT): Maximum number of work requests waiting during each log interval. Unlike MsgWorkQLen, which is sampled at log time, this column tracks the maximum over the log period; MsgWorkQLen can therefore be zero even though MsgWorkQLenMax is non-zero for the same period.
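The difference between the two columns can be seen in a toy simulation: MsgWorkQLen is the queue length at the moment the row is logged, while MsgWorkQLenMax is the peak over the whole interval:

```python
def summarize_interval(queue_samples):
    """Return (value at log time, peak over the interval) for a list of
    queue-length samples taken during one log interval."""
    return queue_samples[-1], max(queue_samples)


current, peak = summarize_interval([0, 3, 7, 2, 0])
print(current, peak)  # 0 7  (current is zero even though the peak was 7)
```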


Process Scheduling Columns

ChnSignal Status Tracking Column

MsgChnLastDone (count, FLOAT): Number of last done events that occurred for this vproc. Note: The last AMP to finish an operation may send a last done broadcast message indicating the work is done for this step. This is used in tracking down the slowest AMP in the system. An AMP that has more last done messages than the others could be a bottleneck in system performance.

CPU Utilization Columns


These columns represent CPU activities associated with this virtual processor, subdivided into
48 partitions. Partition 0 is reserved for use by PDE processes. For more information on the
other partitions see Appendix E: Partition Assignments.
For definitions of user service and user execution, see Process Scheduling Columns in the
Chapter 5: ResUsageScpu Table.
Column Name

Mode

Description

Data Type

CPUUServPart00 CPUUServPart48

count

Time in centiseconds CPUs are busy in the partition doing user


service. This is the system level time spent on a process.

FLOAT

CPUUExecPart00 CPUUExecPart48

count

Time in centiseconds CPUs are busy in the partition doing user


execution. This is the user level time spent on a process.

FLOAT

Note: Some CPU times may not be reported to these fields if child
processes or threads are running on your system. For information
about the difference in CPU usage reported and possible cause, see
CPU Utilization Columns in Chapter 11: ResUsageSps.

Cylinder Read Columns


These columns represent the File System resource usage statistics. The Cylinder Read feature
uses these statistics for tracking performance and utilization.


Column Name

Mode

Description

Data Type

FileFcrRequests

count

Total number of requests for the File System to use Cylinder Read.

FLOAT

This column records the number of attempts to use Cylinder Read, independent of whether the request is issued to FSG. A request can be denied due to insufficient data blocks or because there is insufficient space in the FSG cache. Requests can also be denied at both the user and kernel level. Each of these items is counted in other FileFcr resource usage columns.
This column can be used in a number of calculations:
Requests issued to FSG =
FileFcrRequests - FileFcrDeniedUser
Successful Cylinder Reads =
FileFcrRequests - FileFcrDeniedUser - FileFcrDeniedKern
FileFcrRequestsAdaptive

count

Number of adaptive requests from File System.

FLOAT

This column records the number of requests for adaptive-style Cylinder Reads.
Note: This column is not currently used.
FileFcrBlocksDeniedCache

count

Number of data blocks contained in Cylinder Read requests rejected by the FSG subsystem due to insufficient cache.

FLOAT

This column records the number of data blocks that were part of attempts to use Cylinder Read that were denied by the FSG subsystem due to insufficient cache space; these requests therefore also incremented the FileFcrDeniedCache column.
FileFcrBlocksDeniedKern

count

Number of data blocks contained in Cylinder Read requests rejected by the FSG subsystem.

FLOAT

This column records the number of data blocks that were part of attempts to use Cylinder Read that were denied by the FSG subsystem; these requests therefore also incremented the FileFcrDeniedKern column.
FileFcrBlocksDeniedUser

count

Number of data blocks contained in Cylinder Read requests rejected by the File System.

FLOAT

This column records the number of data blocks that were part of
attempts to use Cylinder Read that were denied by the File
System.
FileFcrBlocksDeniedThreshKern

count

Number of data blocks contained in Cylinder Read requests rejected for threshold by the FSG subsystem.

FLOAT

This column records the number of data blocks that were part of attempts to use Cylinder Read that were denied by the FSG subsystem due to the number of blocks being below the threshold; these requests therefore also incremented the FileFcrDeniedThreshKern column.


FileFcrBlocksDeniedThreshUser

count

Number of data blocks contained in Cylinder Read requests rejected for threshold by the File System.

FLOAT

This column records the number of data blocks that were part of attempts to use Cylinder Read that were denied by the File System due to the number of blocks being below the threshold; these requests therefore also incremented the FileFcrDeniedThreshUser column.
FileFcrBlocksRead

count

Number of data blocks read in using Cylinder Read.

FLOAT

This column records the total number of data blocks read in by successful Cylinder Read operations.
The average number of data blocks in a successful Cylinder Read can be calculated as follows:
Average data blocks / Cylinder Read = FileFcrBlocksRead /
(FileFcrRequests - FileFcrDeniedUser - FileFcrDeniedKern)
FileFcrDeniedCache

count

Number of Cylinder Read requests denied by FSG due to insufficient cache.

FLOAT

This column records the number of Cylinder Read requests denied due to insufficient FSG cache space for a cylinder's worth of data.
FileFcrDeniedKern

count

Number of Cylinder Read requests denied by the FSG subsystem.

FLOAT

This column records the number of Cylinder Read requests issued to the FSG subsystem which, for any reason, have been
denied. A request can be denied due to insufficient data blocks
(for example, the FileFcrDeniedThreshKern column) or because
there is insufficient space in the FSG cache (for example, the
FileFcrDeniedCache column). The FSG subsystem can reject a
request containing insufficient data blocks that the File System
thought had enough blocks because the FSG subsystem reduces
the count by the number of data blocks that are already resident
in the cache.
FileFcrDeniedUser

count

Number of Cylinder Read requests denied by the File System.

FLOAT

This column records the number of Cylinder Read attempts denied by the File System. A request can be denied by the File System due to an insufficient number of data blocks being requested (see the FileFcrDeniedThreshUser column).


FileFcrDeniedThreshKern

count

Number of Cylinder Read requests denied by the FSG subsystem due to insufficient data blocks.

FLOAT

This column records the number of Cylinder Read requests denied due to the data block threshold criteria. There is a
minimum threshold of data blocks for an individual Cylinder
Read request. If the number of data blocks is below this
threshold, the overhead of the Cylinder Read operation is
considered too large and issuing individual data block reads is
considered more efficient. Therefore, the Cylinder Read request
is denied. FSG must reevaluate the threshold for a request that
the File System considered valid since FSG eliminates any data
blocks from the request list that are already resident in the cache.
This could reduce the count that the File System thought was
above the threshold to one that is now below.
FileFcrDeniedThreshUser

count

Number of Cylinder Read requests denied by the File System due to insufficient data blocks.

FLOAT

This column records the number of Cylinder Read requests denied due to the data block threshold criteria. There is a
minimum threshold of data blocks for an individual Cylinder
Read request. If the number of data blocks is below this
threshold, the overhead of the Cylinder Read operation is
considered too large and issuing individual data block reads is
considered more efficient. Therefore, the Cylinder Read request
is denied.
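The request and denial counters above combine into the derived measures the column descriptions mention. The following query is an illustrative sketch only, not taken from the manual: it assumes ResUsageSvpr logging is enabled, the output alias names are hypothetical, and the Teradata NULLIFZERO function guards against dividing by zero on intervals with no successful Cylinder Reads.

```sql
SELECT TheDate,
       NodeID,
       VprId,
       /* Requests issued to FSG = FileFcrRequests - FileFcrDeniedUser */
       FileFcrRequests - FileFcrDeniedUser AS RequestsIssuedToFSG,
       /* Successful Cylinder Reads = requests minus user and kernel denials */
       FileFcrRequests - FileFcrDeniedUser
                       - FileFcrDeniedKern AS SuccessfulCylReads,
       /* Average data blocks per successful Cylinder Read */
       FileFcrBlocksRead /
           NULLIFZERO(FileFcrRequests - FileFcrDeniedUser
                                      - FileFcrDeniedKern)
                                           AS AvgBlocksPerCylRead
FROM DBC.ResUsageSvpr
WHERE VprType LIKE 'AMP%';
```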

PE and AMP UDF CPU Columns


These columns report the system level and user execution UDF CPU time value under the
AMP and PE vprocs. They also provide information about whether the UDFs were doing
work for the AMP or PE vprocs.
The data reported by CPUUExecPart00 for the NODE vproc includes the UDF CPU time for
all UDFs running on the node.
All UDFs are invoked by either a PE or an AMP, but PDE reports UDF CPU usage to the Node
partition by design, not the associated AMP and PE partitions. Therefore, the
CPUUExecPart00 and CPUUServPart00 columns will report the UDF CPU usage and the
CPUUServOrExecPart11 and 13 will not, where ServOrExec is either the user service (Serv) or
user execution (Exec) partition. RSS code reports the UDF CPU usage by AMP and PE in the
following UDF CPU columns:

UDFAMPExec

UDFAMPServ

UDFPEExec

UDFPEServ

Note: The total of these UDF CPU columns can exceed 100% of elapsed centiseconds due to
the CPU data gathering operation. The UDF CPU reporting for the ResUsageSvpr table is
reported when the UDF completes an operation. A long running UDF will report all the CPU


time at one time and that may be significantly larger than the current reporting period. This
can cause spikes in the UDF CPU reporting columns.
The UDF CPU time value over multiple periods averages 100% or less.
Note: When an external routine (such as C, C++, Java UDF, or external stored procedure)
forks a child process or thread, the CPU time is not reported to these fields. As a result, the
resource usage table shows a lower CPU usage than shown in the ResUsageSpma table even if
the external routine consumes a large amount of CPU time. If there are child processes or
threads running on your system, this may account for the larger CPU times reported in the
ResUsageSpma table compared to this table. To confirm that the difference in CPU usage
reported by the ResUsageSpma table is caused by child processes or threads, contact your
Teradata Support Center personnel.
Column Name

Mode

Description

Data Type

UDFAMPServ

count

Reported system-level UDF CPU time value under the AMP vproc.

FLOAT

UDFAMPExec

count

Reported user execution UDF CPU time value under the AMP
vproc.

FLOAT

UDFPEServ

count

Reported system-level UDF CPU time value under the PE vproc.

FLOAT

UDFPEExec

count

Reported user execution UDF CPU time value under the PE vproc.

FLOAT
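As a sketch (not from the manual), the four columns above can be totaled per node and log interval to compare the UDF CPU time consumed under AMP versus PE vprocs. It assumes ResUsageSvpr logging is enabled; the alias names are illustrative.

```sql
SELECT TheDate,
       TheTime,
       NodeID,
       /* UDF CPU time charged to AMP vprocs (user execution + service) */
       SUM(UDFAMPExec + UDFAMPServ) AS AmpUdfCpu,
       /* UDF CPU time charged to PE vprocs (user execution + service) */
       SUM(UDFPEExec + UDFPEServ)   AS PeUdfCpu
FROM DBC.ResUsageSvpr
GROUP BY TheDate, TheTime, NodeID;
```

Because UDF CPU time is reported when the UDF completes, expect the per-interval values to spike for long-running UDFs, as described above.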

Process Block Count Columns


These columns identify how many times processes became blocked, broken down by blocking type.
Column Name

Mode

Description

Data Type

ProcBlksDBLock

count

Number of process blocks for database locks.

FLOAT

ProcPendDBLock

track

Number of process blocks pending database locks.

FLOAT

Process Pending Wait Time Columns


Column Name

Mode

Description

Data Type

ProcWaitDBLock

count

Total time in centiseconds processes were blocked pending database locks.

FLOAT

Teradata VS Columns
Allocation Columns
These columns identify the allocation statistics reported by the Allocator.


Column Name

Mode

Description

Data Type

AllocatorExtentAllocReqs

count

Number of cylinder allocation requests received by the allocator.

FLOAT

AllocatorExtentFreeReqs

count

Number of cylinder free requests received by the allocator.

FLOAT

AllocatorMapIOsStarted

count

Number of map I/Os initiated by the allocator.

FLOAT

AllocatorMapIOsDone

count

Number of map I/Os completed by the allocator.

FLOAT

Extent Driver I/O Columns


These columns identify the I/O statistics reported from the extent driver.

Column Name

Mode

Description

Data
Type

ReadsCold

count

Total number of reads issued to all cylinders that are considered COLD.

FLOAT

A cylinder is considered to be COLD if the response time of its associated physical storage is less than the TVS Percentile Cold Lower Bound configuration setting.
Note: This column is not currently valid. It should not be used.
ReadsHot

count

Total number of reads issued to all cylinders that are considered HOT.

FLOAT

A cylinder is considered HOT if the response time of its associated physical storage is greater than the Teradata VS Percentile Hot Lower Bound configuration setting.
Note: This column is not currently valid. It should not be used.
ReadsWarm

count

Total number of reads issued to all cylinders that are considered WARM.

FLOAT

A cylinder is considered to be WARM if the response time of its associated physical storage is between the TVS Percentile Hot Lower Bound and TVS Percentile Cold Upper Bound configuration settings.
Note: This column is not currently valid. It should not be used.
WritesCold

count

Total number of writes issued to all cylinders that are considered COLD.

FLOAT

Note: This column is not currently valid. It should not be used.


WritesHot

count

Total number of writes issued to all cylinders that are considered HOT.

FLOAT

Note: This column is not currently valid. It should not be used.


WritesWarm

count

Total number of writes issued to all cylinders that are considered WARM.

FLOAT

Note: This column is not currently valid. It should not be used.


ReadResponseColdTotal

count

Total read response time of all cylinders that are considered COLD.

FLOAT

Note: This column is not currently valid. It should not be used.


ReadResponseHotTotal

count

Total read response time of all cylinders that are considered HOT.

FLOAT

Note: This column is not currently valid. It should not be used.


ReadResponseWarmTotal

count

Total read response time of all cylinders that are considered WARM.

FLOAT

Note: This column is not currently valid. It should not be used.


WriteResponseColdTotal

count

Total write response time of all cylinders that are considered COLD.

FLOAT

Note: This column is not currently valid. It should not be used.


WriteResponseHotTotal

count

Total write response time of all cylinders that are considered HOT.

FLOAT

Note: This column is not currently valid. It should not be used.


WriteResponseWarmTotal

count

Total write response time of all cylinders that are considered WARM.

FLOAT

Note: This column is not currently valid. It should not be used.


ReadResponseColdMax

max

Maximum read response time of all cylinders that are considered COLD.

FLOAT

Note: This column is not currently valid. It should not be used.


ReadResponseHotMax

max

Maximum read response time of all cylinders that are considered HOT.

FLOAT

Note: This column is not currently valid. It should not be used.


ReadResponseWarmMax

max

Maximum read response time of all cylinders that are considered WARM.

FLOAT

Note: This column is not currently valid. It should not be used.


ReadResponseColdMin

min

Minimum read response time of all cylinders that are considered COLD.

FLOAT

This column has a default value of the largest value of a 64-bit integer
column.
Note: This column is not currently valid. It should not be used.
ReadResponseHotMin

min

Minimum read response time of all cylinders that are considered HOT.

FLOAT

This column has a default value of the largest value of a 64-bit integer
column.
Note: This column is not currently valid. It should not be used.
ReadResponseWarmMin

min

Minimum read response time of all cylinders that are considered WARM.

FLOAT

This column has a default value of the largest value of a 64-bit integer
column.
Note: This column is not currently valid. It should not be used.
WriteResponseColdMax

max

Maximum write response time of all cylinders that are considered COLD.

FLOAT

Note: This column is not currently valid. It should not be used.


WriteResponseHotMax

max

Maximum write response time of all cylinders that are considered HOT.

FLOAT

Note: This column is not currently valid. It should not be used.


WriteResponseWarmMax

max

Maximum write response time of all cylinders that are considered WARM.

FLOAT

Note: This column is not currently valid. It should not be used.


WriteResponseColdMin

min

Minimum write response time of all cylinders that are considered COLD.

FLOAT

This column has a default value of the largest value of a 64-bit integer
column.
Note: This column is not currently valid. It should not be used.
WriteResponseHotMin

min

Minimum write response time of all cylinders that are considered HOT.

FLOAT

This column has a default value of the largest value of a 64-bit integer
column.
Note: This column is not currently valid. It should not be used.
WriteResponseWarmMin

min

Minimum write response time of all cylinders that are considered WARM.

FLOAT

This column has a default value of the largest value of a 64-bit integer
column.
Note: This column is not currently valid. It should not be used.

Node Agent Columns


Note: The NodeAgentMigrationsStarted and NodeAgentMigrationsDone columns are
populated only in Teradata VS. For more information on these columns and on TVS, see
Teradata Virtual Storage.
Column Name

Mode

Description

Data Type

NodeAgentMigrationsDone

count

Number of migration requests completed by the Node Agent.

FLOAT

NodeAgentMigrationsStarted

count

Number of migration requests started by the Node Agent.

FLOAT

Reserved Column
Column Name

Mode

Description

Data Type

Reserved

n/a

Note: This column is not used.

CHAR (3)


Summary Mode
When Summary Mode is active for the ResUsageSvpr table, one row is written to the database
for each type of vproc on each node in the system, summarizing the vprocs of that type on
that node, for each log interval.
You can determine if a row is in Summary Mode by checking the SummaryFlag column for
that row.
IF the SummaryFlag column value is

THEN the data for that row is being logged

'S'

in Summary Mode.

'N'

normally.
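A minimal sketch (illustrative only, not from the manual) of separating the two kinds of rows with the SummaryFlag test described above:

```sql
/* Per-vproc rows logged normally */
SELECT *
FROM DBC.ResUsageSvpr
WHERE SummaryFlag = 'N';

/* Summary Mode rows: one row per vproc type per node per log interval */
SELECT *
FROM DBC.ResUsageSvpr
WHERE SummaryFlag = 'S';
```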

Spare Columns
The ResUsageSvpr table spare fields are named Spare00 through Spare19, and SpareInt.
The SpareInt field has a 32-bit internal resolution while all other spare fields have a 64-bit
internal resolution. All spare fields default to count data types but can be converted to min,
max, or track type data fields if needed when they are used.
The following table describes the Spare fields currently being used.
Column Name

Description

Spare00

Number of data segments that were aged out of VH cache. This field
is populated by the FSG subsystem.
Note: This field will be converted to the VHAgedOut column in
Teradata Database 15.0. You can access resource usage data for this
field using the VHAgedOut column name in the ResSvprView view.
For details, see ResSvprView on page 179.
For information about VH cache, see Glossary.

Spare01

Volume of data segments in KB that were aged out of VH cache. This field is populated by the FSG subsystem.
Note: This field will be converted to the VHAgedOutKB column in
Teradata Database 15.0. You can access resource usage data for this
field using the VHAgedOutKB column name in the ResSvprView
view. For details, see ResSvprView on page 179.
For information about VH cache, see Glossary.


Spare02

Number of logical reads from VH cache. This field is populated by the FSG subsystem.
Note: This field will be converted to the VHLogicalDBRead column
in Teradata Database 15.0. You can access resource usage data for this
field using the VHLogicalDBRead column name in the ResSvprView
view. For details, see ResSvprView on page 179.
For information about VH cache, see Glossary.

Spare03

Volume of logical reads in KB from VH cache. This field is populated by the FSG subsystem.
Note: This field will be converted to the VHLogicalDBReadKB
column in Teradata Database 15.0. You can access resource usage data
for this field using the VHLogicalDBReadKB column name in the
ResSvprView view. For details, see ResSvprView on page 179.
For information about VH cache, see Glossary.

Spare04

Number of very hot reads that were handled by physical disk I/O due
to a VH cache miss (that is, data not found in the VH cache). This
field is populated by the FSG subsystem.
Note: This field will be converted to the VHPhysicalDBRead column
in Teradata Database 15.0. You can access resource usage data for this
field using the VHPhysicalDBRead column name in the ResSvprView
view. For details, see ResSvprView on page 179.
For information about VH cache, see Glossary.

Spare05

Volume of very hot reads in KB that were handled by physical disk I/O due to a VH cache miss. This field is populated by the FSG subsystem.
Note: This field will be converted to the VHPhysicalDBReadKB
column in Teradata Database 15.0. You can access resource usage data
for this field using the VHPhysicalDBReadKB column name in the
ResSvprView view. For details, see ResSvprView on page 179.
For information about VH cache, see Glossary.

Spare06

WM CPU COD value in tenths of a percent. For example, a value of 500 represents a WM CPU COD value of 50.0%.
The value is set to 1000 if the WM CPU COD is disabled.
Note: This field will be converted to the WM_CPU_COD column in
Teradata Database 15.0. You can access resource usage data for this
field using the WM_CPU_COD column name in the ResSvprView
view. For details, see ResSvprView on page 179.
Note: WM CPU COD is not supported on SLES 10. Its value is set to
1000 on SLES 10.


Spare07

Size of VH cache in KB currently in use. This field is populated by the FSG subsystem.
Note: This field will be converted to the VHCacheInuseKB column in
Teradata Database 15.0. You can access resource usage data for this
field using the VHCacheInuseKB column name in the ResSvprView
view. For details, see ResSvprView on page 179.
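Because ResSvprView exposes these spare fields under their Teradata Database 15.0 column names, queries can select the aliases directly. The following is a sketch only, assuming the view is accessed as DBC.ResSvprView and that all the aliases listed above are present in your release:

```sql
SELECT TheDate,
       NodeID,
       VprId,
       VHAgedOut,          /* Spare00 */
       VHAgedOutKB,        /* Spare01 */
       VHLogicalDBRead,    /* Spare02 */
       VHLogicalDBReadKB,  /* Spare03 */
       VHPhysicalDBRead,   /* Spare04 */
       VHPhysicalDBReadKB, /* Spare05 */
       WM_CPU_COD,         /* Spare06 */
       VHCacheInuseKB      /* Spare07 */
FROM DBC.ResSvprView;
```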

Related Topics
For details on the different types of data fields, see About the Mode Column on page 42.


CHAPTER 14

Resource Usage Views

This chapter provides the definitions of the resource usage views.


Note: Views are the recommended method to access the resource usage table data.
To see the view definitions, execute SHOW VIEW viewname, where viewname is the name of
the view whose most recent SQL create text is to be reported. For details on using the SHOW
VIEW statement, see SQL Data Definition Language.
The following views report the table column GroupId. A homogeneous system requires no changes to use these macros because all the nodes will be assigned to group A. For a coexistence system, however, the values need to be set up when the system is installed or reconfigured so that each type of node is assigned to a specific group ID.
Since each of the views includes all the columns for the represented table, the views will be changed whenever the base table columns are changed.
Notice:

Do not change or delete columns in these views. If the columns are modified, the resource
usage macros that use these views may not work properly. You can, however, safely add
columns.


ResCPUUsageByAMPView
ResCPUUsageByAMPView describes CPU usage per AMP.
REPLACE VIEW DBC.ResCPUUsageByAMPView
AS SELECT
TheDate,
TheTime,
VprId,
VprId
as Vproc,
NodeID,
Secs,
NCPUs,
/* GroupId */
/* Changes in GroupId definition affects the displayed grouping in
* the Res*ByGroup macros. The default setting below shows how to
* select different node families, but does not differentiate the
* resulting groups. If a coexistence system had 5550, 5600 and 5650 nodes
* the CASE expression might look like:
* WHEN NodeType IN ('5650H') THEN '5650Nodes'
* WHEN NodeType IN ('5600H') THEN '5600Nodes'
* ELSE '5550Nodes'
*/
CASE
WHEN NodeType IN ('5650H') THEN 'A'
ELSE 'A'
END AS GroupId,

CPUUExecPart11 AS AMPWorkTaskExec,
CPUUServPart11 AS AMPWorkTaskServ,
/* AMPMiscUserExec */
(
CPUUExecPart01+CPUUExecPart02+CPUUExecPart03 +
CPUUExecPart04+CPUUExecPart05+CPUUExecPart06+CPUUExecPart07 +
CPUUExecPart08+CPUUExecPart09+CPUUExecPart10 +
CPUUExecPart12+CPUUExecPart13+CPUUExecPart14+CPUUExecPart15 +
CPUUExecPart16+CPUUExecPart17+CPUUExecPart18+CPUUExecPart19 +
CPUUExecPart20+CPUUExecPart21+CPUUExecPart22+CPUUExecPart23 +
CPUUExecPart24+CPUUExecPart25+CPUUExecPart26+CPUUExecPart27 +
CPUUExecPart28+CPUUExecPart29+CPUUExecPart30+CPUUExecPart31 +
CPUUExecPart32+CPUUExecPart33+CPUUExecPart34+CPUUExecPart35 +
CPUUExecPart36+CPUUExecPart37+CPUUExecPart38+CPUUExecPart39 +
CPUUExecPart40+CPUUExecPart41+CPUUExecPart42+CPUUExecPart43 +
CPUUExecPart44+CPUUExecPart45+CPUUExecPart46+CPUUExecPart47) AS AMPMiscUserExec,
/* AMPMiscUserServ */
(
CPUUServPart01+CPUUServPart02+CPUUServPart03 +
CPUUServPart04+CPUUServPart05+CPUUServPart06+CPUUServPart07 +
CPUUServPart08+CPUUServPart09+CPUUServPart10 +
CPUUServPart12+CPUUServPart13+CPUUServPart14+CPUUServPart15 +
CPUUServPart16+CPUUServPart17+CPUUServPart18+CPUUServPart19 +
CPUUServPart20+CPUUServPart21+CPUUServPart22+CPUUServPart23 +
CPUUServPart24+CPUUServPart25+CPUUServPart26+CPUUServPart27 +
CPUUServPart28+CPUUServPart29+CPUUServPart30+CPUUServPart31 +
CPUUServPart32+CPUUServPart33+CPUUServPart34+CPUUServPart35 +
CPUUServPart36+CPUUServPart37+CPUUServPart38+CPUUServPart39 +
CPUUServPart40+CPUUServPart41+CPUUServPart42+CPUUServPart43 +
CPUUServPart44+CPUUServPart45+CPUUServPart46+CPUUServPart47) AS AMPMiscUserServ,
/* AMPTotalUserExec */
(CPUUExecPart00+CPUUExecPart01+CPUUExecPart02+CPUUExecPart03 +
CPUUExecPart04+CPUUExecPart05+CPUUExecPart06+CPUUExecPart07 +
CPUUExecPart08+CPUUExecPart09+CPUUExecPart10+CPUUExecPart11 +
CPUUExecPart12+CPUUExecPart13+CPUUExecPart14+CPUUExecPart15 +
CPUUExecPart16+CPUUExecPart17+CPUUExecPart18+CPUUExecPart19 +
CPUUExecPart20+CPUUExecPart21+CPUUExecPart22+CPUUExecPart23 +
CPUUExecPart24+CPUUExecPart25+CPUUExecPart26+CPUUExecPart27 +
CPUUExecPart28+CPUUExecPart29+CPUUExecPart30+CPUUExecPart31 +
CPUUExecPart32+CPUUExecPart33+CPUUExecPart34+CPUUExecPart35 +
CPUUExecPart36+CPUUExecPart37+CPUUExecPart38+CPUUExecPart39 +
CPUUExecPart40+CPUUExecPart41+CPUUExecPart42+CPUUExecPart43 +
CPUUExecPart44+CPUUExecPart45+CPUUExecPart46+CPUUExecPart47) AS AMPTotalUserExec,
/* AMPTotalUserServ */
(CPUUServPart00+CPUUServPart01+CPUUServPart02+CPUUServPart03 +
CPUUServPart04+CPUUServPart05+CPUUServPart06+CPUUServPart07 +
CPUUServPart08+CPUUServPart09+CPUUServPart10+CPUUServPart11 +
CPUUServPart12+CPUUServPart13+CPUUServPart14+CPUUServPart15 +
CPUUServPart16+CPUUServPart17+CPUUServPart18+CPUUServPart19 +
CPUUServPart20+CPUUServPart21+CPUUServPart22+CPUUServPart23 +
CPUUServPart24+CPUUServPart25+CPUUServPart26+CPUUServPart27 +

CPUUServPart28+CPUUServPart29+CPUUServPart30+CPUUServPart31 +
CPUUServPart32+CPUUServPart33+CPUUServPart34+CPUUServPart35 +
CPUUServPart36+CPUUServPart37+CPUUServPart38+CPUUServPart39 +
CPUUServPart40+CPUUServPart41+CPUUServPart42+CPUUServPart43 +
CPUUServPart44+CPUUServPart45+CPUUServPart46+CPUUServPart47)
AS AMPTotalUserServ
FROM DBC.ResUsageSvpr WHERE VprType like 'AMP%' WITH CHECK OPTION;


ResCPUUsageByPEView
ResCPUUsageByPEView describes CPU usage by each PE.
REPLACE VIEW DBC.ResCPUUsageByPEView
AS SELECT
TheDate,
TheTime,
VprId,
VprId
AS Vproc,
NodeID (FORMAT '999-99') AS NodeId,
Secs, NCPUs,
/* Changes in GroupId definition affects the displayed grouping in
* the Res*ByGroup macros. The default setting below shows how to
* select different node families, but does not differentiate the
* resulting groups. If a coexistence system had 5550, 5600 and 5650 nodes
* the CASE expression might look like:
* WHEN NodeType IN ('5650H') THEN '5650Nodes'
* WHEN NodeType IN ('5600H') THEN '5600Nodes'
* ELSE '5550Nodes'
*/
CASE
WHEN NodeType IN ('5650H') THEN 'A'
ELSE 'A'
END AS GroupId,
CPUUExecPart13 AS PEDispExec,
CPUUServPart13 AS PEDispServ,
CPUUExecPart12 AS PESessExec,
CPUUServPart12 AS PESessServ,

/* PEMiscUserExec */
(
CPUUExecPart01+CPUUExecPart02+CPUUExecPart03 +
CPUUExecPart04+CPUUExecPart05+CPUUExecPart06+CPUUExecPart07 +
CPUUExecPart08+CPUUExecPart09+CPUUExecPart10+CPUUExecPart11 +
CPUUExecPart14+CPUUExecPart15 +
CPUUExecPart16+CPUUExecPart17+CPUUExecPart18+CPUUExecPart19 +
CPUUExecPart20+CPUUExecPart21+CPUUExecPart22+CPUUExecPart23 +
CPUUExecPart24+CPUUExecPart25+CPUUExecPart26+CPUUExecPart27 +
CPUUExecPart28+CPUUExecPart29+CPUUExecPart30+CPUUExecPart31 +
CPUUExecPart32+CPUUExecPart33+CPUUExecPart34+CPUUExecPart35 +
CPUUExecPart36+CPUUExecPart37+CPUUExecPart38+CPUUExecPart39 +
CPUUExecPart40+CPUUExecPart41+CPUUExecPart42+CPUUExecPart43 +
CPUUExecPart44+CPUUExecPart45+CPUUExecPart46+CPUUExecPart47) AS PEMiscUserExec,
/* PEMiscUserServ */
(
CPUUServPart01+CPUUServPart02+CPUUServPart03 +
CPUUServPart04+CPUUServPart05+CPUUServPart06+CPUUServPart07 +
CPUUServPart08+CPUUServPart09+CPUUServPart10+CPUUServPart11 +
CPUUServPart14+CPUUServPart15 +
CPUUServPart16+CPUUServPart17+CPUUServPart18+CPUUServPart19 +
CPUUServPart20+CPUUServPart21+CPUUServPart22+CPUUServPart23 +
CPUUServPart24+CPUUServPart25+CPUUServPart26+CPUUServPart27 +
CPUUServPart28+CPUUServPart29+CPUUServPart30+CPUUServPart31 +
CPUUServPart32+CPUUServPart33+CPUUServPart34+CPUUServPart35 +
CPUUServPart36+CPUUServPart37+CPUUServPart38+CPUUServPart39 +
CPUUServPart40+CPUUServPart41+CPUUServPart42+CPUUServPart43 +
CPUUServPart44+CPUUServPart45+CPUUServPart46+CPUUServPart47) AS PEMiscUserServ,
/* PETotalUserExec */
(CPUUExecPart00+CPUUExecPart01+CPUUExecPart02+CPUUExecPart03 +
CPUUExecPart04+CPUUExecPart05+CPUUExecPart06+CPUUExecPart07 +
CPUUExecPart08+CPUUExecPart09+CPUUExecPart10+CPUUExecPart11 +
CPUUExecPart12+CPUUExecPart13+CPUUExecPart14+CPUUExecPart15 +
CPUUExecPart16+CPUUExecPart17+CPUUExecPart18+CPUUExecPart19 +
CPUUExecPart20+CPUUExecPart21+CPUUExecPart22+CPUUExecPart23 +
CPUUExecPart24+CPUUExecPart25+CPUUExecPart26+CPUUExecPart27 +
CPUUExecPart28+CPUUExecPart29+CPUUExecPart30+CPUUExecPart31 +
CPUUExecPart32+CPUUExecPart33+CPUUExecPart34+CPUUExecPart35 +
CPUUExecPart36+CPUUExecPart37+CPUUExecPart38+CPUUExecPart39 +
CPUUExecPart40+CPUUExecPart41+CPUUExecPart42+CPUUExecPart43 +
CPUUExecPart44+CPUUExecPart45+CPUUExecPart46+CPUUExecPart47) AS PETotalUserExec,
/* PETotalUserServ */
(CPUUServPart00+CPUUServPart01+CPUUServPart02+CPUUServPart03 +
CPUUServPart04+CPUUServPart05+CPUUServPart06+CPUUServPart07 +
CPUUServPart08+CPUUServPart09+CPUUServPart10+CPUUServPart11 +
CPUUServPart12+CPUUServPart13+CPUUServPart14+CPUUServPart15 +

CPUUServPart16+CPUUServPart17+CPUUServPart18+CPUUServPart19 +
CPUUServPart20+CPUUServPart21+CPUUServPart22+CPUUServPart23 +
CPUUServPart24+CPUUServPart25+CPUUServPart26+CPUUServPart27 +
CPUUServPart28+CPUUServPart29+CPUUServPart30+CPUUServPart31 +
CPUUServPart32+CPUUServPart33+CPUUServPart34+CPUUServPart35 +
CPUUServPart36+CPUUServPart37+CPUUServPart38+CPUUServPart39 +
CPUUServPart40+CPUUServPart41+CPUUServPart42+CPUUServPart43 +
CPUUServPart44+CPUUServPart45+CPUUServPart46+CPUUServPart47)
AS PETotalUserServ
FROM DBC.ResUsageSvpr WHERE VprType like 'PE%' WITH CHECK OPTION;


ResSawtView
ResSawtView is based on the ResUsageSawt table.
REPLACE VIEW DBC.ResSawtView
AS SELECT
/* housekeeping fields */
TheDate,
NodeID (FORMAT '999-99') AS NodeID,
TheTime,
GmtTime,
NodeType,
TheTimestamp,
CentiSecs,
Secs,
NominalSecs,
CodFactor,
SummaryFlag,
Reserved,
VprId,
/* Aliased fields */
/* PM/WM CODs */
CodFactor AS PM_CPU_COD,  /* PM CPU Capacity On Demand factor */
/* Spare Field usage */
/* PM/WM CODs */
Spare00 AS WM_CPU_COD,    /* WM CPU Capacity On Demand factor */

/* transformed fields */
/* PM/WM CODs */
( PM_CPU_COD * WM_CPU_COD / 1000 ) (FORMAT '----9') AS CPU_COD, /* effective CPU COD */
/* Changes in GroupId definition affects the displayed grouping in
* the Res*ByGroup macros. The default setting below shows how to
* select different node families, but does not differentiate the
* resulting groups. If a coexistence system had 5550, 5600 and 5650 nodes
* the CASE expression might look like:
* WHEN NodeType IN ('5650H') THEN '5650Nodes'
* WHEN NodeType IN ('5600H') THEN '5600Nodes'
* ELSE '5550Nodes'
*/
CASE
WHEN NodeType IN ('5650H') THEN 'A'
ELSE 'A'
END AS GroupId,
( WorkTypeInuse00 + WorkTypeInuse01 + WorkTypeInuse02 + WorkTypeInuse03 +
  WorkTypeInuse04 + WorkTypeInuse05 + WorkTypeInuse06 + WorkTypeInuse07 +
  WorkTypeInuse08 + WorkTypeInuse09 + WorkTypeInuse10 + WorkTypeInuse11 +
  WorkTypeInuse12 + WorkTypeInuse13 + WorkTypeInuse14 + WorkTypeInuse15 )
  AS WorkTypeInuse,
/* Remaining table fields */

FROM DBC.ResUsageSawt WITH CHECK OPTION;

Note: The ResUsageSawt table fields have been removed from this sample output.

ResScpuView
ResScpuView is based on the ResUsageScpu table.
REPLACE VIEW DBC.ResScpuView
AS SELECT
/* housekeeping fields */
thedate,
NodeID      (FORMAT '999-99')   AS NodeID,
thetime,
GmtTime,
NodeType    (FORMAT 'X(8)')     AS NodeType,
TheTimestamp,
CentiSecs   (FORMAT '-------9') AS CentiSecs,
Secs        (FORMAT '----9')    AS Secs,
NominalSecs (FORMAT 'ZZZ9')     AS NominalSecs,
CodFactor   (FORMAT '-----9')   AS CodFactor,
SummaryFlag (FORMAT 'X(1)')     AS SummaryFlag,
Reserved    (FORMAT 'X(1)')     AS Reserved,
cpuid,
Reserved2   (FORMAT 'X(2)')     AS Reserved2,

/* Aliased Fields */
/* PM/WM CODs */
CodFactor AS PM_CPU_COD, /* PM CPU Capacity On Demand factor */
/* Spare Field usage */
/* PM/WM CODs */
Spare00 AS WM_CPU_COD,   /* WM CPU Capacity On Demand factor */

/* transformed fields */
/* PM/WM CODs */
( PM_CPU_COD * WM_CPU_COD / 1000 ) (FORMAT '----9') AS CPU_COD, /* effective CPU COD */
/* Changes in GroupId definition affect the displayed grouping in
 * the Res*ByGroup macros. The default setting below shows how to
 * select different node families, but does not differentiate the
 * resulting groups. If a coexistence system had 5550, 5600 and 5650 nodes,
 * the CASE expression might look like:
 *   WHEN NodeType IN ('5650H') THEN '5650Nodes'
 *   WHEN NodeType IN ('5600H') THEN '5600Nodes'
 *   ELSE '5550Nodes'
 */
CASE
  WHEN NodeType IN ('5650H') THEN 'A'
  ELSE 'A'
END AS GroupId,
/* Remaining table fields */
FROM DBC.ResUsageScpu WITH CHECK OPTION;

Note: The ResUsageScpu table fields have been removed from this sample output.

ResShstView
ResShstView is based on the ResUsageShst table.
REPLACE VIEW DBC.ResShstView
AS SELECT
/* housekeeping fields */
TheDate,
NodeID (FORMAT '999-99') AS NodeID,
TheTime,
GmtTime,
NodeType,
TheTimestamp,
CentiSecs,
Secs,
NominalSecs,
CodFactor,
SummaryFlag,
Reserved,
IPaddr,
HstType,
vprid,
hstid (FORMAT '----------9'),
/* Aliased fields */
/* PM/WM CODs */
CodFactor AS PM_CPU_COD, /* PM CPU Capacity On Demand factor */
/* Spare Field usage */
/* PM/WM CODs */
Spare00 AS WM_CPU_COD,   /* WM CPU Capacity On Demand factor */

/* transformed fields */
/* PM/WM CODs */
( PM_CPU_COD * WM_CPU_COD / 1000 ) (FORMAT '----9') AS CPU_COD, /* effective CPU COD */
/* ECS connections have IP address values.
 * IPv4 has 4 byte addresses and IPv6 allows 16 byte addresses
 * First 4 bytes are stored in IPaddr integer field
 * Second 4 bytes are stored in VprId integer field
 * Convert to standard 256.256.256.256 byte representation
 * by displaying each byte as a separate integer.
 * BITAND not available until DIPSYSFNC runs, which is after DIPRUM.
 * Request moving DIPRUM to after DIPSYSFNC in TD14.10 release.
 */
SHIFTRIGHT( BITAND( IPaddr, '000000FF'XB ),  0) (Format 'ZZ9') AS IP0,
SHIFTRIGHT( BITAND( IPaddr, '0000FF00'XB ),  8) (Format 'ZZ9') AS IP1,
SHIFTRIGHT( BITAND( IPaddr, '00FF0000'XB ), 16) (Format 'ZZ9') AS IP2,
SHIFTRIGHT( BITAND( IPaddr, 'FF000000'XB ), 24) (Format 'ZZ9') AS IP3,
SHIFTRIGHT( BITAND( VprId,  '000000FF'XB ),  0) (Format 'ZZ9') AS IP4,
SHIFTRIGHT( BITAND( VprId,  '0000FF00'XB ),  8) (Format 'ZZ9') AS IP5,
SHIFTRIGHT( BITAND( VprId,  '00FF0000'XB ), 16) (Format 'ZZ9') AS IP6,
SHIFTRIGHT( BITAND( VprId,  'FF000000'XB ), 24) (Format 'ZZ9') AS IP7,

/* Changes in GroupId definition affect the displayed grouping in
 * the Res*ByGroup macros. The default setting below shows how to
 * select different node families, but does not differentiate the
 * resulting groups. If a coexistence system had 5550, 5600 and 5650 nodes,
 * the CASE expression might look like:
 *   WHEN NodeType IN ('5650H') THEN '5650Nodes'
 *   WHEN NodeType IN ('5600H') THEN '5600Nodes'
 *   ELSE '5550Nodes'
 */
CASE
  WHEN NodeType IN ('5650H') THEN 'A'
  ELSE 'A'
END AS GroupId,
/* Remaining table fields */
FROM DBC.ResUsageShst WITH CHECK OPTION;

Note: The ResUsageShst table fields have been removed from this sample output.
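The SHIFTRIGHT(BITAND(...)) expressions above unpack two 32-bit integer fields into the eight displayed byte columns, low byte first. A minimal Python sketch of the same bit arithmetic (the function name is ours, not part of the view; the masks mirror the '000000FF'XB literals):

```python
def ip_bytes(ipaddr, vprid):
    """Unpack the IPaddr and VprId 32-bit fields into IP0..IP7,
    low byte first, as SHIFTRIGHT(BITAND(field, mask), n) does."""
    out = []
    for word in (ipaddr, vprid):
        for shift in (0, 8, 16, 24):
            out.append((word >> shift) & 0xFF)
    return out

# An IPaddr word of 0x0A000001 yields IP0=1, IP1=0, IP2=0, IP3=10
print(ip_bytes(0x0A000001, 0))
```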

ResSldvView
ResSldvView is based on the ResUsageSldv table.
REPLACE VIEW DBC.ResSldvView
AS SELECT
/* housekeeping fields */
thedate,
NodeID,
thetime,
GmtTime,
NodeType    (FORMAT 'X(8)')     AS NodeType,
TheTimestamp,
CentiSecs   (FORMAT '-------9') AS CentiSecs,
Secs        (FORMAT '----9')    AS Secs,
NominalSecs (FORMAT 'ZZZ9')     AS NominalSecs,
CodFactor   (FORMAT '-----9')   AS CodFactor,
SummaryFlag (FORMAT 'X(1)')     AS SummaryFlag,
Reserved    (FORMAT 'X(1)')     AS Reserved,
ctlid,
ldvid,
LdvType     (FORMAT 'X(4)')     AS LdvType,
/* Aliased Fields */
/* PM/WM CODs */
CodFactor AS PM_CPU_COD, /* PM CPU Capacity On Demand factor */

CASE
WHEN Reserved IN ('S') THEN 'SSD'
WHEN Reserved IN ('H') THEN 'HDD'
ELSE '---'
END AS LdvKind,
/* PM/WM CODs */
LdvReadRespMax  AS WM_CPU_COD, /* WM CPU Capacity On Demand factor, was invalid field */
LdvWriteRespMax AS WM_IO_COD,  /* WM IO Capacity On Demand factor, was invalid field */
/* Spare Field usage */
Spare00 AS Major,              /* device major number */
Spare01 AS Minor,              /* device minor number */
/* I/O Hard Limits and I/O Capacity On Demand (COD) */
Spare02 AS FullPotentialIota,  /* Device Full Potential iotas */
Spare03 AS CodPotentialIota,   /* Device Potential iotas (after COD) */
Spare04 AS UsedIota,           /* Device used iotas */
/* PM/WM CODs */
SpareInt AS PM_IO_COD,         /* PM IO Capacity On Demand factor */
/* transformed fields */
/* PM/WM CODs */
( PM_CPU_COD * WM_CPU_COD / 1000 ) (FORMAT '----9') AS CPU_COD, /* effective CPU COD */
( PM_IO_COD * WM_IO_COD / 100 ) (FORMAT '----9') AS IO_COD, /* effective IO COD */
/* Changes in GroupId definition affect the displayed grouping in
 * the Res*ByGroup macros. The default setting below shows how to
 * select different node families, but does not differentiate the
 * resulting groups. If a coexistence system had 5550, 5600 and 5650 nodes,
 * the CASE expression might look like:
 *   WHEN NodeType IN ('5650H') THEN '5650Nodes'
 *   WHEN NodeType IN ('5600H') THEN '5600Nodes'
 *   ELSE '5550Nodes'
 */
CASE
  WHEN NodeType IN ('5650H') THEN 'A'
  ELSE 'A'
END AS GroupId,
LdvOutReqSum / NULLIFZERO(LdvOutReqDiv) AS LdvOutReqAvg,
/* Remaining table fields */

FROM DBC.ResUsageSldv WITH CHECK OPTION;

Note: The ResUsageSldv table fields have been removed from this sample output.
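The effective IO COD expression in this view divides by 100 where the CPU form divides by 1000. A minimal Python sketch of that arithmetic, assuming the scaled-integer convention used for the COD factors (the helper name is ours, for illustration only):

```python
def effective_io_cod(pm_io_cod, wm_io_cod):
    """Mirror of ( PM_IO_COD * WM_IO_COD / 100 ) from the view.

    Note the divisor is 100 here, not the 1000 used for CPU_COD.
    """
    return pm_io_cod * wm_io_cod // 100

# An unrestricted WM factor leaves the PM factor unchanged
print(effective_io_cod(50, 100))
```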

ResSpdskView
ResSpdskView is based on the ResUsageSpdsk table.
REPLACE VIEW DBC.ResSpdskView
AS SELECT
/* housekeeping fields */
thedate,
NodeID      (FORMAT '999-99')   AS NodeID,
thetime,
GmtTime,
NodeType    (FORMAT 'X(8)')     AS NodeType,
TheTimestamp,
CentiSecs   (FORMAT '-------9') AS CentiSecs,
Secs        (FORMAT '----9')    AS Secs,
NominalSecs (FORMAT 'ZZZ9')     AS NominalSecs,
CodFactor   (FORMAT '-----9')   AS CodFactor,
SummaryFlag (FORMAT 'X(1)')     AS SummaryFlag,
Reserved    (FORMAT 'X(1)')     AS Reserved,
PdiskGlobalId,
PdiskDeviceId,
PdiskType   (FORMAT 'X(4)')     AS PdiskType,
/* Aliased Fields */
/* PM/WM CODs */
CodFactor AS PM_CPU_COD, /* PM CPU Capacity On Demand factor */

/* Spare Field usage */
/* PM/WM CODs */
Spare00  AS WM_CPU_COD, /* WM CPU Capacity On Demand factor */
Spare01  AS WM_IO_COD,  /* WM IO Capacity On Demand factor */
SpareInt AS PM_IO_COD,  /* PM IO Capacity On Demand factor */
/* transformed fields */
/* PM/WM CODs */
( PM_CPU_COD * WM_CPU_COD / 1000 ) (FORMAT '----9') AS CPU_COD, /* effective CPU COD */
( PM_IO_COD * WM_IO_COD / 100 ) (FORMAT '----9') AS IO_COD, /* effective IO COD */
/* Changes in GroupId definition affect the displayed grouping in
 * the Res*ByGroup macros. The default setting below shows how to
 * select different node families, but does not differentiate the
 * resulting groups. If a coexistence system had 5550, 5600 and 5650 nodes,
 * the CASE expression might look like:
 *   WHEN NodeType IN ('5650H') THEN '5650Nodes'
 *   WHEN NodeType IN ('5600H') THEN '5600Nodes'
 *   ELSE '5550Nodes'
 */
CASE
  WHEN NodeType IN ('5650H') THEN 'A'
  ELSE 'A'
END AS GroupId,
/* Remaining table fields */
FROM DBC.ResUsageSpdsk WITH CHECK OPTION;

Note: The ResUsageSpdsk table fields have been removed from this sample output.

ResSpmaView
ResSpmaView provides an overview of system operation.
Note: The GroupId column in this view provides a grouping for PE-only nodes.
REPLACE VIEW DBC.ResSpmaView
AS SELECT
/* housekeeping fields */
TheDate,
NodeID (FORMAT '999-99') AS NodeID,
TheTime,
GmtTime,
NodeType,
TheTimestamp,
CentiSecs,
Secs,
NominalSecs,
CodFactor,
NCPUs,
Reserved,
Vproc1,
VprocType1,
Vproc2,
VprocType2,
Vproc3,
VprocType3,
Vproc4,
VprocType4,
Vproc5,
VprocType5,
Vproc6,
VprocType6,
Vproc7,
VprocType7,
MemSize,
NodeNormFactor,
/* Aliased fields */
/* PM/WM CODs */
CodFactor AS PM_CPU_COD,      /* PM CPU Capacity On Demand factor */
/* Spare Field usage */
Spare00 AS FilePreReadKB,     /* long until TD15.0 */
Spare01 AS FileWriteKB,       /* long until TD15.0 */
Spare02 AS FileContigWIos,    /* long until TD15.0 */
Spare03 AS FileContigWBlocks, /* long until TD15.0 */
Spare04 AS FileContigWKB,     /* long until TD15.0 */
/* SLES11 Hard Limits Capacity On Demand (COD) */
Spare05 AS CpuThrottleCount,  /* Control Group throttle count */
Spare06 AS CpuThrottleTime,   /* Control Group: centi-seconds */
/* I/O Hard Limits and I/O Capacity On Demand (COD) */
Spare07 AS FullPotentialIota, /* Total Full Potential iotas */
Spare08 AS CodPotentialIota,  /* Total Potential iotas (after COD) */
Spare09 AS UsedIota,          /* Total used iotas */
/* TIM stats */
Spare10 AS VHCacheKB,         /* VH Cache size */
/* PM/WM CODs */
Spare11 AS WM_CPU_COD,        /* WM CPU Capacity On Demand factor */
Spare12 AS WM_IO_COD,         /* WM IO Capacity On Demand factor */
SpareInt AS PM_IO_COD,        /* PM IO Capacity On Demand factor */

/* transformed fields */
/*PM/WM CODs */
( PM_CPU_COD * WM_CPU_COD / 1000 ) (FORMAT '----9') AS CPU_COD, /* effective CPU COD */
( PM_IO_COD * WM_IO_COD / 100 ) (FORMAT '----9') AS IO_COD, /* effective IO COD */
/* Changes in GroupId definition affect the displayed grouping in
 * the ResNodeByGroup macro. The default setting below shows how to
 * select different node families, but does not differentiate the
 * resulting groups. If a coexistence system had 5550, 5600 and 5650 nodes
 * as well as some PE-only nodes, the CASE expression might look like:
 *   WHEN (VPROCTYPE1='AMP' AND NodeType IN ('5650H')) THEN '5650Nodes'
 *   WHEN (VPROCTYPE1='AMP' AND NodeType IN ('5600H')) THEN '5600Nodes'
 *   WHEN (VPROCTYPE1='AMP' AND NodeType NOT IN ('5600H', '5650')) THEN '5550Nodes'
 *   ELSE 'PEonly'
 */
CASE
  WHEN (VPROCTYPE1='AMP' AND NodeType IN ('5600H')) THEN 'AMPNodes'
  WHEN (VPROCTYPE1='AMP' AND NodeType NOT IN ('5600H')) THEN 'AMPNodes'
  ELSE 'PEonly'
END AS GroupId,
( (CPUUServ + CPUUExec) / NULLIFZERO(NCPUs) )                          AS CPUBusy,
( CPUUServ / NULLIFZERO(NCPUs) )                                       AS CPUOpSys,
( CPUUExec / NULLIFZERO(NCPUs) )                                       AS CPUUser,
( CPUIoWait / NULLIFZERO(NCPUs) )                                      AS CPUWaitIO,
( (CPUUServ + CPUUExec) * (NodeNormFactor / 100) / NULLIFZERO(NCPUs) ) AS CPUBusyNorm,
( CPUUServ * (NodeNormFactor / 100) / NULLIFZERO(NCPUs) )              AS CPUOpSysNorm,
( CPUUExec * (NodeNormFactor / 100) / NULLIFZERO(NCPUs) )              AS CPUUserNorm,
( CPUIoWait * (NodeNormFactor / 100) / NULLIFZERO(NCPUs) )             AS CPUWaitIONorm,

( FileAcqReads + FilePreReads + FileWrites ) AS DiskSegmentIO,


( FileAcqReadKB + FilePreReadKB +
/* paging or swapping count times pagesize (= 4K) */
(MemTextPageReads + MemCtxtPageReads ) * 4 ) AS LogicalDeviceReadKB,
( FileWriteKB +
/* paging or swapping count times pagesize (= 4K) */
MemCtxtPageWrites * 4 ) AS LogicalDeviceWriteKB,
( LogicalDeviceReadKB + LogicalDeviceWriteKB ) AS LogicalDeviceIOKB,
( MemTextPageReads + MemCtxtPageReads + MemCtxtPageWrites )*4 AS PageOrSwapIOKB,
( NetTxCircPtP + NetTxCircBrd ) (FORMAT '------9') AS NetAttempts,
( NetMsgBrdReads + NetMsgBrdWrites )   AS NetMultiIO,
( NetMsgPtPReads + NetMsgPtPWrites )   AS NetPtoPIO,
( NetMsgBrdReadKB + NetMsgPtPReadKB )  AS NetReadKB,
( NetMsgBrdReads + NetMsgPtPReads )    AS NetReads,
( NetMsgBrdWriteKB + NetMsgPtPWriteKB ) AS NetWriteKB,
( NetMsgBrdWrites + NetMsgPtPWrites )  AS NetWrites,
( ProcBlocked + ProcReady )            AS ProcActive,
( ProcBlksDBLock + ProcBlksMemAlloc + ProcBlksMisc +
  ProcBlksMonitor + ProcBlksMonResume + ProcBlksNetThrottle +
  ProcBlksSegLock + ProcBlksFsgLock +
  ProcBlksFsgRead + ProcBlksFsgWrite ) (FORMAT '------9') AS ProcBlocks,
( ProcWaitDBLock + ProcWaitMemAlloc + ProcWaitMisc +
  ProcWaitMonitor + ProcWaitMonResume +
  ProcWaitNetThrottle + ProcWaitPageRead +
  ProcWaitSegLock + ProcWaitFsgLock +
  ProcWaitFsgRead + ProcWaitFsgWrite ) (FORMAT '------9') AS ProcWaits,
( CmdDDLStmts + CmdDeleteStmts + CmdInsertStmts + CmdSelectStmts +
  CmdUpdateStmts + CmdUtilityStmts + CmdOtherStmts ) AS UserStmtsArriving,
/* iota limit per device */
( UsedIota*100 / NULLIFZERO(FullPotentialIota) ) AS Used_FullPotential,
/* Spare Field Usage */
/* SPMA table fields */
FROM DBC.ResUsageSpma WITH CHECK OPTION;

Note: The ResUsageSpma table fields have been removed from this sample output.
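The CPUBusy and CPUBusyNorm expressions above average the node's CPU counters over its CPUs and optionally rescale by NodeNormFactor for coexistence nodes of different speeds. A minimal Python sketch of that arithmetic (the function name is ours; the zero guard mirrors NULLIFZERO(NCPUs)):

```python
def cpu_busy_pct(cpu_userv, cpu_uexec, ncpus, node_norm_factor=None):
    """CPUBusy / CPUBusyNorm as computed in ResSpmaView.

    CPUUServ and CPUUExec are summed over all CPUs on the node, so
    dividing by NCPUs gives the per-CPU average; NodeNormFactor is a
    percentage applied to normalize differing node CPU speeds.
    """
    if ncpus == 0:   # NULLIFZERO(NCPUs): yield NULL rather than divide by zero
        return None
    busy = (cpu_userv + cpu_uexec) / ncpus
    if node_norm_factor is not None:
        busy *= node_norm_factor / 100
    return busy

# 300 + 500 busy units over 8 CPUs -> 100.0; a 50% norm factor halves it
print(cpu_busy_pct(300, 500, 8), cpu_busy_pct(300, 500, 8, 50))
```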

ResSpsView
ResSpsView is based on the ResUsageSps table.
REPLACE VIEW DBC.ResSpsView
AS SELECT
/* housekeeping fields */
TheDate,
NodeID (FORMAT '999-99') AS NodeID,
TheTime,
GmtTime,
NodeType,
TheTimestamp,
CentiSecs,
Secs,
NominalSecs,
CodFactor, /* Capacity On Demand: 500 = 50 percent */
NCPUs,
AMPcount,
RowIndex1, /* SLES10 = PGid, SLES11 = pWDid */
VprType,
PPid,      /* always zero for SLES11 */
Reserved,
/* Aliased table fields */
/* PM/WM CODs */
CodFactor      AS PM_CPU_COD, /* PM CPU Capacity On Demand factor */
NumTasks       AS NumProcs,   /* renamed NumProcs to NumTasks in 14.0 */
RowIndex1      AS PGid,       /* SLES10 */
RowIndex1      AS pWDid,      /* SLES11 */
ActiveSessions AS NumSets,    /* renamed NumSets in 14.0 */

/* Spare Field usage */
/* PM/WM CODs */
Spare00 AS WM_CPU_COD,         /* WM CPU Capacity On Demand factor */
Spare01 AS WM_IO_COD,          /* WM IO Capacity On Demand factor */
/* SLES11 Hard Limits Capacity On Demand (COD) */
Spare03 AS CpuVpThrottleCount, /* VP-level CPU hard limits */
Spare04 AS CpuVpThrottleTime,  /* VP-level milli-seconds */
Spare05 AS CpuThrottleCount,   /* WD level throttle count */
Spare06 AS CpuThrottleTime,    /* WD level milli-seconds */
/* I/O Hard Limits and I/O Capacity On Demand (COD) */
Spare07 AS FullPotentialIota,  /* node level total Full Potential iotas */
Spare08 AS CodPotentialIota,   /* WD level device Potential iotas (after COD) */
Spare09 AS UsedIota,           /* WD level device used iotas */
Spare10 AS IoThrottleCount,    /* WD level IoThrottleCount */
/* TIM stats */
Spare11 AS VHLogicalDBRead,    /* WD level logical db reads from VH Cache */
Spare12 AS VHLogicalDBReadKB,  /* WD level KB of logical db reads from VH Cache */
Spare13 AS VHPhysicalDBRead,   /* WD level physical db reads for VH data */
Spare14 AS VHPhysicalDBReadKB, /* WD level KB of physical db reads for VH data */
/* PM/WM CODs */
SpareInt AS PM_IO_COD,         /* PM IO Capacity On Demand factor */
/* transformed fields */
/* PM/WM CODs */
( PM_CPU_COD * WM_CPU_COD / 1000 ) (FORMAT '----9') AS CPU_COD, /* effective CPU COD */
( PM_IO_COD * WM_IO_COD / 100 ) (FORMAT '----9') AS IO_COD, /* effective IO COD */
/* Changes in GroupId definition affect the displayed grouping in
 * the Res*ByGroup macros. The default setting below shows how to
 * select different node families, but does not differentiate the
 * resulting groups. If a coexistence system had 5550, 5600 and 5650 nodes,
 * the CASE expression might look like:
 *   WHEN NodeType IN ('5650H') THEN '5650Nodes'
 *   WHEN NodeType IN ('5600H') THEN '5600Nodes'
 *   ELSE '5550Nodes'
 */
CASE
  WHEN NodeType IN ('5650H') THEN 'A'
  ELSE 'A'
END AS GroupId,
/* report average wait time per request for messages */
/* already delivered to AWTs */
QWaitTime / NULLIFZERO(NumRequests) AS QWaitTimeRequestAvg,
/* Avg delay for msgs delivered in period per Request */
WorkMsgSendDelay / NULLIFZERO(WorkMsgSendDelayCnt) AS WorkMsgSendDelayRequestAvg,
/* avg delay per request on receive side for messages not */
/* yet delivered to AWTs */
WorkMsgReceiveDelay / NULLIFZERO(WorkMsgReceiveDelayCnt) AS WorkMsgReceiveDelayRequestAvg,
/* report average service time per request */
ServiceTime / NULLIFZERO(AwtReleases) AS ServiceTimeRequestAvg,
(CPUUServAWT + CPUUServDisp + CPUUServMisc) AS CPUUServ,
(CPUUExecAWT + CPUUExecDisp + CPUUExecMisc) AS CPUUExec,
(CPUUServ + CPUUExec)                       AS CpuTime,
(CpuTime * 10)/(CentiSecs * NCPUs)          AS CpuPct,

/* All WorkTypes in use, averaged per AMP */


(WorkTypeInuse00 + WorkTypeInuse01 + WorkTypeInuse02 + WorkTypeInuse03 +
WorkTypeInuse04 + WorkTypeInuse05 + WorkTypeInuse06 + WorkTypeInuse07 +
WorkTypeInuse08 + WorkTypeInuse09 + WorkTypeInuse10 + WorkTypeInuse11 +
WorkTypeInuse12 + WorkTypeInuse13 + WorkTypeInuse14 + WorkTypeInuse15)
/ NULLIFZERO(AMPcount)
AS WorkTypeInuseAmp,
/* Max of WorkTypesInuse for all AMPs (NOT per AMP) */
/* Can NOT divide by AMPs. MAX(SUM(AWT[AMP])) */
(WorkTypeMax00 + WorkTypeMax01 + WorkTypeMax02 + WorkTypeMax03 +
WorkTypeMax04 + WorkTypeMax05 + WorkTypeMax06 + WorkTypeMax07 +
WorkTypeMax08 + WorkTypeMax09 + WorkTypeMax10 + WorkTypeMax11 +
WorkTypeMax12 + WorkTypeMax13 + WorkTypeMax14 + WorkTypeMax15)
AS WorkTypeInuseMax,
(ProcBlksFsgRead + ProcBlksFsgWrite + ProcBlksFsgNIOs) AS IODelay,
(ProcWaitFsgRead + ProcWaitFsgWrite + ProcWaitFsgNIOs) AS IODelayTime,
(FilePDbAcqs + FilePDbPres)         AS LogicalReadPerm,
FilePDbDyRRels                      AS LogicalWritePerm,
(FilePDbAcqReads + FilePDbPreReads) AS PhysicalReadPerm,
FilePDbFWrites                      AS PhysicalWritePerm,
FilePDbFWriteKB                     AS PhysicalWritePermKB,
(NetPtPReads + NetBrdReads)         AS NetReads,
(NetPtPWrites + NetBrdWrites)       AS NetWrites,
(FilePCiAcqs + FileSDbAcqs + FileSCiAcqs + FileTJtAcqs + FileAPtAcqs +
 FilePCiPres + FileSDbPres + FileSCiPres + FileTJtPres + FileAPtPres)
  AS LogicalReadOther,
(FilePCiAcqReads + FileSDbAcqReads + FileSCiAcqReads + FileTJtAcqReads + FileAPtAcqReads +
 FilePCiPreReads + FileSDbPreReads + FileSCiPreReads + FileTJtPreReads + FileAPtPreReads)
  AS PhysicalReadOther,
(FilePCiDyRRels + FileSDbDyRRels + FileSCiDyRRels + FileTJtDyRRels + FileAPtDyRRels)
  AS LogicalWriteOther,
(FilePCiFWrites + FileSDbFWrites + FileSCiFWrites + FileTJtFWrites + FileAPtFWrites)
  AS PhysicalWriteOther,
(FilePCiFWriteKB + FileSDbFWriteKB + FileSCiFWriteKB + FileTJtFWriteKB + FileAPtFWriteKB)
  AS PhysicalWriteOtherKB,
( FilePDbAcqKB + FilePDbPresKB )    AS LogicalReadPermKB,
( FileSDbAcqKB + FileSDbPresKB +
  FilePCiAcqKB + FileSCiAcqKB + FileTJtAcqKB + FileAPtAcqKB +
  FilePCiPresKB + FileSCiPresKB + FileTJtPresKB + FileAPtPresKB )
  AS LogicalReadOtherKB,
( FilePDbPreReadKB + FilePDbAcqReadKB ) AS PhysicalReadPermKB,
( FilePCiAcqReadKB + FileSDbAcqReadKB + FileSCiAcqReadKB + FileTJtAcqReadKB + FileAPtAcqReadKB +
  FilePCiPreReadKB + FileSDbPreReadKB + FileSCiPreReadKB + FileTJtPreReadKB + FileAPtPreReadKB )
  AS PhysicalReadOtherKB,
FilePDbDyRRelKB                     AS LogicalWritePermKB,
( FileSDbDyRRelKB +
  FilePCiDyRRelKB + FileSCiDyRRelKB + FileTJtDyRRelKB + FileAPtDyRRelKB )
  AS LogicalWriteOtherKB,
(ProcBlksSegNoVirtual + ProcBlksSegMDL + ProcBlksSegLock) AS ProcBlksSeg,
(ProcWaitSegNoVirtual + ProcWaitSegMDL + ProcWaitSegLock) AS ProcWaitSeg,
(ProcBlksMisc + ProcBlksNetThrottle + ProcBlksQnl +ProcBlksTime + ProcBlksFlowControl)
AS ProcBlksOther,
/* Donald.P: ProcBlksMonResume = covered by DBLocks */
(ProcWaitMisc + ProcWaitMonResume + ProcWaitNetThrottle +ProcWaitQnl +
ProcWaitTime + ProcWaitFlowControl)
AS ProcWaitOther,
/* Average number of AWTs used based on WorkTimeInuse */
WorkTimeInuse/(Centisecs*10) AS AwtUsedAvg,

/* iota limit per WD */
( UsedIota*100 / NULLIFZERO(FullPotentialIota) ) AS Used_FullPotential_ByWd,

/* iota limit by WD */
( UsedIota*100 / NULLIFZERO(CodPotentialIota ) ) AS Used_CodPotential_ByWd,
/* Remaining table fields */
FROM DBC.ResUsageSps WITH CHECK OPTION;

Note: The ResUsageSps table fields have been removed from this sample output.
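The iota-limit expressions above multiply the used iotas by 100 and divide by a NULLIFZERO-guarded potential, so a zero potential produces NULL instead of an error. A minimal Python sketch of that pattern (helper names are ours, for illustration):

```python
def nullifzero(x):
    """Teradata NULLIFZERO: turn 0 into NULL (None) so division yields NULL."""
    return None if x == 0 else x

def used_full_potential_pct(used_iota, full_potential_iota):
    """UsedIota*100 / NULLIFZERO(FullPotentialIota), as in the view."""
    denom = nullifzero(full_potential_iota)
    if denom is None:
        return None   # surfaces as a question mark in macro output
    return used_iota * 100 / denom

# 250 of 1000 potential iotas used -> 25.0 percent
print(used_full_potential_pct(250, 1000))
```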

ResSvdskView
ResSvdskView is based on the ResUsageSvdsk table.
REPLACE VIEW DBC.ResSvdskView
AS SELECT
/* housekeeping fields */
thedate,
NodeID      (FORMAT '999-99')   AS NodeID,
thetime,
GmtTime,
NodeType    (FORMAT 'X(8)')     AS NodeType,
TheTimestamp,
CentiSecs   (FORMAT '-------9') AS CentiSecs,
Secs        (FORMAT '----9')    AS Secs,
NominalSecs (FORMAT 'ZZZ9')     AS NominalSecs,
CodFactor   (FORMAT '-----9')   AS CodFactor,
SummaryFlag (FORMAT 'X(1)')     AS SummaryFlag,
Reserved    (FORMAT 'X(1)')     AS Reserved,
VprId,
/* Aliased Fields */
/* PM/WM CODs */
CodFactor AS PM_CPU_COD, /* PM CPU Capacity On Demand factor */

/* Spare Field usage */
/* PM/WM CODs */
Spare00  AS WM_CPU_COD, /* WM CPU Capacity On Demand factor */
Spare01  AS WM_IO_COD,  /* WM IO Capacity On Demand factor */
SpareInt AS PM_IO_COD,  /* PM IO Capacity On Demand factor */

/* transformed fields */
/* PM/WM CODs */
( PM_CPU_COD * WM_CPU_COD / 1000 ) (FORMAT '----9') AS CPU_COD, /* effective CPU COD */
( PM_IO_COD * WM_IO_COD / 100 ) (FORMAT '----9') AS IO_COD, /* effective IO COD */
/* Changes in GroupId definition affect the displayed grouping in
 * the Res*ByGroup macros. The default setting below shows how to
 * select different node families, but does not differentiate the
 * resulting groups. If a coexistence system had 5550, 5600 and 5650 nodes,
 * the CASE expression might look like:
 *   WHEN NodeType IN ('5650H') THEN '5650Nodes'
 *   WHEN NodeType IN ('5600H') THEN '5600Nodes'
 *   ELSE '5550Nodes'
 */
CASE
  WHEN NodeType IN ('5650H') THEN 'A'
  ELSE 'A'
END AS GroupId,
/* Remaining table fields */
FROM DBC.ResUsageSvdsk WITH CHECK OPTION;

Note: The ResUsageSvdsk table fields have been removed from this sample output.

ResSvprView
This view allows data to be properly presented and reports all the columns available from the
ResUsageSvpr table.
Note: The data columns in this view will change as the columns in the ResUsageSvpr table
change.
REPLACE VIEW DBC.ResSvprView
AS SELECT
/* housekeeping fields */
thedate,
NodeID,
thetime,
GmtTime,
NodeType,
TheTimestamp,
CentiSecs,
Secs,
NominalSecs,
CodFactor,
SummaryFlag,
Reserved,
NCPUs,
vprid,
VprType,
/* Aliased fields */
/* PM/WM CODs */
CodFactor AS PM_CPU_COD,       /* PM CPU Capacity On Demand factor */
/* Spare field usage */
/* TIM stats */
Spare00 AS VHAgedOut,          /* data blocks aged out of VH Cache */
Spare01 AS VHAgedOutKB,        /* KB aged out of VH Cache */
Spare02 AS VHLogicalDBRead,    /* logical db reads from VH Cache */
Spare03 AS VHLogicalDBReadKB,  /* KB of logical db reads from VH Cache */
Spare04 AS VHPhysicalDBRead,   /* physical db reads for VH data */
Spare05 AS VHPhysicalDBReadKB, /* KB of physical db reads for VH data */
/* PM/WM CODs */
Spare06 AS WM_CPU_COD,         /* WM CPU Capacity On Demand factor */
/* DTIM inuse cache size */
Spare07 AS VHCacheInuseKB,     /* current inuse VH cache KB */
/* transformed fields */
/*PM/WM CODs */
( PM_CPU_COD * WM_CPU_COD / 1000 ) (FORMAT '----9') AS CPU_COD, /* effective CPU COD */
/* Changes in GroupId definition affect the displayed grouping in
 * the Res*ByGroup macros. The default setting below shows how to
 * select different node families, but does not differentiate the
 * resulting groups. If a coexistence system had 5550, 5600 and 5650 nodes,
 * the CASE expression might look like:
 *   WHEN NodeType IN ('5650H') THEN '5650Nodes'
 *   WHEN NodeType IN ('5600H') THEN '5600Nodes'
 *   ELSE '5550Nodes'
 */
CASE
  WHEN NodeType IN ('5650H') THEN 'A'
  ELSE 'A'
END AS GroupId,
/* SVpr table fields (remaining) */
FROM DBC.ResUsageSvpr WITH CHECK OPTION;

Note: The ResUsageSvpr table fields have been removed from this sample output.
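The VHLogicalDBRead and VHPhysicalDBRead counters above pair each logical read of Very Hot data with the physical reads that actually reached disk, so their ratio indicates how well the VH cache is absorbing reads. A hypothetical derived metric, not part of the view itself (the function name is ours):

```python
def vh_physical_read_ratio(vh_logical_db_read, vh_physical_db_read):
    """Fraction of Very Hot logical reads that still required a
    physical read; a value near 0 means the VH cache absorbs most reads."""
    if vh_logical_db_read == 0:
        return None   # no logical VH reads in the period: nothing to report
    return vh_physical_db_read / vh_logical_db_read

# 20 physical reads behind 100 logical reads -> 0.2
print(vh_physical_read_ratio(100, 20))
```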

CHAPTER 15
Resource Usage Macros
This chapter describes the output format of the resource usage macros and each macro.

Macro Output Format


Resource usage macros provide output in the following general format.
<Report Date>       <Title of Report>                             Page <num>

                         1st    2nd    1st      2nd      3rd
Date     Time     Type   Id     Id     Stat     Stat     Stat ...
-------- -------- ----   ------ ------ -------  -------- --------
99/99/99 99:99:99 AAAA   999-99 999-99 999.99%  99999.99 99999.99
                  AAAA   999-99 999-99 999.99%  99999.99 99999.99
                         999-99 999-99 999.99%  99999.99 99999.99
         99:99:99 AAAA   999-99 999-99 999.99%  99999.99 99999.99
...........
where:
Column                    Description

Date                      The date at the end of a log interval.

Time                      The time at the end of a log interval. Statistics on each
                          line cover the time period ending at the indicated time.

Type                      A virtual processor type, logical device type, host type,
                          or a special type for certain reports.

1st ID, 2nd ID, etc.      The appropriate identifier, which varies depending on the
                          macro. It is one or more of the following: NodeID,
                          VprocID, HostID, GroupID.

1st Stat, 2nd Stat, etc.  The appropriate statistics. Details are given with the
                          descriptions of each macro in this chapter.
                          Numbers are generally displayed with the appropriate fixed
                          format (for example, 'zzzz9.99') unless the number
                          represents a percentage or sum of percentages.
                          Percentages are displayed with the appropriate format (for
                          example, 'zz9.9%', 'zz9' or 'zz9.99').

Unless otherwise specified, all statistical numbers are expressed as one of the following:
• Percentage
• Millisecond (ms)
• Kilobyte (KB)
Columns whose values depend on the logging rate are never reported as raw data. Instead,
they are converted to a normalized value, such as per second.
All reports are ordered by date, time, type, 1st ID, 2nd ID, and so forth. Repeated date, time,
type, and ID column values are suppressed until a new row presents a different value.

Question Marks
Question marks used as values in the output examples are generated when a division by
zero occurs; they represent data that is not available. The numbers in the columns are
calculated, for example, by dividing KB by the number of blocks read. When there are no
blocks read, KB is divided by zero. A question mark does not mean there is an error; it
indicates that there is no information to report for this time period.
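The rendering rule just described can be sketched in a few lines: a NULL produced by a zero divisor is printed as a question mark, while ordinary values use a fixed numeric format. This is an illustration of the behavior, not Teradata's actual formatting code (the function name and field width are ours):

```python
def fmt_stat(value, width=8):
    """Render one report cell: NULL (from division by zero) prints as '?',
    everything else uses a fixed two-decimal format."""
    if value is None:
        return '?'.rjust(width)
    return f'{value:{width}.2f}'

print(fmt_stat(12.3))   # '   12.30'
print(fmt_stat(None))   # '       ?'
```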

Usage Notes
To get current data, logging must be enabled on the resource usage table used by the view or
macro.

ResAWT Macros
Function

Macro          Reports the average AMP Worker Task...
ResAWT         in use for all AMPs in the system.
ResAWTByAMP    in use for each AMP.
ResAWTByNode   on all AMPs in each node.

Input Format Examples


The input forms of these three macros are described below.
EXEC ResAWT (FromDate,ToDate,FromTime,ToTime);
EXEC ResAWTByAMP (FromDate,ToDate,FromTime,ToTime);
EXEC ResAWTByNode (FromDate,ToDate,FromTime,ToTime,FromNode,ToNode);

See Macro Execution on page 31 for a description of the FromDate, ToDate, FromTime,
ToTime, FromNode, ToNode and Node parameters.

Usage Notes
You must have logging enabled on the ResUsageSawt table.

Output Examples
The reports in the following sections are sample output reports from the ResAWT,
ResAWTByAMP, and ResAWTByNode macros.
In the ResAWT output report, the statistics columns, after the Date and Time columns,
provide a summary of the AMP Worker Tasks resource usage.
The following table describes the statistics columns, after the Date and Time columns, in the
ResAWTByAMP output.

Statistics columns    Description
1                     Node ID.
2                     AMP ID.
3 through 24          Summary of AMP Worker Tasks resource usage.

The following table describes the statistics columns, after the Date and Time columns, in the
ResAWTByNode output report.

Statistics columns    Description
1                     Node ID.
2 through 23          Summary of AMP Worker Tasks resource usage.

The following table describes the columns in all output reports (with the exception of
ResAWTByNode, which has the NodeID column, and ResAWTByAMP, which has the Node
ID and AMP ID columns).

Column              Reports the...

Mail box Depth      current depth of the AMP work mailbox.

In Flow Ctl         AMP that is or is not in flow control.

Flow Ctls Per Sec   number of times during the log period that the system entered the
                    flow control state from a non-flow controlled state.

Work New AWTs       current number of AMP Worker Tasks in use during the log period
                    for each new work type for the VprId vproc.

Work One AWTs       current number of AMP Worker Tasks in use during the log period
                    for each first-level secondary work type for the VprId vproc.

New + One AWTs      summary of the previous two columns, Work New AWTs and Work
                    One AWTs.

Work Two AWTs       current number of AMP Worker Tasks in use during the log period
                    for each second-level secondary work type for the VprId vproc.

Work 3 AWTs         current number of AMP Worker Tasks in use during the log period
                    for each third-level secondary work type for the VprId vproc.

Work Abrt AWTs      current number of AMP Worker Tasks in use during the log period
                    for each transaction abort request for the VprId vproc.

Work Spwn AWTs      current number of AMP Worker Tasks in use during the log period
                    for each spawned abort request for the VprId vproc.

Work Norm AWTs      current number of AMP Worker Tasks in use during the log period
                    for each message that does not fall within the standard work type
                    hierarchy for the VprId vproc.

Work Ctl AWTs       Note: This column is not currently used.

Work Exp AWTs       current number of AMP Worker Tasks in use during the log period
                    for each expedited Allocation Group for the VprId vproc.
                    (Expedited Allocation Groups exist when using the reserved AMP
                    Worker Task pool. See the Priority Scheduler (schmon) chapter in
                    Utilities for details.)

Max Work New AWTs   the maximum number of AMP Worker Tasks in use at one time
                    during the log period for each new work type for the VprId vproc.

Max Work One AWTs   the maximum number of AMP Worker Tasks in use at one time
                    during the log period for each first-level secondary work type for
                    the VprId vproc.

Max Work Two AWTs   the maximum number of AMP Worker Tasks in use at one time
                    during the log period for each second-level secondary work type
                    for the VprId vproc.

Max Work 3 AWTs     the maximum number of AMP Worker Tasks in use at one time
                    during the log period for each third-level secondary work type for
                    the VprId vproc.

Max Work Abrt AWTs  the maximum number of AMP Worker Tasks in use at one time
                    during the log period for each transaction abort request for the
                    VprId vproc.

Max Work Spwn AWTs  the maximum number of AMP Worker Tasks in use at one time
                    during the log period for each spawned abort request for the
                    VprId vproc.

Max Work Norm AWTs  the maximum number of AMP Worker Tasks in use at one time
                    during the log period for each message that does not fall within
                    the standard work type hierarchy for the VprId vproc.

Max Work Ctl AWTs   Note: This column is not currently used.

Max Work Exp AWTs   the maximum number of AMP Worker Tasks in use at one time
                    during the log period for each expedited Allocation Group for the
                    VprId vproc. (Expedited Allocation Groups exist when using the
                    reserved AMP Worker Task pool. See the Priority Scheduler
                    (schmon) chapter in Utilities for details.)

For a complete description of the columns above, see Chapter 7: ResUsageSawt Table.

Resource Usage Macros and Tables


ResAWT Sample Output

07/08/17       AMP Worker Task Summary Average Usage per AMP Across System       Page 1

                  Mail  In    Flow   Work Work New+ Work Work Work Work Work Work Work  Max  Max  Max Max Max  Max  Max  Max Max
                  Box   Flow  Ctls   New  One  One  Two  3    Abrt Spwn Norm Ctl  Exp   Work Work Work Work Work Work Work Work Work
Date     Time     Depth Ctl?  PerSec AWTs AWTs AWTs AWTs AWTs AWTs AWTs AWTs AWTs AWTs  New  One  Two  3   Abrt Spwn Norm Ctl  Exp
-------- -------- ----- ----- ------ ---- ---- ---- ---- ---- ---- ---- ---- ---- ----  ---- ---- ---- --- ---- ---- ---- ---- ----
07/08/12 23:00:00     8  1.00   0.01   31   22   54    0    0    0    0    0    0    0    35   25    2   1    1    0    0    0    3
         23:01:00     9  0.00   0.02   28   25   54    0    0    0    0    0    0    0    33   27    1   1    1    0    0    0    2
         ...  (one row per minute through 23:22:00)

ResAWTByAMP Sample Output

07/08/17                  AMP Worker Task Summary Usage per AMP

                  AMP Mail  In    Flow   Work Work New+ Work Work Work Work Work Work Work  Max  Max  Max Max Max  Max  Max  Max Max
                  Id  Box   Flow  Ctls   New  One  One  Two  3    Abrt Spwn Norm Ctl  Exp   Work Work Work Work Work Work Work Work Work
Date     Time         Depth Ctl?  PerSec AWTs AWTs AWTs AWTs AWTs AWTs AWTs AWTs AWTs AWTs  New  One  Two  3   Abrt Spwn Norm Ctl  Exp
-------- -------- --- ----- ----- ------ ---- ---- ---- ---- ---- ---- ---- ---- ---- ----  ---- ---- ---- --- ---- ---- ---- ---- ----
07/08/12 23:00:00   0     3  0.00   0.00   31   22   53    0    0    1    0    0    0    0    33   24    2   1    1    0    0    0    3
                    1     1  0.00   0.00   32   22   54    0    0    0    0    0    0    0    34   24    1   0    0    0    0    0    2
         ...  (one row per AMP through AMP 15)

ResAWTByNode Sample Output

07/08/17            AMP Worker Task Summary Average Usage per AMP By Node            Page 1

                  Node Mail  In    Flow   Work Work New+ Work Work Work Work Work Work Work  Max  Max  Max Max Max  Max  Max  Max Max
                  Id   Box   Flow  Ctls   New  One  One  Two  3    Abrt Spwn Norm Ctl  Exp   Work Work Work Work Work Work Work Work Work
Date     Time          Depth Ctl?  PerSec AWTs AWTs AWTs AWTs AWTs AWTs AWTs AWTs AWTs AWTs  New  One  Two  3   Abrt Spwn Norm Ctl  Exp
-------- -------- ---- ----- ----- ------ ---- ---- ---- ---- ---- ---- ---- ---- ---- ----  ---- ---- ---- --- ---- ---- ---- ---- ----
07/08/12 23:00:00 1-04     8  1.00   0.01   31   22   54    0    0    0    0    0    0    0    35   25    2   1    1    0    0    0    3
         23:01:00 1-04     9  0.00   0.02   28   25   54    0    0    0    0    0    0    0    33   27    1   1    1    0    0    0    2
         ...  (one row per minute through 23:20:00)

ResCPUByAMP Macros
Function

Macro...              Reports the following...

ResCPUByAMP           how each AMP on each node utilizes the CPUs.

ResCPUByAMPOneNode    how each AMP on a specific node utilizes the CPUs.

ResAmpCpuByGroup      the summary of AMP CPU usage by node grouping.

Input Format Examples

The input forms of these three macros are described below.
EXECUTE ResCPUByAMP
(FromDate,ToDate,FromTime,ToTime,FromNode,ToNode);
EXECUTE ResCPUByAMPOneNode
(FromDate,ToDate,FromTime,ToTime,Node);
EXECUTE ResAmpCpuByGroup
(FromDate,ToDate,FromTime,ToTime);

See Macro Execution on page 31 for a description of the FromDate, ToDate, FromTime,
ToTime, FromNode, ToNode, and Node parameters.

Usage Notes
You must have logging enabled on the ResUsageSvpr table.

Output Examples
The reports in the following sections are sample output reports from the ResCPUByAMP, the
ResCPUByAMPOneNode, and the ResAmpCpuByGroup macros, respectively, where:
Column...            Reports the percent of time AMPs were busy doing user...

Awt User Serv%       service for the AMP Worker Task (Awt) partition.

Misc User Serv%      service for miscellaneous (all other except Partition 0)
                     AMP partitions.

Awt User Exec%       execution within the AMP Worker Task (Awt) partition.

Misc User Exec%      execution within miscellaneous (all other except
                     Partition 0) AMP partitions.

Total User Serv%     service work. This is the sum of the Awt User Serv%, the
                     Misc User Serv%, and AMP Partition 0 user service%.
                     Note: User service is the time that a CPU is busy
                     executing kernel system calls or servicing I/O and timer
                     hardware interrupts.

Total User Exec%     execution work. This is the sum of the Awt User Exec%,
                     Misc User Exec%, and AMP Partition 0 user execution.
                     Note: User execution is the time a CPU is busy executing
                     user execution code, which is the time spent in a user
                     state on behalf of a process.

Total Busy%          service and execution work. This is the sum of the Total
                     User Serv% and the Total User Exec% columns.

Note: The above CPU statistics represent the aggregate of all time spent in the indicated way
by all CPUs on the node. Because there are multiple CPUs, the Total Busy % should be
compared to a theoretical maximum of 100% times the number of CPUs on the node.
The Node CPU column in the following sample outputs reports the number of CPUs
(NCPUs).
For more information on how to monitor busy AMP Worker Tasks, see AWT Monitor
(awtmon) in Utilities.
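As a small sketch of the note above: the CPU percentages aggregate time across all CPUs on the node, so Total Busy % is judged against 100% times NCPUs rather than against 100%. The values below come from the 09:57:00 row of the sample output; the helper function is hypothetical, not a Teradata API.

```python
# Sketch: compare a vproc's Total Busy % to the node's CPU capacity ceiling.

def node_capacity_pct(ncpus):
    """Theoretical maximum Total Busy % for a node with ncpus CPUs."""
    return 100.0 * ncpus

total_user_serv = 0.36   # Total User Serv% for vproc 0 at 09:57:00
total_user_exec = 0.05   # Total User Exec% for vproc 0 at 09:57:00
total_busy = round(total_user_serv + total_user_exec, 2)
print(total_busy)              # 0.41
print(node_capacity_pct(4))    # 400.0, the busy% ceiling on a 4-CPU node
```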

ResCPUByAMP Sample Output

Note: The NodeID column only appears in the ResCPUByAMP output report.

01/07/12                           CPU USAGE BY AMP                           Page 1

                  Vproc  Node    Node     Awt    Misc     Awt    Misc   Total   Total   Total
                                          User   User    User    User    User    User    Busy
Date     Time     Id     Id      CPUs    Serv%   Serv%   Exec%   Exec%   Serv%   Exec%       %
-------- -------- -----  ------  ----  -------  ------  ------  ------  ------  ------  ------
01/07/12 09:57:00     0  001-01     4    0.36%   0.00%   0.05%   0.00%   0.36%   0.05%   0.41%
                      1  001-01     4    0.26%   0.00%   0.12%   0.00%   0.30%   0.12%   0.42%
         09:57:20     0  001-01     4    0.41%   0.00%   0.12%   0.00%   0.45%   0.12%   0.58%
                      1  001-01     4    0.34%   0.00%   0.05%   0.00%   0.38%   0.05%   0.42%
         09:57:40     0  001-01     4    0.25%   0.00%   0.18%   0.00%   0.28%   0.18%   0.45%
                      1  001-01     4    0.19%   0.00%   0.06%   0.00%   0.29%   0.06%   0.35%
         09:58:00     0  001-01     4    0.38%   0.00%   0.08%   0.00%   0.45%   0.08%   0.52%
                      1  001-01     4    0.31%   0.00%   0.09%   0.00%   0.34%   0.09%   0.42%
         09:58:20     0  001-01     4    0.31%   0.00%   0.08%   0.00%   0.34%   0.08%   0.41%
                      1  001-01     4    0.36%   0.00%   0.09%   0.00%   0.40%   0.09%   0.49%
         09:58:40     0  001-01     4    0.39%   0.00%   0.11%   0.00%   0.41%   0.11%   0.52%
                      1  001-01     4    0.32%   0.00%   0.12%   0.00%   0.36%   0.12%   0.49%
         09:59:00     0  001-01     4    0.29%   0.00%   0.11%   0.00%   0.30%   0.11%   0.41%
                      1  001-01     4    0.21%   0.00%   0.09%   0.00%   0.22%   0.09%   0.31%
         09:59:20     0  001-01     4    0.30%   0.00%   0.06%   0.00%   0.31%   0.06%   0.38%
                      1  001-01     4    0.30%   0.00%   0.19%   0.00%   0.32%   0.19%   0.51%
         09:59:40     0  001-01     4    0.40%   0.00%   0.09%   0.00%   0.46%   0.09%   0.55%
                      1  001-01     4    0.26%   0.00%   0.08%   0.00%   0.38%   0.08%   0.45%
         10:00:00     0  001-01     4    0.32%   0.00%   0.08%   0.00%   0.34%   0.08%   0.41%
                      1  001-01     4    0.28%   0.00%   0.09%   0.00%   0.31%   0.09%   0.40%


ResCPUByAMPOneNode Sample Output

01/07/12               CPU Usage by AMP for Node 001-01 (4 CPUs)               Page 68

                  Vproc            Awt    Misc     Awt    Misc   Total   Total   Total
                                   User   User    User    User    User    User    Busy
Date     Time     Id     NCPUs    Serv%   Serv%   Exec%   Exec%   Serv%   Exec%       %
-------- -------- -----  -----  -------  ------  ------  ------  ------  ------  ------
01/07/12 09:57:00     0      4    0.36%   0.00%   0.05%   0.00%   0.36%   0.05%   0.41%
                      1      4    0.26%   0.00%   0.12%   0.00%   0.30%   0.12%   0.42%
         09:57:20     0      4    0.41%   0.00%   0.12%   0.00%   0.45%   0.12%   0.58%
                      1      4    0.34%   0.00%   0.05%   0.00%   0.38%   0.05%   0.42%
         09:57:40     0      4    0.25%   0.00%   0.18%   0.00%   0.28%   0.18%   0.45%
                      1      4    0.19%   0.00%   0.06%   0.00%   0.29%   0.06%   0.35%
         09:58:00     0      4    0.38%   0.00%   0.08%   0.00%   0.45%   0.08%   0.52%
                      1      4    0.31%   0.00%   0.09%   0.00%   0.34%   0.09%   0.42%
         09:58:20     0      4    0.31%   0.00%   0.08%   0.00%   0.34%   0.08%   0.41%
                      1      4    0.36%   0.00%   0.09%   0.00%   0.40%   0.09%   0.49%
         09:58:40     0      4    0.39%   0.00%   0.11%   0.00%   0.41%   0.11%   0.52%
                      1      4    0.32%   0.00%   0.12%   0.00%   0.36%   0.12%   0.49%
         09:59:00     0      4    0.29%   0.00%   0.11%   0.00%   0.30%   0.11%   0.41%
                      1      4    0.21%   0.00%   0.09%   0.00%   0.22%   0.09%   0.31%

ResAmpCpuByGroup Sample Output

Note: The GroupID column only appears in the ResAmpCpuByGroup output report.

01/07/12                       AMP CPU USAGE BY GROUP                       Page 45

                  Group  Node     Awt    Misc     Awt    Misc   Total   Total   Total
                  Id                     User    User    User    User    User    User    Busy
Date     Time            CPUs    Serv%   Serv%   Exec%   Exec%   Serv%   Exec%       %
-------- -------- -----  ----  -------  ------  ------  ------  ------  ------  ------
01/07/12 09:51:40 A         4    0.32%   0.00%   0.07%   0.00%   0.36%   0.07%   0.43%
         09:52:00           4    0.33%   0.00%   0.08%   0.00%   0.36%   0.08%   0.44%
         09:52:20           4    0.35%   0.00%   0.07%   0.00%   0.37%   0.07%   0.44%
         09:52:40           4    0.36%   0.00%   0.09%   0.00%   0.39%   0.09%   0.48%
         09:53:00           4    0.27%   0.00%   0.09%   0.00%   0.28%   0.09%   0.37%
         09:53:20           4    0.29%   0.00%   0.06%   0.00%   0.34%   0.06%   0.40%
         09:53:40           4    0.36%   0.00%   0.06%   0.00%   0.40%   0.06%   0.46%
         09:54:00           4    0.35%   0.00%   0.11%   0.00%   0.38%   0.11%   0.49%
         09:54:20           4    0.34%   0.00%   0.07%   0.00%   0.36%   0.07%   0.43%
         09:54:40           4    0.41%   0.00%   0.04%   0.00%   0.43%   0.04%   0.47%
         09:55:00           4    0.28%   0.00%   0.09%   0.00%   0.28%   0.09%   0.37%
         09:55:20           4    0.35%   0.00%   0.09%   0.00%   0.43%   0.09%   0.53%
         09:55:40           4    0.34%   0.00%   0.06%   0.00%   0.42%   0.06%   0.48%
         09:56:00           4    0.26%   0.00%   0.08%   0.00%   0.29%   0.08%   0.37%

Normalized Viewing of CPU Usage by AMP


Some users may prefer to view CPU usage by AMP in a normalized fashion. Conceptually, this
restates each of the above statistics in terms of percentage of total CPU capacity of the node.
The following SQL example shows how to perform this normalization for the Total Busy %
statistic.
SEL TheDate, TheTime, Vproc, NodeID,
(AmpTotalUserExec+AmpTotalUserServ)
/Secs/NCPUs
(FORMAT 'zz9%', TITLE 'Total// Busy// %')
FROM ResCpuUsageByAMPView
WHERE TheDate = CURRENT_DATE AND TheTime > 080000
ORDER BY 1,2,3;
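The normalization can also be sketched outside SQL. Assuming the logged CPU columns are in centiseconds, so that dividing by the period length in seconds (Secs) and by the CPU count (NCPUs) yields a percentage of node capacity, a hypothetical Python equivalent is:

```python
# Hedged sketch of the normalization; names mirror the view columns but
# this is illustrative, not the Teradata API. Input values are hypothetical.

def normalized_busy_pct(amp_total_user_exec, amp_total_user_serv, secs, ncpus):
    """(Exec + Serv) / Secs / NCPUs, restated as a percent of node capacity."""
    return (amp_total_user_exec + amp_total_user_serv) / secs / ncpus

# e.g. 120 + 40 centiseconds of CPU over a 600-second log period on 4 CPUs
print(normalized_busy_pct(120, 40, 600, 4))
```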


ResCPUByPE Macros
Function

Macro...             Reports...

ResCPUByPE           how each PE on each node is utilizing the CPUs.

ResCPUByPEOneNode    how each PE on a specific node is utilizing the CPUs.

ResPeCpuByGroup      the PE CPU utilization summarized by a node grouping.

Input Format Examples

The input forms of these three macros are described below.
EXEC ResCPUByPE
(FromDate,ToDate,FromTime,ToTime,FromNode,ToNode);
EXEC ResCPUByPEOneNode
(FromDate,ToDate,FromTime,ToTime,Node);
EXEC ResPeCpuByGroup
(FromDate,ToDate,FromTime,ToTime);

See Macro Execution on page 31 for a description of the FromDate, ToDate, FromTime,
ToTime, FromNode, ToNode, and Node parameters.

Usage Notes
You must have logging enabled on the ResUsageSvpr table.

Output Examples
The reports in the following sections are sample output reports from the ResCPUByPE,
ResCPUByPEOneNode, and ResPeCPUByGroup macros, respectively, where:

Column...            Reports the percent of time PEs are busy doing user...

Disp User Serv%      service for the Dispatcher partition of the PE.

Ses User Serv%       service for the Session Control partition of the PE.

Misc User Serv%      service for miscellaneous (all other, except Partition 0)
                     PE partitions.

Disp User Exec%      execution within the Dispatcher partition of the PE.

Ses User Exec%       execution within the Session Control partition of the PE.

Misc User Exec%      execution within miscellaneous (all other, except
                     Partition 0) PE partitions.

Total User Serv%     service work. This is the sum of the user service columns
                     above plus PE Partition 0 user service.

Total User Exec%     execution work. This is the sum of the user execution
                     columns above plus PE Partition 0 user execution.

Total Busy%          service and execution work. This is the sum of the Total
                     User Serv% and the Total User Exec% columns.

Note: The above CPU statistics represent the aggregate of all time spent in the indicated way
by all CPUs on the node. Because there are multiple CPUs, the Total Busy % should be
compared to a theoretical maximum of 100% times the number of CPUs on the node.
The Node CPU column in the following sample outputs reports the number of CPUs
(NCPUs).
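The composition of the Total columns can be sketched briefly. All values below are hypothetical, and "part0" stands for the PE Partition 0 time that the report adds into the totals but does not list as its own column:

```python
# Sketch: derive the Total columns from the per-partition columns.

def pe_totals(serv_parts, exec_parts):
    """Return (Total User Serv%, Total User Exec%, Total Busy%)."""
    total_serv = round(sum(serv_parts), 2)
    total_exec = round(sum(exec_parts), 2)
    return total_serv, total_exec, round(total_serv + total_exec, 1)

serv = [0.05, 0.00, 0.00, 0.03]    # Disp, Ses, Misc, part0 service (hypothetical)
exec_ = [0.01, 0.00, 0.00, 0.57]   # Disp, Ses, Misc, part0 execution (hypothetical)
print(pe_totals(serv, exec_))      # (0.08, 0.58, 0.7)
```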

ResCPUByPE Sample Output

Note: The NodeID column only appears in the ResCPUByPE output report.

01/07/12                             CPU USAGE BY PE                             Page 1

                  Vproc Node   Node   Disp   Ses    Misc   Disp   Ses    Misc   Total  Total  Total
                               CPUs   User   User   User   User   User   User   User   User   Busy
Date     Time     Id    Id            Serv%  Serv%  Serv%  Exec%  Exec%  Exec%  Serv%  Exec%      %
-------- -------- ----- ------ ----  ------ ------ ------ ------ ------ ------ ------ ------ ------
01/07/12 09:57:00 16382 001-01    4   0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%
                  16383 001-01    4   0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%
         09:57:20 16382 001-01    4   0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%
                  16383 001-01    4   0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%
         09:57:40 16382 001-01    4   0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%
                  16383 001-01    4   0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%
         09:58:00 16382 001-01    4   0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%
                  16383 001-01    4   0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%
         09:58:20 16382 001-01    4   0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%
                  16383 001-01    4   0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%
         09:58:40 16382 001-01    4   0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%
                  16383 001-01    4   0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%
         09:59:00 16382 001-01    4   0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%
                  16383 001-01    4   0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%
         09:59:20 16382 001-01    4   0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%
                  16383 001-01    4   0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%
         09:59:40 16382 001-01    4   0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%
                  16383 001-01    4   0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%
         10:00:00 16382 001-01    4   0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%
                  16383 001-01    4   0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%

ResCPUByPEOneNode Sample Output

01/09/13                CPU Usage by PE for Node 001-01 (4 CPUs)                Page 4

                  Vproc Node   Disp   Ses    Misc   Disp   Ses    Misc   Total  Total  Total
                        CPUs   User   User   User   User   User   User   User   User   Busy
Date     Time     Id           Serv%  Serv%  Serv%  Exec%  Exec%  Exec%  Serv%  Exec%      %
-------- -------- ----- ----  ------ ------ ------ ------ ------ ------ ------ ------ -----
01/08/21 15:41:00     2    4   0.05%  0.00%  0.00%  0.01%  0.00%  0.00%  0.08%  0.58%  0.7%
         15:42:00     2    4   0.01%  0.00%  0.00%  0.00%  0.00%  0.00%  0.02%  0.18%  0.2%
         15:43:00     2    4   0.02%  0.00%  0.00%  0.00%  0.00%  0.00%  0.04%  0.20%  0.2%
         15:44:00     2    4   0.02%  0.00%  0.00%  0.00%  0.00%  0.00%  0.05%  0.56%  0.6%
         15:45:00     2    4   0.01%  0.00%  0.00%  0.00%  0.00%  0.00%  0.02%  0.18%  0.2%
         16:21:00     2    4   0.08%  0.00%  0.00%  0.00%  0.00%  0.00%  0.13%  0.70%  0.8%

ResPeCpuByGroup Sample Output

Note: The GroupID column only appears in the ResPeCpuByGroup output report.

01/07/12                         PE CPU USAGE BY GROUP                         Page 8

                  Group Node   Disp   Ses    Misc   Disp   Ses    Misc   Total  Total  Total
                  Id    CPUs   User   User   User   User   User   User   User   User   Busy
Date     Time                  Serv%  Serv%  Serv%  Exec%  Exec%  Exec%  Serv%  Exec%      %
-------- -------- ----- ----  ------ ------ ------ ------ ------ ------ ------ ------ ------
01/07/12 04:55:40 A        4   0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%
         04:56:00          4   0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%
         04:56:20          4   0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%
         04:56:40          4   0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%
         04:57:00          4   0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%
         04:57:20          4   0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%
         04:57:40          4   0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%
         04:58:00          4   0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%
         04:58:20          4   0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%  0.00%

Normalized Viewing of CPU Usage by PE


Some users may prefer to view CPU usage by PEs in a normalized fashion. Conceptually, this
restates each of the above statistics in terms of percentage of total CPU capacity of the node.
The following SQL example shows how to perform this normalization for the Total Busy %
statistic.
SEL TheDate, TheTime, Vproc, NodeID,
(PETotalUserExec+PETotalUserServ)
/Secs/NCPUs
(FORMAT 'zz9%', TITLE 'Total// Busy// %')
FROM ResCpuUsageByPEView
WHERE TheDate = CURRENT_DATE AND TheTime > 080000
ORDER BY 1,2,3;


ResCPUByNode Macros
Function

Macro...          Reports how...

ResCPUByNode      each individual node is utilizing its CPUs.

ResCPUOneNode     a specific node is utilizing its CPUs.

ResCPUByGroup     a specified Node Group is utilizing the system CPUs.

Input Format Examples


The input forms of these three macros are described below.
EXEC ResCPUByNode
(FromDate,ToDate,FromTime,ToTime,FromNode,ToNode);
EXEC ResCPUOneNode
(FromDate,ToDate,FromTime,ToTime,Node);
EXEC ResCPUByGroup
(FromDate,ToDate,FromTime,ToTime);

See Macro Execution on page 31 for a description of the FromDate, ToDate, FromTime,
ToTime, FromNode, ToNode and Node parameters.

Usage Notes
You must have logging enabled on the ResUsageSpma table.

Output Examples
The reports in the following sections are sample output reports from the ResCPUByNode, the
ResCPUOneNode, and the ResCPUByGroup macros.
The following columns are the averages for all CPUs on the node.

This column ...       Lists percentage of time spent ...

I/O Wait %            idle and waiting for I/O completion.

Total User Serv %     busy doing user service work.

Total User Exec %     busy doing user execution work.

Total Busy %          busy doing user service and execution work. This is the
                      sum of the Total User Serv % and the Total User Exec %
                      columns.

where:

This variable         Describes the time a CPU is busy executing

User service          kernel system calls or servicing I/O and timer hardware
                      interrupts.

User execution        user execution code, which is the time spent in a user
                      state on behalf of a process.

ResCPUByNode Sample Output

Note: The NodeID column only appears in the ResCPUByNode output report.

01/07/12                 CPU USAGE BY NODE                 Page 45

                           I/O   Total  Total  Total
                  Node    Wait    User   User   Busy
Date     Time     Id         %   Serv%  Exec%      %
-------- -------- ------  -----  -----  -----  -----
01/07/12 09:51:40 001-01  16.2%   1.4%   0.1%   1.5%
         09:52:00 001-01  17.2%   1.3%   0.2%   1.5%
         09:52:20 001-01  15.5%   1.6%   0.2%   1.8%
         09:52:40 001-01  16.1%   1.5%   0.2%   1.7%
         09:53:00 001-01  15.8%   1.0%   0.2%   1.2%
         09:53:20 001-01  15.5%   1.5%   0.2%   1.7%

ResCPUOneNode Sample Output

01/07/12            CPU Usage for Node 001-01            Page 1

                   I/O   Total  Total  Total
                  Wait    User   User   Busy
Date     Time        %   Serv%  Exec%      %
-------- -------- -----  -----  -----  -----
01/07/12 09:44:20 16.2%   1.6%   0.2%   1.9%
         09:44:40 16.9%   1.3%   0.2%   1.5%
         09:45:00 16.5%   1.1%   0.1%   1.2%
         09:45:20 17.0%   1.7%   0.2%   1.9%
         09:45:40 17.4%   1.1%   0.2%   1.3%
         09:46:00 16.6%   1.3%   0.2%   1.5%
         09:46:20 16.2%   1.6%   0.2%   1.8%


ResCPUByGroup Sample Output

Note: The GroupID column only appears in the ResCpuByGroup output report.

00/10/16                 CPU USAGE BY Group                 Page 2

                          I/O   Total  Total  Total
                  Group  Wait    User   User   Busy
Date     Time     Id        %   Serv%  Exec%      %
-------- -------- -----  -----  -----  -----  -----
00/10/16 11:25:00 A       0.0%   0.0%   0.0%   0.0%
                  B       0.0%   0.0%   0.0%   0.0%
         11:30:00 A       0.0%   0.0%   0.0%   0.0%
                  B       0.0%   0.0%   0.0%   0.0%
         11:35:00 A       0.0%   0.6%   0.6%   1.1%
                  B       0.0%   0.3%   0.4%   0.7%
         11:40:00 A       0.0%   1.3%   0.9%   2.2%
                  B       0.0%   1.1%   0.9%   2.0%
         11:45:00 A       0.0%   0.6%   0.9%   1.5%
                  B       0.0%   0.3%   1.0%   1.3%
         11:50:00 A       0.0%   0.6%   0.6%   1.2%
                  B       0.0%   0.6%   0.8%   1.3%
         11:55:00 A       0.0%   1.5%   1.1%   2.6%
                  B       0.0%   1.6%   1.0%   2.6%
         12:00:00 A       0.0%   0.5%   0.8%   1.3%
                  B       0.0%   0.7%   0.9%   1.6%
         12:05:00 A       0.0%   1.2%   0.7%   1.8%
                  B       0.0%   0.6%   0.5%   1.1%
         12:10:00 A       0.0%   0.6%   0.9%   1.6%
                  B       0.0%   1.1%   1.2%   2.2%
         12:15:00 A       0.0%   0.6%   0.8%   1.4%
                  B       0.0%   0.5%   0.7%   1.2%
         12:20:00 A       0.0%   1.4%   0.8%   2.2%
                  B       0.0%   1.1%   0.8%   1.9%
         12:25:00 A       0.0%   0.9%   1.0%   1.9%
                  B       0.0%   0.9%   0.9%   1.8%
         12:30:00 A       0.0%   0.6%   0.6%   1.2%
                  B       0.0%   0.6%   0.8%   1.4%
         12:35:00 A       0.0%   1.6%   1.1%   2.7%
                  B       0.0%   1.3%   0.9%   2.2%


ResHostByLink Macros
Function

Macro...          Reports the host traffic for...

ResHostByLink     every communication link in the system.

ResHostOneNode    the communication links of a specific node.

ResHostByGroup    the communication links of a node grouping.

Input Format Examples


The input forms of these three macros are described below.
EXEC ResHostByLink
(FromDate,ToDate,FromTime,ToTime);

Note: The ResHostByLink macro syntax does not include the FromNode and ToNode
parameters to specify a range of nodes.
EXEC ResHostOneNode
(FromDate,ToDate,FromTime,ToTime,Node);
EXEC ResHostByGroup
(FromDate,ToDate,FromTime,ToTime);

See Macro Execution on page 31 for a description of the FromDate, FromTime, ToDate,
ToTime, and Node parameters.

Usage Notes
The ResHostByLink macros help you answer the following questions:

Is my setup correct?

Am I making good use of the channels? If not, how heavily are they being used? If usage is
not high, there may not be enough host resources.

Study the incoming traffic. Problems with incoming traffic may simply be caused by an
incorrect configuration. Once the system is configured correctly, if there is still a traffic
problem, consider studying the TCP/IP network traffic; for example, when doing an export,
the ResUsageSpma table may show 30 million rows per log period.

Output Examples
The reports in the following sections are sample output reports from the ResHostByLink, the
ResHostOneNode, and the ResHostByGroup macros, respectively, where:

Column...            Reports the...

Host Type            type of host connection: IBMECS, IBMMUX, IBMNET, or
                     NETWORK.

Host Id              logical ID of the host (or client) with sessions logged
                     on.

KBs Read/Sec         number of KB read per second.

KBs Write/Sec        number of KB written per second.

Blks Read/Sec        number of successful blocks read per second.

Blks Write/Sec       number of successful blocks written per second.

Blk Read Fail %      percentage of block read attempts that failed.

Blk Write Fail %     percentage of block write attempts that failed.

KBs/Blk Read         average number of KB per block read.

KBs/Blk Write        average number of KB per block written.

Msgs/Blk Read        average number of messages per block read.

Msgs/Blk Write       average number of messages per block written.

Avg ReqQ Len         average number of messages queued for output to the host.

Max ReqQ Len         maximum number of messages queued for output to the host.
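The per-block columns are ratios of the raw counters, and the report prints "?" when the denominator is zero, as in the IBMMUX rows of the sample outputs. A hedged sketch of that derivation (function name and inputs are hypothetical):

```python
# Sketch: derive a per-block average; None stands in for the report's '?'.

def per_block(total_per_sec, blocks_per_sec):
    """Average amount per block, or None when no blocks moved."""
    return total_per_sec / blocks_per_sec if blocks_per_sec else None

print(per_block(350.5, 1.0))   # 350.5 KB per block read (hypothetical rates)
print(per_block(0.0, 0.0))     # None, shown as '?' in the report
```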

ResHostByLink Sample Output

Note: The NodeID column only appears in the ResHostByLink output report.

00/10/16              HOST COMMUNICATIONS BY COMMUNICATION LINK

                                             KBs   KBs   Blks  Blks  Blk   Blk    KBs    KBs   Msgs  Msgs  Avg  Max
                  Node   Vproc Host     Host Read  Write Read  Write Read  Write  /Blk   /Blk  /Blk  /Blk  ReqQ ReqQ
Date     Time     Id     Id    Type     Id   /Sec  /Sec  /Sec  /Sec  Fail% Fail%  Read   Write Read  Write Len  Len
-------- -------- ------ ----- -------  ---- ----- ----- ----- ----- ----- -----  -----  ----- ----- ----- ---- ----
00/10/16 11:07:00 105-04 65535 NETWORK     0  24.0  13.3   0.1   0.1  0.0%  0.0%  350.5  186.2   0.8   0.8  0.0  0.0
                               IBMMUX    101   0.0   0.0   0.0   0.0     ?     ?      ?      ?     ?     ?  0.0  0.0
         ...  (rows for node 105-05 and its NETWORK/IBMMUX links not reproduced)


ResHostOneNode Sample Output

00/10/16                 Host Communications for Node 105-05

                        Host     Host KBs   KBs   Blks  Blks  Blk   Blk    KBs    KBs   Msgs  Msgs  Avg  Max
                  Vproc Type     Id   Read  Write Read  Write Read  Write  /Blk   /Blk  /Blk  /Blk  ReqQ ReqQ
Date     Time     Id                  /Sec  /Sec  /Sec  /Sec  Fail% Fail%  Read   Write Read  Write Len  Len
-------- -------- ----- -------  ---- ----- ----- ----- ----- ----- -----  -----  ----- ----- ----- ---- ----
00/10/16 11:07:00 65535 NETWORK     0   0.0   0.0   0.0   0.0     ?     ?      ?      ?     ?     ?  0.0  0.0
                        IBMMUX    202   0.0   0.0   0.0   0.0     ?     ?      ?      ?     ?     ?  0.0  0.0
                        IBMMUX    304   0.0   0.0   0.0   0.0     ?     ?      ?      ?     ?     ?  0.0  0.0
         11:22:42 65535 NETWORK     0  44.1  22.6   0.1   0.1  0.0%  0.0%  391.8  206.5   0.9   0.9  0.0  0.0
                        IBMMUX    202   0.0   0.0   0.0   0.0     ?     ?      ?      ?     ?     ?  0.0  0.0
                        IBMMUX    304   0.0   0.0   0.0   0.0     ?     ?      ?      ?     ?     ?  0.0  0.0
         11:32:42 65535 NETWORK     0   0.0   0.0   0.0   0.0     ?     ?      ?      ?     ?     ?  0.0  0.0
                        IBMMUX    202   0.0   0.0   0.0   0.0     ?     ?      ?      ?     ?     ?  0.0  0.0
                        IBMMUX    304   0.0   0.0   0.0   0.0     ?     ?      ?      ?     ?     ?  0.0  0.0

ResHostByGroup Sample Output

Note: The GroupID column only appears in the ResHostByGroup output report.

                  Group Host     KBs   KBs   Blks  Blks  Blk   Blk    KBs   KBs   Msgs  Msgs  Avg  Max
                  Id    Type     Read  Write Read  Write Read  Write  /Blk  /Blk  /Blk  /Blk  ReqQ ReqQ
Date     Time                    /Sec  /Sec  /Sec  /Sec  Fail% Fail%  Read  Write Read  Write Len  Len
-------- -------- ----- -------  ----- ----- ----- ----- ----- -----  ----- ----- ----- ----- ---- ----
00/10/16 11:30:00 A     NETWORK    0.0   0.0   0.0   0.0     ?     ?      ?     ?     ?     ?  0.0  0.0
                  B     NETWORK    0.0   0.0   0.0   0.0     ?     ?      ?     ?     ?     ?  0.0  0.0
         ...  (identical A and B rows every five minutes through 12:30:00)

ResLdvByNode Macros
Function

Macro...         Reports the logical device traffic channeled through...

ResLdvByNode     each node by totaling its controller links into one
                 summarized node output line.

ResLdvOneNode    a specific node by totaling all its controller links into
                 one summarized node output line.

ResLdvByGroup    a node grouping.
Input Format Examples


The input forms of these three macros are described below.

EXEC ResLdvByNode
(FromDate,ToDate,FromTime,ToTime,FromNode,ToNode);
EXEC ResLdvOneNode
(FromDate,ToDate,FromTime,ToTime,Node);
EXEC ResLdvByGroup
(FromDate,ToDate,FromTime,ToTime);

See Macro Execution on page 31 for a description of the FromDate, FromTime, ToDate,
ToTime, FromNode, ToNode and Node parameters.

Usage Notes
You must have logging enabled on the ResUsageSldv table.

Output Examples
The reports in the following sections are sample output reports from the ResLdvByNode, the
ResLdvOneNode, and the ResLdvByGroup macros, respectively, where:
Column...          Reports the...

Reads / Sec        average number of logical device reads per second.

Writes / Sec       average number of logical device writes per second.

Rd KB / I/O        average number of KB per logical device read.

Wrt KB / I/O       average number of KB per logical device write.

Avg I/O Resp       average response time for a logical device read or write, in seconds.

Max Concur Rqsts   maximum number of concurrent requests during the log period.

Avg Out Rqsts      average number of outstanding requests.

Out Rqst Time %    percent of time there are outstanding requests.
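These columns are simple ratios over one logging interval. A minimal sketch of the arithmetic, using illustrative counter names rather than the actual ResUsageSldv field names:

```python
def ldv_report_row(secs, reads, writes, read_kb, write_kb, outstanding_secs):
    """Reduce one interval's raw counters to the report columns above."""
    return {
        "Reads/Sec": reads / secs,
        "Writes/Sec": writes / secs,
        "Rd KB/I/O": read_kb / reads if reads else None,
        "Wrt KB/I/O": write_kb / writes if writes else None,
        "Out Rqst Time %": 100.0 * outstanding_secs / secs,
    }

# 30 writes totaling 126 KB in a 15-second interval:
row = ldv_report_row(secs=15, reads=0, writes=30,
                     read_kb=0, write_kb=126, outstanding_secs=0.2)
```

This gives Writes/Sec = 2.00 and Wrt KB/I/O = 4.20, the shape of values seen in the sample rows; the input counters here are invented for illustration.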

ResLdvByNode Sample Output


Note: The NodeID column only appears in the ResLdvByNode output report.
06/09/26                LOGICAL DEVICE TRAFFIC BY NODE                Page 1

                                                           Avg    Avg    Out
         Ldv            Node    Reads   Writes      KB     I/O    Out   Rqst
Date     Type Time      Id      / Sec    / Sec   / I/O    Resp  Rqsts  Time %
-------- ---- --------  ------  -----  -------  ------  ------  -----  ------
06/09/26 DISK 10:09:45  001-01   0.00     2.00    4.20   0.000    0.0    1.3%
              10:10:00  001-01   0.00     1.27    5.89   0.000    0.0    0.6%
              10:10:15  001-01   0.00     2.20    5.88   0.000    0.0    1.3%
              10:10:30  001-01   0.00     1.20    6.22   0.000    0.0    0.8%
              10:10:45  001-01   0.00     3.53    3.96   0.000    0.0    2.5%
              10:11:00  001-01   0.00     1.33    5.85   0.000    0.0    1.0%
              10:11:15  001-01   0.00     2.00    4.30   0.000    0.0    1.3%
              10:11:30  001-01   0.00     1.33    5.65   0.000    0.0    0.7%
              10:11:45  001-01   0.00     1.87    8.71   0.000    0.0    1.0%
              10:12:00  001-01   0.00    40.67   31.27   0.000    4.0  100.0%
              10:12:15  001-01   0.00     3.40   16.57   0.000    0.0    1.4%
              10:12:30  001-01   0.00     5.40    7.44   0.000    0.0    5.2%
              10:12:45  001-01   0.00     1.87   14.29   0.000    0.0    0.9%
         SDSK 10:09:45  001-01   0.13     1.00   55.42   0.000    0.0    3.2%
              10:10:00  001-01   0.07     7.98  109.53   0.000    1.2    9.1%
              10:10:15  001-01   0.00     9.21  111.57   0.000    1.3    9.1%
              10:10:30  001-01   0.44     8.73  107.32   0.000    1.1    9.1%
              10:10:45  001-01   0.50     9.01   98.02   0.000    1.1   11.2%
              10:11:00  001-01   0.48     8.45  100.64   0.000    1.0    9.1%
              10:11:15  001-01   0.51     8.85  100.83   0.000    1.0    9.1%
              10:11:30  001-01   0.88     2.72  410.24   0.000    0.3    9.1%
              10:11:45  001-01   0.97     0.34  ******   0.000    0.1    9.2%
              10:12:00  001-01   0.98     0.28  ******   0.000    0.0    8.6%

ResLdvOneNode Sample Output


06/09/26              LOGICAL DEVICE TRAFFIC FOR NODE 001-01

                                                   Avg    Avg    Out
         Ldv            Reads   Writes      KB     I/O    Out   Rqst
Date     Type Time      / Sec    / Sec   / I/O    Resp  Rqsts  Time %
-------- ---- --------  -----  -------  ------  ------  -----  ------
06/09/26 DISK 10:09:45   0.00     2.00    4.20   0.000    0.0    1.3%
              10:10:00   0.00     1.27    5.89   0.000    0.0    0.6%
              10:10:15   0.00     2.20    5.88   0.000    0.0    1.3%
              10:10:30   0.00     1.20    6.22   0.000    0.0    0.8%
              10:10:45   0.00     3.53    3.96   0.000    0.0    2.5%
              10:11:00   0.00     1.33    5.85   0.000    0.0    1.0%
              10:11:15   0.00     2.00    4.30   0.000    0.0    1.3%
              10:11:30   0.00     1.33    5.65   0.000    0.0    0.7%
              10:11:45   0.00     1.87    8.71   0.000    0.0    1.0%
              10:12:00   0.00    40.67   31.27   0.000    4.0  100.0%
              10:12:15   0.00     3.40   16.57   0.000    0.0    1.4%
              10:12:30   0.00     5.40    7.44   0.000    0.0    5.2%
              10:12:45   0.00     1.87   14.29   0.000    0.0    0.9%
         SDSK 10:09:45   0.13     1.00   55.42   0.000    0.0    3.2%
              10:10:00   0.07     7.98  109.53   0.000    1.2    9.1%
              10:10:15   0.00     9.21  111.57   0.000    1.3    9.1%
              10:10:30   0.44     8.73  107.32   0.000    1.1    9.1%
              10:10:45   0.50     9.01   98.02   0.000    1.1   11.2%
              10:11:00   0.48     8.45  100.64   0.000    1.0    9.1%
              10:11:15   0.51     8.85  100.83   0.000    1.0    9.1%
              10:11:30   0.88     2.72  410.24   0.000    0.3    9.1%
              10:11:45   0.97     0.34  ******   0.000    0.1    9.2%
              10:12:00   0.98     0.28  ******   0.000    0.0    8.6%
              10:12:15   0.59     3.14  285.78   0.000    0.1    9.1%
              10:12:30   0.08     4.15   16.99   0.000    0.1    9.2%
              10:12:45   0.00     0.39   32.94   0.000    0.0    1.0%

ResLdvByGroup Sample Output


Note: The GroupID column only appears in the ResLdvByGroup output report.
06/09/26              LOGICAL DEVICE TRAFFIC BY GROUP              Page

                                                                Avg     Max     Out
         Grp  Ldv             Reads   Writes   Rd KB   Wrt KB   I/O  Concur    Rqst
Date     Id   Type  Time      / Sec    / Sec   / I/O    / I/O  Resp   Rqsts  Time %
-------- ---  ----  --------  -----  -------  ------  -------  -----  -----  ------
06/09/26 A    DISK  10:09:45   0.00     2.00       ?     4.20  0.000    0.0    1.3%
                    10:10:00   0.00     1.27       ?     5.89  0.000    0.0    0.6%
                    10:10:15   0.00     2.20       ?     5.88  0.000    0.0    1.3%
                    10:10:30   0.00     1.20       ?     6.22  0.000    0.0    0.8%
                    10:10:45   0.00     3.53       ?     3.96  0.000    0.0    2.5%
                    10:11:00   0.00     1.33       ?     5.85  0.000    0.0    1.0%
                    10:11:15   0.00     2.00       ?     4.30  0.000    0.0    1.3%
                    10:11:30   0.00     1.33       ?     5.65  0.000    0.0    0.7%
                    10:11:45   0.00     1.87       ?     8.71  0.000    0.0    1.0%
                    10:12:00   0.00    40.67       ?    31.27  0.000    0.0  100.0%
                    10:12:15   0.00     3.40       ?    16.57  0.000    0.0    1.4%
                    10:12:30   0.00     5.40       ?     7.44  0.000    0.0    5.2%
                    10:12:45   0.00     1.87       ?    14.29  0.000    0.0    0.9%
              SDSK  10:09:45   0.13     1.00  121.27    46.64  0.000    0.0    3.2%
                    10:10:00   0.07     7.98  139.00   109.28  0.000    0.0    9.1%
                    10:10:15   0.00     9.21       ?   111.57  0.000    0.0    9.1%
                    10:10:30   0.44     8.73  113.33   107.02  0.000    0.0    9.1%
                    10:10:45   0.50     9.01   12.12   102.76  0.000    0.0   11.2%
                    10:11:00   0.48     8.45   12.01   105.72  0.000    0.0    9.1%
                    10:11:15   0.51     8.85   12.08   105.93  0.000    0.0    9.1%
                    10:11:30   0.88     2.72  ******    94.90  0.000    0.0    9.1%
                    10:11:45   0.97     0.34  ******    39.43  0.000    0.0    9.2%
                    10:12:00   0.98     0.28  ******    50.13  0.000    0.0    8.6%
                    10:12:15   0.59     3.14  ******     9.85  0.000    0.0    9.1%
                    10:12:30   0.08     4.15  295.14    11.29  0.000    0.0    9.2%
                    10:12:45   0.00     0.39       ?    32.94  0.000    0.0    1.0%


ResMemMgmtByNode Macros
Function
Macro...            Reports memory management activity for...

ResMemMgmtByNode    each individual node.

ResMemMgmtOneNode   a specific node.

ResMemByGroup       a node grouping.

Input Format Examples


The input forms of these three macros are described below.
EXEC ResMemMgmtByNode
(FromDate,ToDate,FromTime,ToTime,FromNode,ToNode);
EXEC ResMemMgmtOneNode
(FromDate,ToDate,FromTime,ToTime,Node);
EXEC ResMemByGroup
(FromDate,ToDate,FromTime,ToTime);

See Macro Execution on page 31 for a description of the FromDate, FromTime, ToDate,
ToTime, FromNode, ToNode and Node parameters.

Usage Notes
You must have logging enabled on the ResUsageSpma table.

Output Examples
The reports in the following sections are sample outputs from the ResMemMgmtByNode,
ResMemMgmtOneNode, and ResMemByGroup macros, respectively, where:
Column...            Reports the...

% Mem Free           current snapshot of the percent of memory that is unused.

Text Alocs/ Sec      average number of text page allocations per second. Text pages are allocations of memory for code that is not associated with system-level overhead tasks.

VPR Alocs/ Sec       average number of vproc-specific page and segment allocations per second on a node.

KB/ VPR Aloc         average KB per vproc-specific page and segment allocation on a node.

Aloc Fail %          percent of memory allocation attempts that failed.

Ages/ Sec            average number of times memory pages were aged out per second.

# Proc Swp           current number of processes that are swapped out.

Page Drops/ Sec      average number of text pages dropped from memory per second. Page drops are text pages that are dropped from memory to increase the amount of available memory.

Page Reads KBs/ Sec  average number of memory paging KB read from disk per second. Page reads include both memory text pages and task context pages, such as scratch, stack, and so on.

Page Writes/ Sec     average number of memory pages written to disk per second. Page writes include only task context pages.

Swap Drops/ Sec      average number of disk segments dropped from memory per second. Swap drops include all disk segments dropped from memory because their ancestor processes were swapped out.

Swap Reads/ Sec      average number of disk segments reread back into memory per second after being swapped out. Swap reads include all reread disk segments that had been previously dropped from memory because their ancestor processes were swapped out.

KB/Swp Drp           average size, in KB, of disk segments dropped from memory because their ancestor processes were swapped out.

KB/Swp Rd            average size, in KB, of reread disk segments that had been previously dropped from memory because their ancestor processes were swapped out.

P+S Drops/ Sec       average number of paged or swapped page or segment drops per second. This statistic includes both the memory text pages (Pg Drps/ Sec) and the disk segments (Swp Drps/ Sec) that were dropped.

P+S Read KBs/ Sec    average total number of paged or swapped page or segment read KB per second. Includes both the memory text pages and task context pages (Pg Rds/ Sec) and the disk segments (Swp Rds/ Sec) reread back into memory after being swapped out.

P+S Writes/ Sec      average total number of paged or swapped page or segment writes per second.

P+S IO KB %          percent of logical device input and output KB done for paging or swapping inputs and outputs.
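The combined P+S columns are sums of their page and swap components, and P+S IO KB % relates that sum to total logical device traffic. A minimal sketch of the relationship, with illustrative argument names rather than actual ResUsageSpma field names:

```python
def ps_io_kb_pct(page_io_kb, swap_io_kb, total_ldv_io_kb):
    """Percent of logical device I/O KB attributable to paging/swapping."""
    return 100.0 * (page_io_kb + swap_io_kb) / total_ldv_io_kb

# 786 KB/s of page reads plus 27 KB/s of swap reads against 954 KB/s of
# total logical device traffic: roughly the 85% seen in the samples.
pct = ps_io_kb_pct(786, 27, 954)
```

The 786/27/954 figures are taken from one sample interval purely to show the shape of the calculation.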


ResMemMgmtByNode Sample Output


Note: The NodeID column only appears in the ResMemMgmtByNode output report.
11/02/01              MEMORY MANAGEMENT ACTIVITY BY NODE

                             %       Page  Page+Swap
                   Node    Mem   Read KBs   Read KBs      P+S
Date     Time      Id     Free       /Sec       /Sec  IO KB %
-------- --------  ------  ---  ---------  ---------  -------
11/02/01 09:15:00  001-01  26%          0          0       0%
         09:16:00  001-01  27%         15         19      26%
         09:17:00  001-01  26%        786        813      85%
         09:18:00  001-01  27%        407        446      73%
         09:19:00  001-01  26%      1,628      1,698      14%
         09:20:00  001-01  26%        648        837       8%
         09:21:00  001-01  27%        788        957      11%
         09:22:00  001-01  27%        113        143       1%
         09:23:00  001-01  27%        145        184       1%
         09:24:00  001-01  27%         36         42       0%
         09:25:00  001-01  27%        203        234       0%
         09:26:00  001-01  27%        139        164       0%
         09:27:00  001-01  27%         79         85       0%
         09:28:00  001-01  27%         40         40       0%
         09:29:00  001-01  27%          6          8       0%
         09:30:00  001-01  27%         22         31       0%
         09:31:00  001-01  27%         41         47       0%
         09:32:00  001-01  27%         14         15       0%
         09:33:00  001-01  27%         43         43       0%
         09:34:00  001-01  27%          0          3       0%
         09:35:00  001-01  27%          0          0       0%
         09:36:00  001-01  27%          7          7       0%
         09:37:00  001-01  27%          0          0       0%

ResMemMgmtOneNode Sample Output


11/02/01          Memory Management Activity for Node 001-01

                    %       Page  Page+Swap
                  Mem   Read KBs   Read KBs      P+S
Date     Time    Free       /Sec       /Sec  IO KB %
-------- -------- ---  ---------  ---------  -------
11/02/01 09:15:00 26%          0          0       0%
         09:16:00 27%         15         19      26%
         09:17:00 26%        786        813      85%
         09:18:00 27%        407        446      73%
         09:19:00 26%       1628       1698      14%
         09:20:00 26%        648        837       8%
         09:21:00 27%        788        957      11%
         09:22:00 27%        113        143       1%
         09:23:00 27%        145        184       1%
         09:24:00 27%         36         42       0%
         09:25:00 27%        203        234       0%
         09:26:00 27%        139        164       0%
         09:27:00 27%         79         85       0%
         09:28:00 27%         40         40       0%
         09:29:00 27%          6          8       0%
         09:30:00 27%         22         31       0%
         09:31:00 27%         41         47       0%
         09:32:00 27%         14         15       0%
         09:33:00 27%         43         43       0%
         09:34:00 27%          0          3       0%
         09:35:00 27%          0          0       0%
         09:36:00 27%          7          7       0%
         09:37:00 27%          0          0       0%
         09:38:00 27%          0          1       0%
         09:39:00 27%         16         19       0%
         09:40:00 27%          2          4       0%
         09:41:00 27%         30         31       1%
         09:42:00 27%         23         24       1%
         09:43:00 27%          0          0       0%
         09:44:00 27%          0          0       0%
         09:45:00 27%          0          0       0%
         09:46:00 27%          1          1      10%
         09:47:00 27%          0          1       1%
         09:48:00 27%          0          4       2%
         09:49:00 27%          6          7       5%
         09:50:00 27%                            3%

ResMemByGroup Sample Output


Note: The GroupID column only appears in the ResMemByGroup output report.
11/02/01               MEMORY MGMT ACTIVITY BY GROUP

                               %       Page  Page+Swap
                   Group     Mem   Read KBs   Read KBs      P+S
Date     Time      Id       Free       /Sec       /Sec  IO KB %
-------- --------  --------  ---  ---------  ---------  -------
11/02/01 09:15:00  AMPNodes  26%          0          0       0%
         09:16:00  AMPNodes  27%         15         19      26%
         09:17:00  AMPNodes  26%        786        813      85%
         09:18:00  AMPNodes  27%        407        446      73%
         09:19:00  AMPNodes  26%      1,628      1,698      14%
         09:20:00  AMPNodes  26%        648        837       8%
         09:21:00  AMPNodes  27%        788        957      11%
         09:22:00  AMPNodes  27%        113        143       1%
         09:23:00  AMPNodes  27%        145        184       1%
         09:24:00  AMPNodes  27%         36         42       0%
         09:25:00  AMPNodes  27%        203        234       0%
         09:26:00  AMPNodes  27%        139        164       0%
         09:27:00  AMPNodes  27%         79         85       0%
         09:28:00  AMPNodes  27%         40         40       0%
         09:29:00  AMPNodes  27%          6          8       0%
         09:30:00  AMPNodes  27%         22         31       0%
         09:31:00  AMPNodes  27%         41         47       0%
         09:32:00  AMPNodes  27%         14         15       0%
         09:33:00  AMPNodes  27%         43         43       0%
         09:34:00  AMPNodes  27%          0          3       0%
         09:35:00  AMPNodes  27%          0          0       0%
         09:36:00  AMPNodes  27%          7          7       0%
         09:37:00  AMPNodes  27%          0          0       0%


ResNetByNode Macros
Function
Macro...        Reports net traffic for...

ResNetByNode    each node.

ResNetOneNode   a specific node.

ResNetByGroup   nodes summarized by node groups.

Input Format Examples


The input forms of these three macros are described below.
EXEC ResNetByNode
(FromDate,ToDate,FromTime,ToTime,FromNode,ToNode);
EXEC ResNetOneNode
(FromDate,ToDate,FromTime,ToTime,Node);
EXEC ResNetByGroup
(FromDate,ToDate,FromTime,ToTime);

See Macro Execution on page 31 for a description of the FromDate, FromTime, ToDate,
ToTime, FromNode, ToNode and Node parameters.

Usage Notes
You must have logging enabled on the ResUsageSpma table.

Output Examples
The reports in the following sections are sample outputs from the ResNetByNode,
ResNetOneNode, and ResNetByGroup macros, respectively, where:
Column...           Reports the...

% Retries           percent of total net circuit attempts that caused software backoffs (BNS service-blocked occurrences).
                    Note: This value reflects how many times the hardware backed off a connection because the switch nodes could not route to the end point. That implies that the end point was busy or, in switch node terms, the routing path was busy. A value over 100% does not imply a problem, but shows that there were multiple attempts to send new messages while the BYNET path was busy. On a busy system, this can be a normal level of activity.

Total Reads/ Sec    average number of net reads per second.

Total Writes/ Sec   average number of net writes per second.

Total IOs/ Sec      average number of net reads and writes per second.

KB/ IO              average KB per net read or write.

% PtP               percent of total net reads and writes that are point-to-point reads and writes.

% Brd               percent of total net reads and writes that are broadcast reads and writes.

Note: In the following examples, the NodeID column appears only in the ResNetByNode
output report, and the GroupID column appears only in the ResNetByGroup output report.
For all the examples, the values in the Total Reads/ Sec and Total Writes/ Sec columns
are expected to be equal on SMP (single-node, vnet) systems.
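% PtP and % Brd are complementary shares of the same total. A minimal sketch of the split, using illustrative per-interval I/O counts (not actual table fields):

```python
def net_mix(ptp_ios, brd_ios):
    """Return (% PtP, % Brd) rounded to whole percent, as in the reports."""
    total = ptp_ios + brd_ios
    return (round(100.0 * ptp_ios / total), round(100.0 * brd_ios / total))
```

For instance, 11 point-to-point I/Os and 1 broadcast I/O split as 92% / 8%, matching the shape of the sample rows; the counts are invented for illustration.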

ResNetByNode Sample Output


00/10/16                     NET ACTIVITY BY NODE                     Page 2

                                  Total   Total   Total
                      Node  % Re- Reads   Writes    IOs     KB    %    %
Date       Time       Id    tries  /Sec     /Sec   /Sec    /IO  PtP  Brd
---------- --------   ------ ----- ------  ------  ------  ----  ---  ---
2000/10/16 11:20:00   001-03  0.0%   0.46    0.39    0.85   0.4   92    8
                      001-04  0.0%   0.55    0.47    1.02   0.4   93    7
           11:25:00   001-03  0.0%   0.39    0.33    0.72   0.4   90   10
                      001-04  0.0%   0.39    0.32    0.71   0.4   90   10
           11:30:00   001-03  0.0%   0.44    0.37    0.81   0.4   91    9
                      001-04  0.0%   0.55    0.47    1.02   0.4   92    8
           11:35:00   001-03  2.5%  20.84   12.53   33.37   1.8   73   27
                      001-04  2.5%  23.07   15.51   38.58   1.8   74   26
           11:40:00   001-03 24.7%  35.44   35.56   71.00  17.4   93    7
                      001-04 20.6%  37.16   38.87   76.03  13.8   93    7
           11:45:00   001-03 15.9%  13.47   10.71   24.18   8.3   76   24
                      001-04 28.1%  11.79   12.63   24.42  12.8   83   17
           11:50:00   001-03  3.3%  18.92   14.18   33.11   1.3   77   23
                      001-04  4.1%  22.77   20.97   43.74   1.9   75   25
ResNetOneNode Sample Output


00/10/16                Net Activity for Node 001-03                Page 1

                      Total   Total   Total
            % Re-     Reads  Writes     IOs     KB    %    %
Date        tries      /Sec    /Sec    /Sec    /IO  PtP  Brd
-------- -------- ------- ------- -------  -----  ---  ---
00/10/16 10:19:00    0.0%    0.78    0.07    0.85    1.1    8   92
         10:20:00    0.0%    2.87    2.65    5.52    0.5   96    4
         10:21:00    0.0%    3.08    2.07    5.15    0.6   80   20
         10:22:00    0.0%    2.13    2.07    4.20    0.5   98    2
         10:23:00    0.0%    2.23    2.17    4.40    0.5   98    2
         10:30:00    0.0%    0.25    0.18    0.43    0.4   84   16
         10:35:00    0.0%    0.53    0.47    1.00    0.4   93    7
         10:40:00    0.0%    0.51    0.44    0.95    0.5   93    7
         10:45:00    0.0%    0.48    0.42    0.90    0.4   92    8
         10:50:00    0.0%    0.52    0.45    0.97    0.5   93    7
         10:55:00    0.0%    0.46    0.39    0.85    0.4   92    8
         11:00:00    0.0%    0.58    0.51    1.09    0.4   94    6
         11:05:00    0.0%    0.57    0.38    0.95    0.5   79   21
         11:10:00    0.0%    0.54    0.47    1.01    0.4   93    7
         11:15:00    0.0%    0.46    0.40    0.86    0.5   92    8
         11:20:00    0.0%    0.46    0.39    0.85    0.4   92    8


ResNetByGroup Sample Output


00/10/16                     NET ACTIVITY BY GROUP                     Page 2

                      Group  % Re- Reads   Writes    IOs     KB    %    %
Date       Time       Id     tries  /Sec     /Sec   /Sec    /IO  PtP  Brd
---------- --------   ------ ----- ------  ------  ------  ----  ---  ---
2000/10/16 11:20:00   B       0.0%   0.55    0.47    1.02   0.4   93    7
           11:25:00   A       0.0%   0.39    0.33    0.72   0.4   90   10
                      B       0.0%   0.39    0.32    0.71   0.4   90   10
           11:30:00   A       0.0%   0.44    0.37    0.81   0.4   91    9
                      B       0.0%   0.55    0.47    1.02   0.4   92    8
           11:35:00   A       2.5%  20.84   12.53   33.37   1.8   73   27
                      B       2.5%  23.07   15.51   38.58   1.8   74   26
           11:40:00   A      24.7%  35.44   35.56   71.00  17.4   93    7
                      B      20.6%  37.16   38.87   76.03  13.8   93    7
           11:45:00   A      15.9%  13.47   10.71   24.18   8.3   76   24
                      B      28.1%  11.79   12.63   24.42  12.8   83   17
           11:50:00   A       3.3%  18.92   14.18   33.11   1.3   77   23
                      B       4.1%  22.77   20.97   43.74   1.9   75   25
           11:55:00   A      55.8%  40.01   33.45   73.46  22.3   95    5
                      B      41.1%  44.35   44.37   88.72  17.7   96    4
           12:00:00   A       5.1%  19.11   13.16   32.27   2.0   73   27
                      B       5.8%  22.13   11.03   33.16   1.7   70   30
           12:05:00   A      24.2%  33.09   28.57   61.67  14.6   90   10
                      B      10.6%  26.97   25.97   52.94   5.5   91    9
           12:10:00   A      73.8%  17.33   14.01   31.34  23.2   91    9
                      B      57.4%  28.12   26.65   54.77  23.0   93    7
           12:15:00   A       3.9%  21.02   16.10   37.12   2.0   73   27
                      B       6.3%  22.13   14.70   36.83   1.8   73   27
           12:20:00   A      48.7%  36.18   34.65   70.83  18.0   95    5
                      B      34.9%  38.16   33.70   71.86  13.2   93    7
ResNode Macros
Function
Macro...         Provides a summary of resource usage...

ResNode          averaged across all nodes, excluding any PE-only nodes.

ResOneNode       by returning the node requested.

ResNodeByNode    by returning the nodes requested.

ResNodeByGroup   by defined groups of nodes.

Input Format Examples


The input forms of these four macros are described below.
EXEC ResNode
(FromDate,ToDate,FromTime,ToTime);

Note: The ResNode macro syntax does not include the FromNode and ToNode parameters to
specify a range of nodes.
EXEC ResOneNode
(FromDate,ToDate,FromTime,ToTime,Node);
EXEC ResNodeByNode
(FromDate,ToDate,FromTime,ToTime,FromNode,ToNode);
EXEC ResNodeByGroup
(FromDate,ToDate,FromTime,ToTime);

See Macro Execution on page 31 for a description of the FromDate, ToDate, FromTime,
ToTime, FromNode, ToNode, and Node parameters.

Usage Notes
You must have logging enabled on the ResUsageSpma table.

Output Examples
The reports in the following sections are sample outputs from the ResNode,
ResOneNode, ResNodeByNode, and ResNodeByGroup macros, respectively.

The following table describes the statistics columns, after the Date and Time columns, in the ResNode output report.

Statistics columns   Description

1 through 3          CPU usage.
4 through 8          Logical device interface.
9 through 14         Memory interface.
15 through 17        Net interface.
18 and 19            General node process scheduling.

The following table describes the statistics columns, after the Date and Time columns, in the ResOneNode output report.

Statistics columns   Description

1 and 2              CPU usage.
3 through 6          Logical device interface.
7 through 11         Memory interface.
12 through 14        Net interface.
15 and 16            General node process scheduling.

The following table describes the statistics columns, after the Date and Time columns, in the ResNodeByNode output report.

Statistics columns   Description

1 and 2              CPU usage.
3 through 6          Logical device interface.
7 through 12         Memory interface.
13 through 15        Net interface.
16 and 17            General node process scheduling.

The following table describes the statistics columns, after the Date and Time columns, in the ResNodeByGroup output report.

Statistics columns   Description

1                    GroupId, as defined in the associated view as a grouping of one or more nodes.
2 and 3              CPU usage.
4 through 7          Logical device interface.
8 through 12         Memory interface.
13 through 15        Net interface.
16 and 17            General node process scheduling.

The following table describes the statistics columns in all output reports (with the exception
of ResNodeByNode, which also has a NodeID column, and ResNodeByGroup, which also has a
GroupId column).

Column                       Reports the

CPU Bsy %                    percent of time the CPUs are busy, based on average CPU usage per node.

CPU Eff % (ResNode report)   parallel efficiency of node CPU usage. Parallel efficiency is the total percent of time nodes are busy. It is the average for all nodes of total busy time divided by the total busy time of the busiest node.

WIO %                        percent of time the CPUs are idle and waiting for completion of an I/O operation.

Ldv IO KBs /Sec              average number of logical device read and write KB per second.

Ldv Eff % (ResNode report)   parallel efficiency of the logical device (disk) I/Os. It is the average number of I/Os per node divided by the number of I/Os performed by the node with the most I/Os.

P+S % of IO KBs              percent of logical device read and write KB that are for paging purposes.

Read % of IO KBs             percent of logical device read and write KB that are reads.

Ldv KB / IO                  average size of a non-swap/page logical device read or write.

Fre Mem %                    percent of memory that is unused.

TPtP IOs /Sec                total point-to-point net reads and writes per second, per node.

TMlt IOs /Sec                total multicast (broadcast or merge) net reads and writes per second, per node.

Net Rtry %                   percent of transmission attempts that resulted in retries.

Prc Blks / Sec               number of times per second, per node, that processes are blocked on events other than message and timer waits.

ms /Blk                      average time, in milliseconds, spent waiting for a blocked process other than message and timer waits.

Net Rx % Bsy                 percent of time the network was busy receiving.

Net Tx % Bsy                 percent of time the network was busy transmitting.

ResNode Sample Output


11/02/01                      GENERAL RESUSAGE SUMMARY                      Page

                               Average across all nodes

                  CPU CPU          Ldv Ldv  P+S Read  Ldv Fre TPtP TMlt  Net Proc   ms  Net  Net
                  Bsy Eff WIO   IO KBs Eff  %of % of   KB Mem  IOs  IOs Rtry Blks    / Rx % Tx %
Date     Time       %   %   %     /Sec   % IOKB IOKB  /IO   % /Sec /Sec    % /Sec  Blk Busy Busy
-------- -------- --- --- --- -------- --- ---- ---- ---- --- ---- ---- ---- ---- ---- ---- ----
11/02/01 09:15:00   0 100   0       70 100    0   16   21  26    0    0    ?   49  344    ?    ?
         09:16:00   0 100   1       73 100   26   37   17  27    0    0    ?   49 1735    ?    ?
         09:17:00   1 100   6      954 100   85   88   25  26    0    0    ?   56 1804    ?    ?
         09:18:00   2 100  14      609 100   73   76   18  27    0    0    ?   63  685    ?    ?
         09:19:00  32 100  57   11,719 100   14   46   74  26    0    0    ? 2385   16    ?    ?
         09:20:00  76 100  23   10,706 100    8   40   72  26    0    0    ?  187  260    ?    ?
         09:21:00  45 100  47   11,607 100   11   11   87  27    0    0    ? 2215   19    ?    ?
         09:22:00  42 100  54   26,460 100    1   14   92  27    0    0    ?  747  659    ?    ?
         09:23:00  37 100  61   24,873 100    1    8   64  27    0    0    ?  607   69    ?    ?
         09:24:00  16 100  81   41,507 100    0    1   64  27    0    0    ?  710   44    ?    ?
         09:25:00  16 100  84   50,374 100    0   62  157  27    0    0    ?  383  152    ?    ?
         09:26:00  13 100  87   54,921 100    0  100 1546  27    0    0    ?  140  330    ?    ?
         09:27:00   4 100  96   58,023 100    0  100 1589  27    0    0    ?  164  244    ?    ?
         09:28:00   4 100  96   58,644 100    0  100 1683  27    0    0    ?  156  248    ?    ?
         09:29:00   5 100  95   59,642 100    0  100 1558  27    0    0    ?  182  452    ?    ?
         09:30:00  10 100  87   36,044 100    0   63  117  27    0    0    ?  431  104    ?    ?
         09:31:00   9 100  91   27,847 100    0   57   95  27    0    0    ?  441   81    ?    ?
         09:32:00   8 100  90   32,465 100    0   53  100  27    0    0    ?  483   68    ?    ?
         09:33:00   5 100  95   32,193 100    0   50  123  27    0    0    ?  365   75    ?    ?
         09:34:00   5 100  95   33,125 100    0   50  124  27    0    0    ?  300   94    ?    ?
         09:35:00   4 100  96   27,514 100    0   50  124  27    0    0    ?  287  219    ?    ?
         09:36:00   4 100  96   27,725 100    0   50  123  27    0    0    ?  244  109    ?    ?
         09:37:00   4 100  96   27,764 100    0   50  125  27    0    0    ?  200  141    ?    ?
         09:38:00   4 100  96   25,126 100    0   59  122  27    0    0    ?  201  370    ?    ?
         09:39:00   3 100  97   18,595 100    0   99  120  27    0    0    ?  257  503    ?    ?
         09:40:00   2 100  79   13,516 100    0   97  114  27    0    0    ?  195  627    ?    ?
         09:41:00  40 100  47    4,406 100    1   54   57  27    0    0    ?  122  474    ?    ?
         09:42:00   2 100  27    2,039 100    1   49   35  27    0    0    ?  113  309    ?    ?
         09:43:00   0 100   1      170 100    0   53   32  27    0    0    ?   51  503    ?    ?
         09:44:00   0 100   2      255 100    0   53   32  27    0    0    ?   54  593    ?    ?
         09:45:00   0 100   1      118 100    0   48   25  27    0    0    ?   50  335    ?    ?
         09:46:00   0 100   1      158 100   10   38   25  27    0    0    ?   51  329    ?    ?
         09:47:00   0 100   1      156 100    1   45   28  27    0    0    ?   52 1450    ?    ?
         09:48:00   0 100   1      198 100    2   46   29  27    0    0    ?   53 1596    ?    ?
         09:49:00   1 100   1      129 100    5   47   25  27    0    0    ?   52 1284    ?    ?
         09:50:00   1 100   1       97 100    3   41   23  27    0    0    ?   50  955    ?    ?

ResOneNode Sample Output


11/02/01          General Resource Usage Summary for Node 001-01          Page

[Sample report rows: the same 09:15:00 through 09:50:00 intervals and values as the ResNode sample, reported for node 001-01 and therefore without the CPU Eff % and Ldv Eff % columns.]

ResNodeByNode Sample Output


11/02/01            Node by Node General Resource Usage Summary            Page

[Sample report rows: the 09:15:00 through 09:37:00 intervals for NodeID 001-01, with the same column values as the ResNode sample and without the CPU Eff % and Ldv Eff % columns.]


ResNodeByGroup Sample Output


11/02/01            GENERAL RESOURCE USAGE SUMMARY BY GROUP            Page

[Sample report rows: the 09:15:00 through 09:37:00 intervals for GroupId AMPNodes, with the same column values as the ResNode sample and without the CPU Eff % and Ldv Eff % columns.]

ResPdskByNode Macros
Function
Macro...         Reports the device traffic...

ResPdskByNode    by a physical node.

ResPdskOneNode   for a specified node.

ResPdskByGroup   for a node grouping.

Input Format Examples


The input forms of these three macros are described below.
EXEC ResPdskByNode
(FromDate,ToDate,FromTime,ToTime,FromNode,ToNode);
EXEC ResPdskOneNode
(FromDate,ToDate,FromTime,ToTime,Node);
EXEC ResPdskByGroup
(FromDate,ToDate,FromTime,ToTime);

See Macro Execution on page 31 for a description of the FromDate, FromTime, ToDate,
ToTime, FromNode, ToNode and Node parameters.

Usage Notes
You must have logging enabled on the ResUsageSpdsk table.

Output Examples
The following table describes the statistics columns in all output reports (with the exception
of ResPdskByNode, which also reports a NodeID column, and ResPdskByGroup, which reports
by the NodeType column).

Column...          Reports the...

Reads/Sec          average number of device reads per second.

Writes/Sec         average number of device writes per second.

Rd KB/ I/O         average number of KB per device read.

Wrt KB/ I/O        average number of KB per device write.

Avg I/O Resp       average response time for a device read or write, in seconds.

Max Concur Rqsts   maximum number of concurrent requests during the log period.

Out Rqst Time %    percent of time there are outstanding requests.
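The runs of asterisks in the sample outputs (for example, in the Rd KB / I/O column) are the usual fixed-width report convention for a value too wide for its field. A sketch of that formatting rule (an assumption about the report writer's behavior, not documented here):

```python
def format_field(value, width, decimals=2):
    """Right-justify a numeric field; fill with '*' when it cannot fit."""
    text = f"{value:.{decimals}f}"
    return text.rjust(width) if len(text) <= width else "*" * width
```

So a 7-character field renders 4202.15 as-is but shows ******* for a larger value such as 123456.78.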


ResPdskByNode Sample Output


Note: The NodeID column only appears in the ResPdskByNode output report.
07/11/28                     PDISK TRAFFIC BY NODE                     Page

                                                                    Avg     Out
         Pdisk             Node   Reads   Writes    Rd KB    Wrt KB  I/O    Rqst
Date     Type   Time       Id     / Sec    / Sec    / I/O     / I/O  Resp   Time %
-------- -----  --------   ------ ------  -------  -------  -------  -----  ------
07/11/28 DISK   13:20:00   001-01   0.10     0.23  *******  *******  0.004    0.0%
                13:21:00   001-01   0.12     0.41  *******  4202.15  0.002    0.0%
                13:22:00   001-01   0.13     0.49  *******  3623.05  0.001    0.0%
                13:24:00   001-01   0.05     0.20  4717.71  5262.77  0.002    0.0%
                13:25:00   001-01   0.07     0.35  2560.00  4637.26  0.001    0.0%
                13:26:00   001-01   0.11     0.42  6537.85  4485.12  0.001    0.0%
                13:28:00   001-01   0.06     0.19  3396.27  4785.63  0.001    0.0%
                13:29:00   001-01   0.14     0.45  3602.29  4943.64  0.003    0.0%
                13:30:00   001-01   0.12     0.40  5961.14  5274.67  0.002    0.0%
                13:32:00   001-01   0.07     0.18  3990.59  3856.34  0.000    0.0%
                13:33:00   001-01   0.17     0.53  4532.71  5745.23  0.001    0.0%
                13:34:00   001-01   0.11     0.38  5730.46  4975.30  0.002    0.0%
                13:36:00   001-01   0.05     0.17  5218.46  5677.51  0.002    0.0%
                13:37:00   001-01   0.18     0.51  3990.59  5125.33  0.001    0.0%
                13:38:00   001-01   0.11     0.37  5218.46  4846.55  0.001    0.0%
                13:39:00   001-01   0.14     0.42  3990.59  4758.59  0.001    0.0%
                13:41:00   001-01   0.05     0.17  5139.69  4660.36  0.002    0.0%
                13:42:00   001-01   0.11     0.28  5017.60  5546.67  0.001    0.0%
                13:43:00   001-01   0.14     0.41  6731.29  5616.33  0.002    0.0%
                13:45:00   001-01   0.06     0.19  4176.00  4874.04  0.001    0.0%
                13:46:00   001-01   0.07     0.34  2523.43  6260.36  0.002    0.0%
                13:47:00   001-01   0.13     0.38  6192.00  5990.40  0.002    0.0%


ResPdskOneNode Sample Output


Note: The NodeID column does not appear in the ResPdskOneNode output report; the node is
identified in the report title.
07/11/28                   PDISK Traffic for Node 001-01                   Page 1

                                                                    Avg     Out
         Pdisk              Reads  Writes    Rd KB    Wrt KB        I/O    Rqst
Date     Type   Time        / Sec   / Sec    / I/O     / I/O       Resp    Time %
-------- -----  --------  ------- ------- --------  --------    -------    ------
07/11/28 DISK   13:20:00     0.10    0.23  *******   *******      0.004      0.0%
                13:21:00     0.12    0.41  *******   4202.15      0.002      0.0%
                13:22:00     0.13    0.49  *******   3623.05      0.001      0.0%
                13:24:00     0.05    0.20  4717.71   5262.77      0.002      0.0%
                13:25:00     0.07    0.35  2560.00   4637.26      0.001      0.0%
                13:26:00     0.11    0.42  6537.85   4485.12      0.001      0.0%
                13:28:00     0.06    0.19  3396.27   4785.63      0.001      0.0%
                13:29:00     0.14    0.45  3602.29   4943.64      0.003      0.0%
                13:30:00     0.12    0.40  5961.14   5274.67      0.002      0.0%
                13:32:00     0.07    0.18  3990.59   3856.34      0.000      0.0%
                13:33:00     0.17    0.53  4532.71   5745.23      0.001      0.0%
                13:34:00     0.11    0.38  5730.46   4975.30      0.002      0.0%
                13:36:00     0.05    0.17  5218.46   5677.51      0.002      0.0%
                13:37:00     0.18    0.51  3990.59   5125.33      0.001      0.0%
                13:38:00     0.11    0.37  5218.46   4846.55      0.001      0.0%
                13:39:00     0.14    0.42  3990.59   4758.59      0.001      0.0%
                13:41:00     0.05    0.17  5139.69   4660.36      0.002      0.0%
                13:42:00     0.11    0.28  5017.60   5546.67      0.001      0.0%
                13:43:00     0.14    0.41  6731.29   5616.33      0.002      0.0%
                13:45:00     0.06    0.19  4176.00   4874.04      0.001      0.0%
                13:46:00     0.07    0.34  2523.43   6260.36      0.002      0.0%
                13:47:00     0.13    0.38  6192.00   5990.40      0.002      0.0%
                13:49:00     0.05    0.21  3693.71   5878.52      0.002      0.0%

ResPdskByGroup Sample Output


Note: The NodeType column only appears in the ResPdskByGroup output report.
06/09/26                        PDISK TRAFFIC BY GROUP                        Page 1

                                                                        Avg    Max     Out
         Node  Pdisk            ReadCnt  WriteCnt    Rd KB   Wrt KB     I/O  Concur   Rqst
Date     Type  Type   Time        / Sec     / Sec    / I/O    / I/O    Resp   Rqsts  Time %
-------- ----  -----  --------  -------  --------  -------  -------  ------  ------  ------
06/09/26 4400  DISK   10:09:45     0.00      0.00        ?        ?       ?     0.0    0.0%
                      10:10:00     0.00      0.00        ?        ?       ?     0.0    0.0%
                      10:10:15     0.00      0.00        ?        ?       ?     0.0    0.0%
                      10:10:30     0.00      0.00        ?        ?       ?     0.0    0.0%
                      10:10:45     0.00      0.00        ?        ?       ?     0.0    0.0%
                      10:11:00     0.00      0.00        ?        ?       ?     0.0    0.0%
                      10:11:15     0.00      0.00        ?        ?       ?     0.0    0.0%
                      10:11:30     0.00      0.00        ?        ?       ?     0.0    0.0%
                      10:11:45     0.00      0.00        ?        ?       ?     0.0    0.0%
                      10:12:00     0.00      0.00        ?        ?       ?     0.0    0.0%
                      10:12:15     0.00      0.00        ?        ?       ?     0.0    0.0%
                      10:12:30     0.00      0.00        ?        ?       ?     0.0    0.0%
                      10:12:45     0.00      0.00        ?        ?       ?     0.0    0.0%

ResPs Macros

Function

Macro...       Provides a summary of the Priority Scheduler resource usage...
ResPsByNode    by the TheDate, TheTime, NodeId, and PGId columns on SLES 10 or earlier
               systems, and by the pWDId column on SLES 11 or later systems. The
               ResPsByNode macro produces one row of data for every set defined in the
               GROUP BY clause.
ResPsByGroup   by coexistence group. The macro produces one set of rows of data for
               each node type in the system for each log interval.

Input Format Examples


The input forms of the macros are described below.
EXEC ResPsByNode
(FromDate,ToDate,FromTime,ToTime,FromNode,ToNode);
EXEC ResPsByGroup
(FromDate,ToDate,FromTime,ToTime);

See Macro Execution on page 31 for a description of the FromDate, ToDate, FromTime,
ToTime, FromNode, and ToNode parameters.
Note: Coexistence support can be accomplished by using the NodeType column to do a
GROUP BY in SQL directly. Therefore, the GroupId column is not needed, and the
ResUsageSps table view is not provided.

Usage Notes
To use the ResPs macros, rows must exist in the ResUsageSps table.
These macros can be used to report historical data.

Output Examples
The reports in the following sections are sample output reports from the ResPsByNode and
ResPsByGroup macros.
The following table describes the statistics columns, after the Date and Time columns, in the
ResPsByNode output report.


The following table describes the statistics columns, after the Date and Time columns,
in the ResPsByNode output report.

Statistics columns   Description
1                    Node ID.
2                    On SLES 10 or earlier systems, this column contains the PG ID.
                     On SLES 11 or later systems, this column contains the pWDId.
                     Note: In the ResPs macro outputs, the RowIndex1 column displays
                     as PGid, instead of pWDid, when running SLES 11 or later systems.
                     For a description of the RowIndex1 column, see Chapter 11:
                     ResUsageSps.
3 through 12         Summary of the Priority Scheduler statistics.

The following table describes the statistics columns, after the Date and Time columns,
in the ResPsByGroup output report.

Statistics columns   Description
1                    Node type.
2 through 11         Summary of the Priority Scheduler statistics.

The following table describes the statistics columns in all output reports (in
addition, ResPsByNode reports the NodeID column and ResPsByGroup reports the NodeType
column).

Column...        Reports the...
CPU Bsy %        percent of time the CPUs are busy, based on average CPU usage per node.
IO Blks /Sec     number of logical data blocks read or written.
Num Procs        average number of tasks of online nodes.
Num Requests     number of AMP Worker Task messages or requests that were assigned AMP
                 Worker Tasks.
Avg QWait Time   average time in milliseconds that work requests waited on an input
                 queue before being serviced.
Max QWait Time   maximum time in milliseconds that work requests waited on an input
                 queue before being serviced.
Q Length         average number of messages queued for output to the host.
Q Len Max        maximum number of messages queued for output to the host.
Avg Svc Time     average time in milliseconds that work requests required for service.
Max Svc Time     maximum time in milliseconds that work requests required for service.

For a complete description of the above columns, see Chapter 11: ResUsageSps.
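A '?' in the sample outputs below marks a NULL average: an average over a zero request
count is undefined. A minimal sketch of that convention, with illustrative names (not
the macros' actual SQL):

```python
def avg_ms(total_time_ms, request_count):
    """Average milliseconds per request; None when no requests ran.

    The report formats None as '?', mirroring SQL NULL from division
    guarded by a zero-count check. Names here are illustrative only.
    """
    if request_count == 0:
        return None
    return total_time_ms / request_count
```

This is why rows with Num Requests of 0 show '?' under Avg QWait Time and Avg Svc Time
while the corresponding maximum columns simply show 0.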

ResPsByNode Macro Sample Output


Note: The column label, RowIndex1, displays as PGid, instead of pWDid, when running SLES
11 or later systems. For a description of the RowIndex1 column, see Chapter 11:
ResUsageSps.
ResPsByNode:
10/12/01                          PS by Node Usage Summary                          Page 1

                                  CPU    IO                     Avg    Max            Q    Avg    Max
                                  Bsy  Blks    Num      Num   QWait  QWait      Q   Len    Svc    Svc
Date     Time     NodeID   PGid     %  /Sec  Procs Requests    Time   Time Length   Max   Time   Time
-------- -------- ------ ------ ----- ----- ------ -------- ------- ------ ------ ----- ------ ------
10/12/01 14:35:00 2-09        0     0     0     19        0       ?      0      0     0      ?      0
                  2-09        1    29     0     66        0       ?      0      0     0      ?      0
                  2-09      250     0     2    722        3       0      0      0     0     36     78
                  2-10        0     0     0     18        0       ?      0      0     0      ?      0
                  2-10        1    26     0     87   162033       1     31      1     2      5    265
                  2-10      250     8     2    697        3       0      0      0     0     26     46

         14:36:00 2-09        0     0     0     19        0       ?      0      0     0      ?      0
                  2-09        1    31     0     66        0       ?      0      0     0      ?      0
                  2-09      250     0     2    722        3       0      0      0     0     25     46
                  2-10        0     0     0     18        0       ?      0      0     0      ?      0
                  2-10        1    26     0    121   164728       1     31      1     2      5    218
                  2-10      250     9     2    705        2       0      0      0     0     78     93

         14:37:00 2-09        0     0     0     19        0       ?      0      0     0      ?      0
                  2-09        1    30     0     66        0       ?      0      0     0      ?      0
                  2-09      250     0     2    722       47       0     15      0     0      6     78
                  2-10        0     0     0     18        0       ?      0      0     0      ?      0
                  2-10        1    26     0     96   164847       1     31      1     2      4    265
                  2-10      250     8     2    698       50       0     15      0     0      9     46

         14:38:00 2-09        0     0     0     19        0       ?      0      0     0      ?      0
                  2-09        1    27     0     66       27       3     15      0     0      3     31
                  2-09        3     0     0      0        9       0      0      0     0      0      0
                  2-09      250     0     2    722        3       0      0      0     0     78    156
                  2-10        0     0     0     18        0       ?      0      0     0      ?      0
                  2-10        1    31     0    100   158281       1     31      1     2      5    234
                  2-10        3     0     0      0        9      10     15      0     0      3     15
                  2-10      250     9     2    697       12       0      0      0     0     14     78

         14:39:00 2-09        0     0     0     19        0       ?      0      0     0      ?      0
                  2-09        1    30     0     66        0       ?      0      0     0      ?      0
                  2-09      250     0     2    722        4       0      0      0     0     12     31
                  2-10        0     0     0     18        0       ?      0      0     0      ?      0
                  2-10        1    23     0     91   165816       1     31      1     2      4    203
                  2-10      250     8     2    698        2       8     15      0     0     78    109

         14:40:00 2-09        0     0     0     19        0       ?      0      0     0      ?      0
                  2-09        1    26     0     66       36       0      0      0     0      1     46
                  2-09        3     0     0      0        9       0      0      0     0      0      0
                  2-09      250     0     2    722        5       0      0      0     0     18     46
                  2-10        0     0     0     18        0       ?      0      0     0      ?      0
                  2-10        1    33     0     95   161946       1     31      1     2      4    203
                  2-10        3     0     0      0        9       2     15      0     0      0      0
                  2-10      250    11     2    698        8       0      0      0     0     18    109

ResPsByGroup Macro Sample Output


Note: The column label, RowIndex1, displays as PGid, instead of pWDid, when running SLES
11 or later systems. For a description of the RowIndex1 column, see Chapter 11:
ResUsageSps.
ResPsByGroup:
10/12/01                          PS by Group Usage Summary                          Page 1

                  Node                  CPU    IO                     Avg    Max            Q    Avg    Max
                                        Bsy  Blks    Num      Num   QWait  QWait      Q   Len    Svc    Svc
Date     Time     Type NodeID    PGid     %  /Sec  Procs Requests    Time   Time Length   Max   Time   Time
-------- -------- ---- ------  ------ ----- ----- ------ -------- ------- ------ ------ ----- ------ ------
10/12/01 14:35:00 5400 2-09         0     0     0     19        0       ?      0      0     0      ?      0
                  5400 2-09         1    29     0     66        0       ?      0      0     0      ?      0
                  5400 2-09       250     0     2    722        3       0      0      0     0     36     78
                  5400 2-10         0     0     0     18        0       ?      0      0     0      ?      0
                  5400 2-10         1    26     0     87   162033       1     31      1     2      5    265
                  5400 2-10       250     8     2    697        3       0      0      0     0     26     46

         14:36:00 5400 2-09         0     0     0     19        0       ?      0      0     0      ?      0
                  5400 2-09         1    31     0     66        0       ?      0      0     0      ?      0
                  5400 2-09       250     0     2    722        3       0      0      0     0     25     46
                  5400 2-10         0     0     0     18        0       ?      0      0     0      ?      0
                  5400 2-10         1    26     0    121   164728       1     31      1     2      5    218
                  5400 2-10       250     9     2    705        2       0      0      0     0     78     93

         14:37:00 5400 2-09         0     0     0     19        0       ?      0      0     0      ?      0
                  5400 2-09         1    30     0     66        0       ?      0      0     0      ?      0
                  5400 2-09       250     0     2    722       47       0     15      0     0      6     78
                  5400 2-10         0     0     0     18        0       ?      0      0     0      ?      0
                  5400 2-10         1    26     0     96   164847       1     31      1     2      4    265
                  5400 2-10       250     8     2    698       50       0     15      0     0      9     46

         14:38:00 5400 2-09         0     0     0     19        0       ?      0      0     0      ?      0
                  5400 2-09         1    27     0     66       27       3     15      0     0      3     31
                  5400 2-09         3     0     0      0        9       0      0      0     0      0      0
                  5400 2-09       250     0     2    722        3       0      0      0     0     78    156
                  5400 2-10         0     0     0     18        0       ?      0      0     0      ?      0
                  5400 2-10         1    31     0    100   158281       1     31      1     2      5    234
                  5400 2-10         3     0     0      0        9      10     15      0     0      3     15
                  5400 2-10       250     9     2    697       12       0      0      0     0     14     78

         14:39:00 5400 2-09         0     0     0     19        0       ?      0      0     0      ?      0
                  5400 2-09         1    30     0     66        0       ?      0      0     0      ?      0
                  5400 2-09       250     0     2    722        4       0      0      0     0     12     31
                  5400 2-10         0     0     0     18        0       ?      0      0     0      ?      0
                  5400 2-10         1    23     0     91   165816       1     31      1     2      4    203
                  5400 2-10       250     8     2    698        2       8     15      0     0     78    109

         14:40:00 5400 2-09         0     0     0     19        0       ?      0      0     0      ?      0
                  5400 2-09         1    26     0     66       36       0      0      0     0      1     46
                  5400 2-09         3     0     0      0        9       0      0      0     0      0      0
                  5400 2-09       250     0     2    722        5       0      0      0     0     18     46
                  5400 2-10         0     0     0     18        0       ?      0      0     0      ?      0
                  5400 2-10         1    33     0     95   161946       1     31      1     2      4    203
                  5400 2-10         3     0     0      0        9       2     15      0     0      0      0
                  5400 2-10       250    11     2    698        8       0      0      0     0     18    109

ResVdskByNode Macros

Function

Macro...         Reports the logical device traffic by...
ResVdskByNode    a physical node.
ResVdskOneNode   a specified node.
ResVdskByGroup   a node grouping.

Input Format Examples


The input forms of these three macros are described below.
EXEC ResVdskByNode
(FromDate,ToDate,FromTime,ToTime,FromNode,ToNode);
EXEC ResVdskOneNode
(FromDate,ToDate,FromTime,ToTime,Node);
EXEC ResVdskByGroup
(FromDate,ToDate,FromTime,ToTime);

See Macro Execution on page 31 for a description of the FromDate, FromTime, ToDate,
ToTime, FromNode, ToNode and Node parameters.

Usage Notes
To use these macros, rows must exist in the ResUsageSvdsk table.


Output Examples
The following table describes the statistics columns in all output reports (in addition,
ResVdskByNode reports the NodeID column and ResVdskByGroup reports the NodeType column).

Column...          Reports the...
Read Cnt / Sec     average number of logical device reads per second.
Write Cnt / Sec    average number of logical device writes per second.
Rd KB / I/O        average number of KB per logical device read.
Wrt KB / I/O       average number of KB per logical device write.
Avg I/O Resp       average response time for a logical device read or write in seconds.
Out Rqst Time %    percent of time there are outstanding requests.
Max Concur Rqsts   maximum number of concurrent requests during the log period.
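Out Rqst Time % expresses how much of the log interval had at least one outstanding
request. A minimal sketch of that calculation, assuming a hypothetical counter of
centiseconds during which requests were outstanding:

```python
def out_rqst_time_pct(outstanding_centisecs, period_centisecs):
    """Percent of the logging period with at least one outstanding request.

    Counter names are illustrative assumptions, not documented columns.
    """
    return 100.0 * outstanding_centisecs / period_centisecs
```

For example, 582 of 6000 centiseconds with outstanding requests gives 9.7%, in line
with the first row of the sample output below.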

ResVdskByNode Sample Output


Note: The NodeID column only appears in the ResVdskByNode output report.
06/09/26                         VDISK TRAFFIC BY NODE                         Page 1

                                                                       Avg     Out
                   Node    Read Cnt  Write Cnt     Rd KB    Wrt KB     I/O    Rqst
Date     Time      Id         / Sec      / Sec     / I/O     / I/O    Resp    Time %
-------- --------  ------  --------  ---------  --------  --------  -------   ------
06/09/26 10:09:45  001-01      0.73       5.20    121.27     46.69    0.023     9.7%
         10:10:00  001-01      0.40      41.17    127.50    116.34    0.152    85.3%
         10:10:15  001-01      0.00      47.40         ?    118.98    0.143    98.8%
         10:10:30  001-01      2.43      45.17    111.79    113.54    0.126    98.4%
         10:10:45  001-01      2.77      45.90     12.00    109.93    0.118    97.7%
         10:11:00  001-01      2.67      43.77     12.00    112.07    0.119    98.0%
         10:11:15  001-01      2.83      46.10     12.00    112.32    0.107    97.3%
         10:11:30  001-01      4.87      14.13   1374.25    100.01    0.081    54.9%
         10:11:45  001-01      5.37       1.77   1785.00     38.15    0.065    41.0%
         10:12:00  001-01      5.43       0.30   1785.00     71.56    0.064    36.2%
         10:12:15  001-01      3.20       9.70   1759.00     16.76    0.039    45.8%
         10:12:30  001-01      0.50      12.60    275.47     18.84    0.038    38.6%
         10:12:45  001-01      0.00       2.17         ?     32.94    0.017     3.1%

ResVdskOneNode Sample Output


06/09/26                    VDISK Traffic for Node 001-01                    Page 1

                                                             Avg     Out
                    ReadCnt  WriteCnt     Rd KB    Wrt KB    I/O    Rqst
Date     Time         / Sec     / Sec     / I/O     / I/O   Resp    Time %
-------- --------  --------  --------  --------  -------- -------   ------
06/09/26 10:09:45      0.73      5.20    121.27     46.69   0.023     9.7%
         10:10:00      0.40     41.17    127.50    116.34   0.152    85.3%
         10:10:15      0.00     47.40         ?    118.98   0.143    98.8%
         10:10:30      2.43     45.17    111.79    113.54   0.126    98.4%
         10:10:45      2.77     45.90     12.00    109.93   0.118    97.7%
         10:11:00      2.67     43.77     12.00    112.07   0.119    98.0%
         10:11:15      2.83     46.10     12.00    112.32   0.107    97.3%
         10:11:30      4.87     14.13   1374.25    100.01   0.081    54.9%
         10:11:45      5.37      1.77   1785.00     38.15   0.065    41.0%
         10:12:00      5.43      0.30   1785.00     71.56   0.064    36.2%
         10:12:15      3.20      9.70   1759.00     16.76   0.039    45.8%
         10:12:30      0.50     12.60    275.47     18.84   0.038    38.6%
         10:12:45      0.00      2.17         ?     32.94   0.017     3.1%

ResVdskByGroup Sample Output


Note: The NodeType column only appears in the ResVdskByGroup output report.
06/09/26                        VDISK TRAFFIC BY GROUP                        Page 1

                                                                       Avg    Max     Out
         Node             ReadCnt  WriteCnt     Rd KB   Wrt KB         I/O  Concur   Rqst
Date     Type  Time         / Sec     / Sec     / I/O    / I/O        Resp   Rqsts  Time %
-------- ----  --------  --------  --------  --------  -------     -------  ------  ------
06/09/26 4400  10:09:45      0.73      5.20    121.27    46.69       0.023     4.5    9.7%
               10:10:00      0.40     41.17    127.50   116.34       0.152    20.5   85.3%
               10:10:15      0.00     47.40         ?   118.98       0.143    20.0   98.8%
               10:10:30      2.43     45.17    111.79   113.54       0.126    17.5   98.4%
               10:10:45      2.77     45.90     12.00   109.93       0.118    13.5   97.7%
               10:11:00      2.67     43.77     12.00   112.07       0.119    14.5   98.0%
               10:11:15      2.83     46.10     12.00   112.32       0.107    14.0   97.3%
               10:11:30      4.87     14.13   1374.25   100.01       0.081    12.5   54.9%
               10:11:45      5.37      1.77   1785.00    38.15       0.065     3.0   41.0%
               10:12:00      5.43      0.30   1785.00    71.56       0.064     2.0   36.2%
               10:12:15      3.20      9.70   1759.00    16.76       0.039     2.0   45.8%
               10:12:30      0.50     12.60    275.47    18.84       0.038     3.5   38.6%
               10:12:45      0.00      2.17         ?    32.94       0.017     2.0    3.1%


APPENDIX A

How to Read Syntax Diagrams

This appendix describes the conventions that apply to reading the syntax diagrams used in
this book.

Syntax Diagram Conventions


Notation Conventions

Item          Definition / Comments
Letter        An uppercase or lowercase alphabetic character ranging from A through Z.
Number        A digit ranging from 0 through 9.
              Do not use commas when typing a number with more than 3 digits.
Word          Keywords and variables.
              UPPERCASE LETTERS represent a keyword. Syntax diagrams show all keywords
              in uppercase, unless operating system restrictions require them to be in
              lowercase.
              lowercase letters represent a keyword that you must type in lowercase,
              such as a Linux command.
              Mixed Case letters represent exceptions to the uppercase and lowercase
              rules. The exceptions are noted in the syntax explanation.
              lowercase italic letters represent a variable such as a column or table
              name. Substitute the variable with a proper value.
              lowercase bold letters represent an excerpt from the diagram. The excerpt
              is defined immediately following the diagram that contains it.
              UNDERLINED LETTERS represent the default value. This applies to both
              uppercase and lowercase words.
Spaces        Use one space between items such as keywords or variables.
Punctuation   Type all punctuation exactly as it appears in the diagram.

Paths
The main path along the syntax diagram begins at the left with a keyword, and proceeds, left
to right, to the vertical bar, which marks the end of the diagram. Paths that do not have an
arrow or a vertical bar only show portions of the syntax.
The only part of a path that reads from right to left is a loop.


Continuation Links
Paths that are too long for one line use continuation links. Continuation links are circled
letters indicating the beginning and end of a link:
[Syntax diagram FE0CA002: a continuation link labeled A]

When you see a circled letter in a syntax diagram, go to the corresponding circled letter and
continue reading.

Required Entries
Required entries appear on the main path:

[Syntax diagram FE0CA003: SHOW on the main path]

If you can choose from more than one entry, the choices appear vertically, in a stack. The first
entry appears on the main path:

[Syntax diagram FE0CA005: SHOW followed by a vertical stack of CONTROLS and VERSIONS]

Optional Entries
You may choose to include or disregard optional entries. Optional entries appear below the
main path:

[Syntax diagram FE0CA004: SHOW with optional CONTROLS below the main path]

If you can optionally choose from more than one entry, all the choices appear below the main
path:

[Syntax diagram JC01A010: optional choices READ, SHARE, and ACCESS below the main path]

Some commands and statements treat one of the optional choices as a default value. This
value is UNDERLINED. It is presumed to be selected if you type the command or statement
without specifying one of the options.

Strings
String literals appear in apostrophes:

[Syntax diagram JC01A004: 'msgtext']

Abbreviations
If a keyword or a reserved word has a valid abbreviation, the unabbreviated form always
appears on the main path. The shortest valid abbreviation appears beneath.

[Syntax diagram FE0CA042: SHOW CONTROLS, with the abbreviation CONTROL beneath CONTROLS]

In the above syntax, the following formats are valid:

SHOW CONTROLS

SHOW CONTROL

Loops
A loop is an entry or a group of entries that you can repeat one or more times. Syntax
diagrams show loops as a return path above the main path, over the item or items that you can
repeat:

[Syntax diagram JC01B012: a loop of the form ( cname , ... ) — cname repeated with a
comma separator, a minimum of 3 entries (square), a maximum of 4 entries (circle), and
delimited by parentheses]


Read loops from right to left.


The following conventions apply to loops:

IF...                                THEN...
there is a maximum number of         the number appears in a circle on the return path.
entries allowed                      In the example, you may type cname a maximum of
                                     4 times.
there is a minimum number of         the number appears in a square on the return path.
entries required                     In the example, you must type at least three
                                     groups of column names.
a separator character is required    the character appears on the return path.
between entries                      If the diagram does not show a separator
                                     character, use one blank space.
                                     In the example, the separator character is a
                                     comma.
a delimiter character is required    the beginning and end characters appear outside
around entries                       the return path.
                                     Generally, a space is not needed between delimiter
                                     characters and entries.
                                     In the example, the delimiter characters are the
                                     left and right parentheses.

Excerpts
Sometimes a piece of a syntax phrase is too large to fit into the diagram. Such a phrase is
indicated by a break in the path, marked by (|) terminators on each side of the break. The
name for the excerpted piece appears between the terminators in boldface type.
The boldface excerpt name and the excerpted phrase appear immediately after the main
diagram. The excerpted phrase starts and ends with a plain horizontal line:
[Syntax diagram JC01A014: LOCKING followed by the boldface excerpt placeholder,
continuing at link A to HAVING con; the excerpt expands to where_cond, a comma-separated
list of cname, or a comma-separated list of col_pos]


Multiple Legitimate Phrases


In a syntax diagram, it is possible for any number of phrases to be legitimate:

[Syntax diagram JC01A016: a stack of dbname (optionally preceded by DATABASE), tname
(optionally preceded by TABLE), and vname (optionally preceded by VIEW)]

In this example, any of the following phrases are legitimate:

dbname

DATABASE dbname

tname

TABLE tname

vname

VIEW vname

Sample Syntax Diagram


[Syntax diagram JC01A018: CREATE VIEW (abbreviation CV) viewname AS, with an optional
LOCKING clause naming dbname (DATABASE), tname (TABLE), or vname (VIEW), FOR or IN one
of SHARE, READ, WRITE, EXCLUSIVE, or EXCL, optionally followed by MODE; then SEL with a
comma-separated loop of expr, FROM tname (optionally .aname) with qual_cond, where
qual_cond is an optional WHERE cond and GROUP BY a comma-separated list of cname or
col_pos, an optional HAVING cond, ending with ;]


Diagram Identifier
The alphanumeric string that appears in the lower right corner of every diagram is an internal
identifier used to catalog the diagram. The text never refers to this string.


APPENDIX B

ResUsageIpma Table

The ResUsageIpma table includes resource usage data for system-wide node information.
Note: The ResUsageIpma table is generally not used at customer sites and is used only
by Teradata engineers.
This table is created as a MULTISET table. For more information, see Relational Primary
Index on page 37.
Note: Summary Mode is not applicable to this table.

Housekeeping Columns
Relational Primary Index Columns
These columns taken together form the nonunique primary index.
Column Name   Mode   Description                                                  Data Type
TheDate       n/a    Date of the log entry.                                       DATE
TheTime       n/a    Nominal time of the log entry.                               FLOAT
                     Note: Under conditions of heavy system load, entries may be
                     logged late (typically, by no more than one or two seconds),
                     but this column will still contain the time value when the
                     entry should have been logged. For more information, see
                     the Secs and NominalSecs columns.
NodeID        n/a    Node ID. The Node ID is formatted as CCC-MM, where CCC       INTEGER
                     denotes the three-digit cabinet number and MM denotes the
                     two-digit chassis number of the node. For example, a node
                     in chassis 9 of cabinet 3 has a node ID of '003-09'.
                     Note: SMP nodes have a chassis and cabinet number of 1. For
                     example, the node ID of an SMP node is '001-01'.

Miscellaneous Housekeeping Columns

These columns provide a generalized picture of the vprocs running on this node, shown
as Type n virtual processors, where n = 1 to 7. Under the current implementation, only
Type 1 (AMP), Type 2 (PE), Type 3 (GTW), Type 4 (RSG), and Type 5 (TVS) vprocs exist;
vproc types 6 and 7 are not currently used.

Column Name      Mode   Description                                                 Data Type
GmtTime          n/a    Greenwich Mean Time, which is not affected by the           FLOAT
                        Daylight Savings Time adjustments that occur twice a year.
NodeType         n/a    Type of node, representing the per-node system family       CHAR(8)
                        type. For example, 5600C or 5555H.
NCPUs            n/a    Number of CPUs on this node.                                SMALLINT
                        This column is useful for normalizing the CPU utilization
                        column values for the number of CPUs on the node. This is
                        especially important in coexistence systems, where the
                        number of CPUs can vary across system nodes.
Vproc1           n/a    Current count of type 1 (AMP) virtual processors running    SMALLINT
                        under the node.
Vproc2           n/a    Current count of type 2 (PE) virtual processors running     SMALLINT
                        under the node.
Vproc3           n/a    Current count of type 3 (GTW) virtual processors running    SMALLINT
                        under the node.
Vproc4           n/a    Current count of type 4 (RSG) virtual processors running    SMALLINT
                        under the node.
Vproc5           n/a    Current count of type 5 (TVS) virtual processors running    SMALLINT
                        under the node.
Vproc6           n/a    Current count of type 6 virtual processors running under    SMALLINT
                        the node. This column currently reports zeros.
Vproc7           n/a    Current count of type 7 virtual processors running under    SMALLINT
                        the node. This column currently reports zeros.
VprocType1       n/a    Type of virtual processor for Vproc1. Value is always AMP.  CHAR(4)
VprocType2       n/a    Type of virtual processor for Vproc2. Value is always PE.   CHAR(4)
VprocType3       n/a    Type of virtual processor for Vproc3. Value is always GTW.  CHAR(4)
VprocType4       n/a    Type of virtual processor for Vproc4. Value is always RSG.  CHAR(4)
VprocType5       n/a    Type of virtual processor for Vproc5. The value is always   CHAR(4)
                        TVS (see Teradata Virtual Storage).
VprocType6       n/a    The type of virtual processor for Vproc6.                   CHAR(4)
                        Note: This column is not currently valid. It should not
                        be used.
VprocType7       n/a    The type of virtual processor for Vproc7.                   CHAR(4)
                        Note: This column is not currently valid. It should not
                        be used.
MemSize          n/a    Amount of memory on this node in megabytes. Useful for      INTEGER
                        performing memory usage calculations.
NodeNormFactor   n/a    A per-CPU normalization factor that is used to normalize    INTEGER
                        the reported CPU values of the ResUsageSpma table to a
                        single 5100 CPU.
                        This value is scaled by a factor of 100. For example, if
                        the actual factor is 5.25, the value of NodeNormFactor
                        will be 525.
                        Note: The normalization factor is related to the NodeType
                        value reported in the ResUsageSpma table. For information
                        on this value, see Chapter 6: ResUsageSpma Table.
Secs             n/a    Actual number of seconds in the log period represented by   SMALLINT
                        this row. Normally the same as NominalSecs, but can be
                        different in three cases:
                        - The first interval after a log rate change
                        - A sample logged late because of load on the system
                        - System clock adjustments that affect the reported Secs
                        Useful for normalizing the count statistics contained in
                        this row, for example, to a per-second measurement.
CentiSecs        n/a    Number of centiseconds in the logging period. This column   INTEGER
                        is useful when performing data calculations with small
                        elapsed times, where the difference between
                        centisecond-based data and whole seconds results in a
                        percentage error.
NominalSecs      n/a    Specified or nominal number of seconds in the logging       SMALLINT
                        period.
Active           max    Controls whether or not the rows will be logged to the      FLOAT
                        resource usage tables if Active Row Filter Mode is
                        enabled.
                        If Active is set to a nonzero value, the row contains
                        data columns. If Active is set to a zero value, none of
                        the data columns in the row have been updated during the
                        logging period. For example, if you enable Active Row
                        Filter Mode, the rows that have a zero Active column
                        value will not be logged to the resource usage tables.
TheTimestamp     n/a    Number of seconds since midnight, January 1, 1970.          BIGINT
CODFactor        n/a    PM CPU COD value in tenths of a percent. For example, a     SMALLINT
                        value of 500 represents a PM CPU COD value of 50.0%.
                        The value is set to 1000 if the PM CPU COD is disabled.
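The scaled encodings above (NodeNormFactor stored ×100, CODFactor stored in tenths of a
percent) can be decoded as in this sketch; the function names are illustrative, not part
of the table definition:

```python
def decode_norm_factor(node_norm_factor):
    """NodeNormFactor is stored scaled by 100: a stored 525 means a factor of 5.25."""
    return node_norm_factor / 100.0

def decode_cod_factor(cod_factor):
    """CODFactor is stored in tenths of a percent: a stored 500 means 50.0%.

    A stored 1000 (100.0%) indicates that PM CPU COD is disabled.
    """
    return cod_factor / 10.0
```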

Statistics Columns

Process Scheduling CPU Switching Columns
These columns identify the number of times CPUs were switched by the scheduler from one
type of work to another.

Column Name       Mode    Description                                            Data Type
CPUProcSwitches   count   Number of times the scheduler switched a currently     FLOAT
                          active process of a CPU to a new process.

Net Columns

Message Type Columns
These columns subdivide all messages sent and received into the type of message, where:

- Hash messages (Hash) are data sent to a destination through its primary or fallback
  hash value.
- Processor messages (Proc) are data sent to a destination through a vproc ID.
- Group messages (Group) are broadcast messages to be received by members of a group.
- Local messages (Local) are messages communicated locally within the node.
- Channel messages (Chan) are data sent between vprocs through channel IDs for purposes
  of a private conversation to perform functions such as row redistribution.
- Mailbox messages (Mbox) are data sent between vprocs through mailbox IDs for purposes
  similar to channel messages.

A duplicated accounting is done from two different perspectives, since Hash + Proc +
Group + Local messages = Chan + Mbox messages.
Column Name      Mode    Description                                          Data Type
MsgChanReads     count   Number of channel messages read by this node.        FLOAT
MsgChanWrites    count   Number of channel messages written by this node.     FLOAT
MsgHashReads     count   Number of hash messages read by this node.           FLOAT
MsgHashWrites    count   Number of hash messages written by this node.        FLOAT
MsgGroupReads    count   Number of group messages read by this node.          FLOAT
MsgGroupWrites   count   Number of group messages written by this node.       FLOAT
MsgLocalReads    count   Number of local messages read by this node.          FLOAT
MsgLocalWrites   count   Number of local messages written by this node.       FLOAT
MsgMboxReads     count   Number of mailbox messages read by this node.        FLOAT
MsgMboxWrites    count   Number of mailbox messages written by this node.     FLOAT
MsgProcReads     count   Number of processor messages read by this node.      FLOAT
MsgProcWrites    count   Number of processor messages written by this node.   FLOAT


Message Delivery Times Columns

These columns identify the time it took for hash, processor, group, and local messages
to reach their destination. Two times are provided:

- Message transmission to mailbox delivery (MDelivery)
- Mailbox delivery to process delivery (PDelivery)

Column Name         Mode    Description                                           Data Type
MsgHashMDelivery    count   Total amount of time read hash messages took for      FLOAT
                            mailbox delivery.
                            Note: This column is not currently valid. It should
                            not be used.
MsgHashPDelivery    count   Total amount of time read hash messages took for      FLOAT
                            process delivery.
                            Note: This column is not currently valid. It should
                            not be used.
MsgGroupMDelivery   count   Total amount of time read group messages took for     FLOAT
                            mailbox delivery.
                            Note: This column is not currently valid. It should
                            not be used.
MsgGroupPDelivery   count   Total amount of time read group messages took for     FLOAT
                            process delivery.
                            Note: This column is not currently valid. It should
                            not be used.
MsgLocalMDelivery   count   Total amount of time read local messages took for     FLOAT
                            mailbox delivery.
                            Note: This column is not currently valid. It should
                            not be used.
MsgLocalPDelivery   count   Total amount of time read local messages took for     FLOAT
                            process delivery.
                            Note: This column is not currently valid. It should
                            not be used.
MsgProcMDelivery    count   Total amount of time read processor messages took     FLOAT
                            for mailbox delivery.
                            Note: This column is not currently valid. It should
                            not be used.
MsgProcPDelivery    count   Total amount of time read processor messages took     FLOAT
                            for process delivery.
                            Note: This column is not currently valid. It should
                            not be used.

Net Circuit Management Columns

For the names and descriptions of these columns, see Net Circuit Management Columns in
Chapter 6: ResUsageSpma Table on page 49.

Per-Bynet Network Transport Data Columns
For the names and descriptions of these columns, see Per-Bynet Network Transport Data
Columns in Chapter 6: ResUsageSpma Table on page 49.

Net Miscellaneous Contention Management Columns
These columns identify some additional contention management not addressed in the other
contention management areas.

Resource Usage Macros and Tables

237

Appendix B: ResUsageIpma Table


Statistics Columns

Column Name

Mode

Description

Data Type

NetActiveMrg

count

The number of concurrent active merges on all Bynets.

FLOAT

NetBrdWindowOverrun

count

Broadcast window overruns on all Bynets.

FLOAT

Note: This column is net-specific; that is, it relates to each specific
Bynet. On a single-node system, net-specific statistics are not
meaningful and are always zero.
NetCBRError

count

Note: This column is not valid and returns a zero value.

FLOAT

NetDanglingAborts

count

Note: This column is not valid and returns a zero value.

FLOAT

NetDanglingCommits

count

Note: This column is not valid and returns a zero value.

FLOAT

NetMrgBlock

count

Number of times a merge message was blocked until delivery of outstanding outgoing messages.

FLOAT

NetMrgBufWaits

count

Number of times an IOP task encountered an empty row-block buffer on all Bynets.

FLOAT

NetMsgChannelBlock

count

Number of times the net software was blocked because the channel
was not in RxReady state on the receiver.

FLOAT

NetMrgHeapFails

count

Number of times a merge operation heap space request failed.

FLOAT

NetMrgHeapRequests

count

Number of times a merge operation requested heap space.

FLOAT

NetMsgFCBlock

count

Number of times the net software was blocked because the receiver
was flow controlled.

FLOAT

NetMsgFCSleep

count

Number of times a transmitter process was put to sleep because it was flow controlled.

FLOAT

NetMsgGroupBlock

count

Number of times the net software was blocked because the receiver
could not implicitly enter the group.

FLOAT

NetMsgResourceBlock

count

Number of times the net software was blocked because the receiver
could not get the necessary resources.

FLOAT

NetMsgRxBlock

count

Number of times the net software could not accept a message and
caused a transmitter to block.

FLOAT

Net Queues Columns


These columns identify lengths of the various internal queues used by the network controllers.


NetSamples can be used to normalize all aggregated sampled statistics to an average
queue-length basis.
Example: Dividing NetPtPQueue by NetSamples yields the average point-to-point queue
length over all samples on all networks taken during this log interval.
All of the aggregated sampled statistics columns in the following table are net-specific; that
is, they relate to each specific Bynet. On a single-node system, net-specific statistics are not
meaningful and are always zero.
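The normalization described above can be sketched in SQL. This is an illustrative query, not part of the manual: it assumes NetSamples is logged in the same ResUsageIpma row as the queue-length columns, and it uses the Teradata NULLIFZERO function to guard against intervals with no samples.

```sql
-- Average point-to-point and broadcast queue lengths per log interval.
SELECT TheDate,
       TheTime,
       NodeID,
       NetPtPQueue / NULLIFZERO(NetSamples) AS AvgPtPQueueLen,
       NetBrdQueue / NULLIFZERO(NetSamples) AS AvgBrdQueueLen
FROM DBC.ResUsageIpma
ORDER BY TheDate, TheTime, NodeID;
```

On a single-node system these net-specific columns are always zero, so the computed averages will be zero as well.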


Column Name

Mode

Description

Data Type

NetPtPQueue

count

Aggregated sample point-to-point normal priority queue length on all Bynets.

FLOAT

NetBlockQueue

track

Total number of services on the BlockableServices queue at the current time.

FLOAT

Services can be blocked for a variety of reasons, including receiver
flow control, receiver resource usage, and daemon services.
NetBlockQueueMax

max

Maximum number of services on the BlockableServices queue in this log interval.

FLOAT

NetBlockQueueTotal

count

Total number of services added to the BlockableServices queue in this log interval.

FLOAT

NetBrdQueue

count

Aggregated sample broadcast normal priority queue length on all Bynets.

FLOAT

NetPtPQueueMax

max

Maximum reported value of NetPtPQueue in this reporting interval.

FLOAT

NetBrdQueueMax

max

Maximum reported value of NetBrdQueue in this reporting interval.

FLOAT

NetHPBrdQueue

count

Aggregated sample broadcast high priority queue length on all networks.

FLOAT

NetHPBrdQueueMax

max

Maximum value of NetHPBrdQueue in this reporting interval.

FLOAT

NetPendMrgQueue

count

Current count of pending merges, regardless of which net. A merge
may be queued for reasons such as:

FLOAT

The local IOP memory is saturated.
System memory is thrashing.
NetHPPtPQueue

count

Aggregated sample point-to-point high priority queue length on all Bynets.

FLOAT

NetHPPtPQueueMax

max

Maximum value of NetHPPtPQueue in this reporting interval.

FLOAT

Reserved Columns
These reserved column names expand to the values 00 through 19, so that the column names
are RSSInternal00, RSSInternal01, and so on.
Column Name

Mode

Description

Data Type

RSSInternal[00-06]

track

Reserved for use by the RSS application.

FLOAT

RSSInternal[07-08, 10-11, 13-14, 16-19]

count

Reserved for use by the RSS application.

FLOAT

RSSInternal[09, 12, 15]

max

Reserved for use by the RSS application.

FLOAT


Spare Columns
The ResUsageIpma table spare fields are named Spare00 through Spare09, and SpareInt.
The SpareInt field has a 32-bit internal resolution, while all other spare fields have a 64-bit
internal resolution. All spare fields default to the count data type, but can be converted to min,
max, or track type data fields if needed when they are used.
The following table describes the Spare field currently being used.
Column Name

Description

Spare00

WM CPU COD value in tenths of a percent. For example, a value
of 500 represents a WM CPU COD value of 50.0%.
The value is set to 1000 if the WM CPU COD is disabled.
Note: This field will be converted to the WM_CPU_COD column in
Teradata Database 15.0.
Note: WM CPU COD is not supported on SLES 10. Its value is set to
1000 on SLES 10.
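The tenths-of-a-percent encoding can be decoded with a simple query. This is a sketch, not part of the manual; the column names come from the table above.

```sql
-- Decode Spare00 (WM CPU COD) into a percentage.
-- A stored value of 500 yields 50.0; 1000 (COD disabled) yields 100.0.
SELECT TheDate,
       TheTime,
       NodeID,
       Spare00 / 10.0 AS WM_CPU_COD_Pct
FROM DBC.ResUsageIpma;
```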

Related Topics
For details on the different types of data fields, see About the Mode Column on page 42.


APPENDIX C

ResUsageIvpr Table

The ResUsageIvpr table includes resource usage data for system-wide information.
Note: The ResUsageIvpr table is generally not used at customer sites and is only used by
Teradata engineers.

Housekeeping Columns
Relational Primary Index Columns
These columns taken together form the primary index.
Column Name

Mode

Description

Data Type

TheDate

n/a

Date of the log entry.

DATE

TheTime

n/a

Nominal time of the log entry.

FLOAT

Note: Under conditions of heavy system load, entries may be
logged late (typically, by no more than one or two seconds), but this
column will still contain the time value when the entry should have
been logged. For more information, see the Secs and NominalSecs
columns.
NodeID

n/a

Node ID on which the vproc resides. The Node ID is formatted as
CCC-MM, where CCC denotes the three-digit cabinet number and
MM denotes the two-digit chassis number of the node. For
example, a node in chassis 9 of cabinet 3 has a node ID of '003-09'.

INTEGER

Note: SMP nodes have a chassis and cabinet number of 1. For
example, the node ID of an SMP node is '001-01'.

Miscellaneous Housekeeping Columns


Column Name

Mode

Description

Data Type

GmtTime

n/a

Greenwich Mean Time, which is not affected by the Daylight Saving Time
adjustments that occur twice a year.

FLOAT

NodeType

n/a

Type of node, representing the per node system family type. For
example, 5600C or 5555H.

CHAR(8)

VprId

n/a

Identifies the vproc number (non-Summary Mode) or the vproc
type (Summary Mode; 0 = NODE, 1 = AMP, 2 = PE, 3 = GTW,
4 = RSG, 5 = TVS).

INTEGER


VprType

n/a

The values can be NODE, AMP, PE, GTW, RSG, or TVS (see
Teradata Virtual Storage).

CHAR(4)

Secs

n/a

Actual number of seconds in the log period represented by this row.
Normally the same as NominalSecs, but can be different in three
cases:

SMALLINT

The first interval after a log rate change
A sample logged late because of load on the system
System clock adjustments that affect reported Secs
Useful for normalizing the count statistics contained in this row, for
example, to a per-second measurement.
CentiSecs

n/a

Number of centiseconds in the logging period. This column is
useful when performing data calculations with small elapsed times,
where the difference between centisecond-based data and whole
seconds results in a percentage error.

INTEGER

NominalSecs

n/a

Specified or nominal number of seconds in the logging period.

SMALLINT

SummaryFlag

n/a

Summarization status of this row. Possible values are 'N' if the row
is a non-summary row, and 'S' if the row is a summary row.

CHAR (1)

NCPUs

n/a

Number of CPUs on this node.

SMALLINT

This column is useful for normalizing the CPU utilization column
values for the number of CPUs on the node. This is especially
important in coexistence systems, where the number of CPUs can
vary across system nodes.
Active

max

Controls whether or not the rows will be logged to the resource
usage tables if Active Row Filter Mode is enabled.

FLOAT

If Active is set to a non-zero value, the row contains data columns.
If Active is set to a zero value, none of the data columns in the row
have been updated during the logging period.
For example, if you enable Active Row Filter Mode, the rows that
have a zero Active column value will not be logged to the resource
usage tables.
TheTimestamp

n/a

Number of seconds since midnight, January 1, 1970.

BIGINT

This column is useful for aligning data with the DBQL log.
CODFactor

n/a

PM CPU COD value in tenths of a percent. For example, a
value of 500 represents a PM CPU COD value of 50.0%.

SMALLINT

The value is set to 1000 if the PM CPU COD is disabled.
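The Secs and CentiSecs columns described above support per-second normalization of any count column in this table. The following sketch is illustrative, not from the manual; it uses FileTJCalls, a count column described later in this appendix, and NULLIFZERO to guard against a zero-length interval.

```sql
-- Normalize a count column to a per-second rate using the actual
-- logging interval; CentiSecs gives finer resolution for short intervals.
SELECT TheDate,
       TheTime,
       NodeID,
       VprId,
       FileTJCalls / NULLIFZERO(CAST(Secs AS FLOAT)) AS TJCallsPerSec,
       FileTJCalls * 100.0 / NULLIFZERO(CAST(CentiSecs AS FLOAT)) AS TJCallsPerSecFine
FROM DBC.ResUsageIvpr;
```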


Statistics Columns
File System Columns
Cylinder Defragmentation Overhead Columns
These columns identify background file system overhead associated with defragmenting
fragmented free space to achieve one large free space within that cylinder (CylDefrag).

Event counts are found in ResUsageSvpr.

Each cylinder defragment event implies one logical cylinder index read and one logical
cylinder index write.

Only logical I/Os and the amount moved (KB) are identified. Cylinder defragments are
done only on cylinders containing permanent tables (including append and transient
journal tables).

Column Name

Mode

Description

Data Type

FileDbCylDefragKB

count

KB moved by the FileDbCylDefragIO.

FLOAT

FileDbCylDefragIO

count

Number of permanent data block logical I/Os due to cylinder defragmentation.

FLOAT

Cylinder MiniCylPack Overhead Columns


These columns identify file system overhead associated with MiniCylPack operations, which
are performed to make a free cylinder available when one is needed but not available.

Event counts are found in ResUsageSvpr.

Only logical I/Os and the amount moved (KB) are identified, except the amount moved
for cylinder indexes, because it can be calculated by multiplying the current cylinder
index fixed size by the I/Os.

MiniCylPack operations are done only on cylinders containing permanent tables (including
append and transient journal tables).

Column Name

Mode

Description

Data Type

FilePCiMCylPackIO

count

Number of permanent cylinder index logical I/Os due to performing MiniCylPack operations.

FLOAT

FilePDbMCylPackKB

count

KB moved by FilePDbMCylPackIO.

FLOAT

FilePDbMCylPackIO

count

Number of permanent data block logical I/Os due to performing MiniCylPack operations.

FLOAT

Cylinder Split and Migrate Overhead Columns


These columns further identify file system cylinder split/migrate (CylMigr) overhead
performed when cylinders cannot accommodate new data.


Note: Event counts are found in Chapter 13: ResUsageSvpr Table.

Only logical I/Os and the amount moved (KB) for data blocks are identified. Each cylinder
migration event implies one logical read and three logical writes of the cylinder index. Only
permanent tables (including append and transient journal tables) are migrated.
Column Name

Mode

Description

Data Type

FileDbCylMigrKB

count

KB moved by FileDbCylMigrIOs.

FLOAT

FileDbCylMigrIO

count

Number of data block logical I/Os due to cylinder migration.

FLOAT

Data Block Creation Columns


These columns identify the file system operations required when a data block is being created
(BlkCreate).
Note: These columns do not include the new data blocks created when an existing data block
was updated, as described in the Data Block Update Operations Columns description.
Column Name

Mode

Description

Data Type

FilePDbCreateKB

count

KB created by FilePDbCreates.

FLOAT

FilePDbCreates

count

Number of permanent table (including append and transient journal tables) data blocks created.

FLOAT

FileSDbCreateKB

count

KB created by FileSDbCreates.

FLOAT

FileSDbCreates

count

Number of spool data blocks created.

FLOAT

Data Block Update Operations Columns


These columns identify the file system operations required when a data block is being updated
(BlkUpd). When a block is updated, the update can be in place and require no new data
blocks, or it can spill over the current data block and require one, two, three, or more new
data blocks in addition to the current data block. Only logical I/Os and the amount moved
(KB) are identified, except for the amount moved for cylinder indexes, because it can be
calculated by multiplying the current fixed cylinder index size by the I/Os. Data block updates
should only be performed on permanent tables (including append and transient journal
tables), so no attempt is made to separate permanent and spool data segments.
Column Name

Mode

Description

Data Type

FileCiUpd0IO

count

Number of cylinder index logical I/Os performed for a block update operation requiring no new data blocks.

FLOAT

FileCiUpd1IO

count

Number of cylinder index logical I/Os performed for a block update operation requiring one new data block.

FLOAT


FileCiUpd2IO

count

Number of cylinder index logical I/Os performed for a block update operation requiring two new data blocks.

FLOAT

FileCiUpd3IO

count

Number of cylinder index logical I/Os performed for a block update operation requiring three new data blocks.

FLOAT

FileCiUpdNIO

count

Number of cylinder index logical I/Os performed for a block update operation requiring over three new data blocks.

FLOAT

FileDbUpd0IO

count

Number of data block logical I/Os performed for a block update operation requiring no new data blocks.

FLOAT

FileDbUpd1IO

count

Number of data block logical I/Os performed for a block update operation requiring one new data block.

FLOAT

FileDbUpd2IO

count

Number of data block logical I/Os performed for a block update operation requiring two new data blocks.

FLOAT

FileDbUpd3IO

count

Number of data block logical I/Os performed for a block update operation requiring three new data blocks.

FLOAT

FileDbUpd0KB

count

KB moved by FileDbUpd0IO.

FLOAT

FileDbUpd1KB

count

KB moved by FileDbUpd1IO.

FLOAT

FileDbUpd2KB

count

KB moved by FileDbUpd2IO.

FLOAT

FileDbUpd3KB

count

KB moved by FileDbUpd3IO.

FLOAT

FileDbUpdNKB

count

KB moved by FileDbUpdNIO.

FLOAT

FileDbUpdNIO

count

Number of data block logical I/Os performed for a block update operation requiring over three new data blocks.

FLOAT
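Because the update buckets partition block-update activity by the number of new data blocks required, summing them gives total update I/O and KB. The following query is an illustrative sketch, not part of the manual.

```sql
-- Total block-update logical I/O and KB moved, and average KB per I/O.
SELECT TheDate,
       TheTime,
       NodeID,
       VprId,
       FileDbUpd0IO + FileDbUpd1IO + FileDbUpd2IO
           + FileDbUpd3IO + FileDbUpdNIO AS TotalUpdIO,
       FileDbUpd0KB + FileDbUpd1KB + FileDbUpd2KB
           + FileDbUpd3KB + FileDbUpdNKB AS TotalUpdKB,
       (FileDbUpd0KB + FileDbUpd1KB + FileDbUpd2KB
           + FileDbUpd3KB + FileDbUpdNKB)
         / NULLIFZERO(FileDbUpd0IO + FileDbUpd1IO + FileDbUpd2IO
           + FileDbUpd3IO + FileDbUpdNIO) AS AvgKBPerIO
FROM DBC.ResUsageIvpr;
```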

Multi-Row Requests Columns


These columns identify the significant multi-row requests made by application software on
the file system. Rows are distinguished as permanent data (P), spool (S) or user append table /
permanent journal table (APt).
Column Name

Mode

Description

Data Type

FileAPtBlkRead

count

Number of requests for an append data block read.

FLOAT

FileAPtBlkReplace

count

Number of requests for an append data block replace.

FLOAT

FileAPtRownins

count

Number of requests for an append data multi-row insert.

FLOAT

FileAPtRowNDel

count

Number of requests for an append data multi-row delete.

FLOAT

FileAPtRowNUpd

count

Number of requests for an append data multi-row update.

FLOAT

FileAPtSortable

count

Number of requests for an append table sort.

FLOAT

FileAPtTabdelete

count

Number of requests for an append table delete.

FLOAT


FileAPtTabdelra

count

Number of requests for an append multi-row delete.

FLOAT

FileAPtTabmrows

count

Number of requests for an append table modification.

FLOAT

FileAPtTabrblocks

count

Number of requests for an append table multi-block read.

FLOAT

FilePBlkRead

count

Number of requests for a permanent data block read.

FLOAT

FilePBlkReplace

count

Number of requests for a permanent data block replace.

FLOAT

FilePRowNDel

count

Number of requests for a permanent data multi-row delete.

FLOAT

FilePRownins

count

Number of requests for a permanent data multi-row insert.

FLOAT

FilePRowNUpd

count

Number of requests for a permanent data multi-row update.

FLOAT

FilePSortable

count

Number of requests for permanent table sort.

FLOAT

FilePTabdelete

count

Number of requests for a permanent table delete.

FLOAT

FilePTabdelra

count

Number of requests for a multi-row delete.

FLOAT

FilePTabmrows

count

Number of requests for a permanent table modification.

FLOAT

FilePTabrblocks

count

Number of requests for a permanent table multi-block read.

FLOAT

FileSBlkRead

count

Number of requests for a spool data block read.

FLOAT

FileSBlkReplace

count

Number of requests for a spool data block replace.

FLOAT

FileSRowNDel

count

Number of requests for a spool data multi-row delete.

FLOAT

FileSRownins

count

Number of requests for a spool data multi-row insert.

FLOAT

FileSRowNUpd

count

Number of requests for a spool data multi-row update.

FLOAT

FileSSortable

count

Number of requests for spool table sort.

FLOAT

FileSTabdelete

count

Number of requests for a spool table delete.

FLOAT

FileSTabdelra

count

Number of requests for a spool multi-row delete.

FLOAT

FileSTabmrows

count

Number of requests for a spool table modification.

FLOAT

FileSTabrblocks

count

Number of requests for a spool table multi-block read.

FLOAT

Single-Row Requests Columns


These columns identify the significant single-row requests made by application software on
the file system. Rows are distinguished as permanent data (P), spool (S) or user append table /
permanent journal table (APt).
Column Name

Mode

Description

Data Type

FileAPtRowAppend

count

Number of requests for an append row append.

FLOAT

FileAPtRowDelete

count

Number of requests for an append row delete.

FLOAT


FileAPtRowInsert

count

Number of requests for an append row insert.

FLOAT

FileAPtRowReadInit

count

Number of requests for an initial append row read.

FLOAT

FileAPtRowReadCont

count

Number of requests for a continued append row read.

FLOAT

FileAPtRowReplace

count

Number of requests for an append row replace/update.

FLOAT

FilePRowAppend

count

Number of requests for a row append.

FLOAT

FilePRowDelete

count

Number of requests for a permanent row delete.

FLOAT

FilePRowInsert

count

Number of requests for a permanent row insert.

FLOAT

FilePRowReadCont

count

Number of requests for a continued permanent row read.

FLOAT

FilePRowReadInit

count

Number of requests for an initial permanent row read.

FLOAT

FilePRowReplace

count

Number of requests for a permanent row replace.

FLOAT

FileSRowAppend

count

Number of requests for a row append.

FLOAT

FileSRowDelete

count

Number of requests for a spool row delete.

FLOAT

FileSRowInsert

count

Number of requests for a spool row insert.

FLOAT

FileSRowReadCont

count

Number of requests for a continued spool row read.

FLOAT

FileSRowReadInit

count

Number of requests for an initial spool row read.

FLOAT

FileSRowReplace

count

Number of requests for a spool row replace/update.

FLOAT

Transient Journal Overhead Columns


These columns identify file system overhead associated with maintaining a transient journal
(TJ).
Column Name

Mode

Description

Data Type

FileTJBufUpdates

count

Number of transient journal buffer updates.

FLOAT

Transient Journal Requests Column


These columns identify the significant transient journal requests made by application software
on the file system.
Column Name

Mode

Description

Data Type

FileTJCalls

count

Number of transient journal calls.

FLOAT

FileTJDbUpdates

count

Number of WAL data blocks modified. The modification can either
be an update or a delete of an existing WAL or TJ record.

FLOAT


Write Ahead Logging Columns

These columns relate to write ahead logging (WAL), the log-based file system recovery scheme
in which modifications to permanent data are written to a log file, the WAL log.
Column Name

Mode

Description

Data Type

FileTJAppends

count

Number of times transient journal records were appended to the
WAL log. A single append call can append multiple transient
journal rows. A transient journal append by itself does not imply a
write of a WAL block, nor a WAL Cylinder Index (WCI)
modification.
FLOAT

FileTJFlush

count

Number of times a request to force transient journal records within
the WAL log to be written to disk is issued. An increment of this
counter may or may not result in an I/O, depending on whether the
request was to flush records that were already on disk.

FLOAT

FileWAppends

count

Number of times a record was appended to the WAL log. A single
append call can append multiple rows. Subtracting FileTJAppends
from this counter results in the number of times non-transient
journal rows were appended to the WAL log.

FLOAT

FileWDBCreates

count

Number of WAL data blocks created. The block can contain either
TJ records, WAL records, or both.

FLOAT

FileWFlush

count

Number of times a request to force any record in the WAL log to be
written to disk is issued. An increment of this counter may or may
not result in an I/O, depending on whether the request was to flush
records that were already on disk. Subtracting FileTJFlush from this
counter results in the number of times a non-transient journal
WAL flush was issued.

FLOAT

FileWRowDelete

count

Number of times rows were deleted from the WAL log.

FLOAT

FileWTabDelRa

count

Number of requests for a WAL multi-row delete.

FLOAT
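The subtractions described for FileWAppends and FileWFlush can be computed directly. This query is an illustrative sketch, not part of the manual.

```sql
-- Non-transient-journal WAL activity, per the column descriptions above.
SELECT TheDate,
       TheTime,
       NodeID,
       VprId,
       FileWAppends - FileTJAppends AS NonTJAppends,
       FileWFlush   - FileTJFlush   AS NonTJFlushes
FROM DBC.ResUsageIvpr;
```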

General Concurrency Control Monitor Management Columns


These columns identify monitor activities for Teradata Database concurrency control.
Column Name

Mode

Description

Data Type

MonAllocates

count

Number of monitors allocated.

FLOAT

MonBlocks

count

Number of times entry into a monitor was blocked. For example,
the number of requests minus the number of blocks equals the
number of immediate grants.

FLOAT

MonEnters

count

Number of times entry into a monitor was requested.

FLOAT

MonYields

count

Number of times a monitor yield was requested.

FLOAT
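The immediate-grant arithmetic noted for MonBlocks can be expressed as a query. This is a sketch, not part of the manual.

```sql
-- Immediate grants = entry requests minus blocked entries.
SELECT TheDate,
       TheTime,
       NodeID,
       VprId,
       MonEnters - MonBlocks AS ImmediateGrants,
       MonBlocks / NULLIFZERO(MonEnters) AS BlockedFraction
FROM DBC.ResUsageIvpr;
```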


Net Columns
Message Type Columns
These columns subdivide all messages sent and received into the type of message, where:

Hash messages (Hash) are data sent to a destination through its primary or fallback hash
value.

Processor messages (Proc) are data sent to a destination through a vproc ID.

Group messages (Group) are broadcast messages to be received by members of a group.

Local messages (Local) are messages communicated locally within the node.

Channel messages (Chan) are data sent between vprocs through channel IDs for the
purposes of a private conversation to perform functions such as row redistribution.

Mailbox messages (Mbox) are data sent between vprocs through mailbox IDs for similar
purposes as channel messages.

Accounting is duplicated from two different perspectives, because Hash + Proc + Group +
Local messages = Chan + Mbox messages.
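The duplicated accounting gives a built-in consistency check: both perspectives should total to the same counts. This query is an illustrative sketch, not part of the manual.

```sql
-- Both deltas should be zero if the two accountings agree.
SELECT TheDate,
       TheTime,
       NodeID,
       VprId,
       (MsgHashReads + MsgProcReads + MsgGroupReads + MsgLocalReads)
         - (MsgChanReads + MsgMboxReads) AS ReadAccountingDelta,
       (MsgHashWrites + MsgProcWrites + MsgGroupWrites + MsgLocalWrites)
         - (MsgChanWrites + MsgMboxWrites) AS WriteAccountingDelta
FROM DBC.ResUsageIvpr;
```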
Column Name

Mode

Description

Data Type

MsgChanReads

count

Number of channel messages read by this vproc.

FLOAT

MsgChanWrites

count

Number of channel messages written by this vproc.

FLOAT

MsgHashReads

count

Number of hash messages read by this vproc.

FLOAT

MsgHashWrites

count

Number of hash messages written by this vproc.

FLOAT

MsgGroupReads

count

Number of group messages read by this vproc.

FLOAT

MsgGroupWrites

count

Number of group messages written by this vproc.

FLOAT

MsgLocalReads

count

Number of local messages read by this vproc.

FLOAT

MsgLocalWrites

count

Number of local messages written by this vproc.

FLOAT

MsgProcReads

count

Number of processor messages read by this vproc.

FLOAT

MsgProcWrites

count

Number of processor messages written by this vproc.

FLOAT

MsgMboxReads

count

Number of mailbox messages read by this vproc.

FLOAT

MsgMboxWrites

count

Number of mailbox messages written by this vproc.

FLOAT


Message Delivery Times Columns


These columns identify the time it took for hash, processor, group and local messages to reach
their destination. Two times are provided:

Message transmission to mailbox delivery (MDelivery).

Mailbox delivery to process delivery (PDelivery).

Column Name

Mode

Description

Data Type

MsgHashMDelivery

count

Total amount of time read hash messages took for mailbox delivery.

FLOAT

Note: This column is not currently valid. It should not be used.


MsgHashPDelivery

count

Total amount of time read hash messages took for process delivery.

FLOAT

Note: This column is not currently valid. It should not be used.


MsgGroupMDelivery

count

Total amount of time read group messages took for mailbox delivery.

FLOAT

Note: This column is not currently valid. It should not be used.


MsgGroupPDelivery

count

Total amount of time read group messages took for process delivery.

FLOAT

Note: This column is not currently valid. It should not be used.


MsgLocalMDelivery

count

Total amount of time read local messages took for mailbox delivery.

FLOAT

Note: This column is not currently valid. It should not be used.


MsgLocalPDelivery

count

Total amount of time read local messages took for process delivery.

FLOAT

Note: This column is not currently valid. It should not be used.


MsgProcMDelivery

count

Total amount of time read processor messages took for mailbox delivery.

FLOAT

Note: This column is not currently valid. It should not be used.


MsgProcPDelivery

count

Total amount of time read processor messages took for process delivery.

FLOAT

Note: This column is not currently valid. It should not be used.

Transient Journal Purge Overhead Columns


These columns identify the background overhead associated with the occasional transient
journal purge operation.
Column Name

Mode

Description

Data Type

TJPurges

count

The number of purge passes in which a block-by-block scan is done.

FLOAT


TJDbPurgeDeletes

count

The number of blocks mapped in during the scan that were included in the ranges of blocks that were deleted.

FLOAT

Before Write Ahead Logging (WAL), the ratio of deletes to reads
would have been a useful measure of the effectiveness of the purge
processing. However, with WAL, the ratio cannot be interpreted
quite so simply because:
1 The range of deleted blocks could include blocks that were not
actually mapped in (and therefore not counted). Blocks that
contain only WAL records are not mapped in during the scan, as
they are automatically filtered out. Under typical conditions,
there are probably relatively few such blocks. TJ and WAL
records are typically generated in an interleaved sequence by
regular SQL transactions. But during periods when the system
workload is dominated by MultiLoad/FastLoad work, there will
be relatively few TJ records written, so the proportion of
WAL-only blocks would probably be significant.
2 Post-WAL, neither TJDbPurgeReads nor TJDbPurgeDeletes gets
incremented during a normal purge pass. Instead of scanning
the active data blocks, a pointer to the oldest active transaction is
maintained, which is a quicker method. Therefore, PurgeTJ() can
simply compute the bounds of the range of records that can be
deleted in the part of the WAL/TJ that precedes the start of the
oldest transaction. This does not require any scanning, and the
system cannot definitively determine how many blocks actually
get deleted.
3 If the oldest transaction remains open for a long time, the quick
purge method is not effective. Therefore, the system reverts back
to the full scan method. TJDbPurgeReads and TJDbPurgeDeletes
are only incremented during a full scan.
TJDbPurgeReads

count

The number of blocks actually mapped in during the purge scan.
This is a reasonable approximate measure of the I/O load. The
system uses full-cylinder read mode, but the block count would still
be roughly proportionate to the I/O load.

FLOAT
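Given the caveats above, the deletes-to-reads ratio is only meaningful for intervals in which a full scan actually ran. The following sketch (not part of the manual) restricts the ratio accordingly.

```sql
-- Purge effectiveness for full-scan passes only.
SELECT TheDate,
       TheTime,
       NodeID,
       VprId,
       TJDbPurgeDeletes / NULLIFZERO(TJDbPurgeReads) AS PurgeDeleteRatio
FROM DBC.ResUsageIvpr
WHERE TJDbPurgeReads > 0;
```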

Summary Mode
When Summary Mode is active for the ResUsageIvpr table, one row is written to the database
for each type of vproc on each node in the system, summarizing the vprocs of that type on
that node, for each log interval.
You can determine if a row is in Summary Mode by checking the SummaryFlag column for
that row.


IF the SummaryFlag column value is

THEN the data for that row is being logged

'S'

in Summary Mode.

'N'

normally.
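Summary rows can be separated from detail rows with a simple predicate on SummaryFlag. This query is an illustrative sketch, not part of the manual; recall that in summary rows VprId holds the vproc type code rather than a vproc number.

```sql
-- Select only the Summary Mode rows.
SELECT TheDate,
       TheTime,
       NodeID,
       VprId,        /* vproc type code in summary rows */
       VprType
FROM DBC.ResUsageIvpr
WHERE SummaryFlag = 'S';
```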

Spare Columns
The ResUsageIvpr table spare fields are named Spare00 through Spare09, and SpareInt.
The SpareInt field has a 32-bit internal resolution, while all other spare fields have a 64-bit
internal resolution. All spare fields default to the count data type, but can be converted to min,
max, or track type data fields if needed when they are used.
The following table describes the Spare field currently being used.
Column Name

Description

Spare00

WM CPU COD value in tenths of a percent. For example, a value
of 500 represents a WM CPU COD value of 50.0%.
The value is set to 1000 if the WM CPU COD is disabled.
Note: This field will be converted to the WM_CPU_COD column in
Teradata Database 15.0.
Note: WM CPU COD is not supported on SLES 10. Its value is set to
1000 on SLES 10.

Related Topics
For details on the different types of data fields, see About the Mode Column on page 42.


APPENDIX D

ResIpmaView and ResIvprView Views

This chapter provides the definitions of the ResIpmaView and ResIvprView views.
Note: These views are intended primarily for Teradata engineers.


ResIpmaView
ResIpmaView is based on the ResUsageIpma table.
REPLACE VIEW DBC.ResIpmaView
AS SELECT
/* housekeeping fields */
TheDate,
NodeID (FORMAT '999-99') AS NodeID,
TheTime,
GmtTime,
NodeType,
TheTimestamp,
CentiSecs,
Secs,
NominalSecs,
CodFactor,
NCPUs,
Reserved,
Vproc1,
VprocType1,
Vproc2,
VprocType2,
Vproc3,
VprocType3,
Vproc4,
VprocType4,
Vproc5,
VprocType5,
Vproc6,
VprocType6,
Vproc7,
VprocType7,
MemSize,
NodeNormFactor,
/* Aliased Fields */
/* PM/WM CODs */
CodFactor       AS PM_CPU_COD,     /* PM CPU Capacity On Demand factor */
/* RSS time stamps in seconds since 1970. */
RssInternal00   AS GatherStart,
RssInternal01   AS GatherCpuInfo,
RssInternal02   AS GatherMemInfo,
RssInternal04   AS GatherDiskInfo,
RssInternal05   AS Kcollect,
RssInternal06   AS GatherMmBufs,
RssInternal07   AS OpenMsgCnt,
RssInternal08   AS OpenMsgTotal,
RssInternal09   AS OpenMsgMax,
RssInternal10   AS GatherMsgCnt,
RssInternal11   AS GatherMsgTotal,
RssInternal12   AS GatherMsgMax,
RssInternal13   AS CloseMsgCnt,
RssInternal14   AS CloseMsgTotal,
RssInternal15   AS CloseMsgMax,
/* Spare Field usage */
/* PM/WM CODs */
Spare00         AS WM_CPU_COD,     /* WM CPU Capacity On Demand factor */

/* transformed fields */
/* PM/WM CODs */
( PM_CPU_COD * WM_CPU_COD / 1000 ) (FORMAT '----9') AS CPU_COD, /* effective CPU COD */
/* The default GroupId setting below shows how to
* select different node families, but does not differentiate the
* resulting groups. If a coexistence system had 5550, 5600 and 5650 nodes
* as well as some PE-only nodes the CASE expression might look like:
* WHEN (VPROCTYPE1='AMP' AND NodeType
IN ('5650H')) THEN '5650Nodes'
* WHEN (VPROCTYPE1='AMP' AND NodeType
IN ('5600H')) THEN '5600Nodes'
* WHEN (VPROCTYPE1='AMP' AND NodeType NOT IN ('5600H', '5650')) THEN
'5550Nodes'
* ELSE 'PEonly'

254

Resource Usage Macros and Tables

Appendix D: ResIpmaView and ResIvprView Views


ResIpmaView
*/
CASE
WHEN (VPROCTYPE1='AMP' AND NodeType
IN ('5600H')) THEN 'AMPNodes'
WHEN (VPROCTYPE1='AMP' AND NodeType NOT IN ('5600H')) THEN 'AMPNodes'
ELSE 'PEonly'
END AS GroupId,
/* table data fields */
FROM DBC.ResUsageIpma WITH CHECK OPTION;

Note: The ResUsageIpma table fields have been removed from this sample output.
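As a sketch of how the aliased and transformed columns can be used (a hypothetical query, not part of the shipped macros), the PM, WM, and effective CPU COD factors can be compared per node through the view:

```sql
/* Hypothetical example: report COD factors per node from ResIpmaView.
   CPU_COD is the effective factor, PM_CPU_COD * WM_CPU_COD / 1000,
   where each factor is expressed in tenths of a percent. */
SELECT TheDate,
       NodeID,
       PM_CPU_COD,
       WM_CPU_COD,
       CPU_COD
FROM   DBC.ResIpmaView
ORDER  BY TheDate, NodeID;
```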

ResIvprView
ResIvprView is based on the ResUsageIvpr table.
REPLACE VIEW DBC.ResIvprView
AS SELECT
/* housekeeping fields */
thedate,
NodeID      (FORMAT '999-99')   AS NodeID,
thetime,
GmtTime,
NodeType    (FORMAT 'X(8)')     AS NodeType,
TheTimestamp,
CentiSecs   (FORMAT '-------9') AS CentiSecs,
Secs        (FORMAT '----9')    AS Secs,
NominalSecs (FORMAT 'ZZZ9')     AS NominalSecs,
CodFactor   (FORMAT '-----9')   AS CodFactor,
SummaryFlag (FORMAT 'X(1)')     AS SummaryFlag,
Reserved    (FORMAT 'X(3)')     AS Reserved,
NCPUs       (FORMAT 'ZZ9')      AS NCPUs,
vprid,
VprType     (FORMAT 'X(4)')     AS VprType,
/* Aliased Fields */
/* PM/WM CODs */
CodFactor AS PM_CPU_COD,       /* PM CPU Capacity On Demand factor */
/* Spare Field usage */
/* PM/WM CODs */
Spare00 AS WM_CPU_COD,         /* WM CPU Capacity On Demand factor */
/* transformed fields */
/* PM/WM CODs */
( PM_CPU_COD * WM_CPU_COD / 1000 ) (FORMAT '----9') AS CPU_COD, /* effective CPU COD */
/* Changes in GroupId definition affect the displayed grouping in
 * the Res*ByGroup macros. The default setting below shows how to
 * select different node families, but does not differentiate the
 * resulting groups. If a coexistence system had 5550, 5600 and 5650 nodes
 * the CASE expression might look like:
 * WHEN NodeType IN ('5650H') THEN '5650Nodes'
 * WHEN NodeType IN ('5600H') THEN '5600Nodes'
 * ELSE '5550Nodes'
 */
CASE
    WHEN NodeType IN ('5650H') THEN 'A'
    ELSE 'A'
END AS GroupId,
/* Remaining table fields */
FROM DBC.ResUsageIvpr WITH CHECK OPTION;

Note: The ResUsageIvpr table fields have been removed from this sample output.
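Following the comment in the view definition, a coexistence site might apply the customized grouping in an ad hoc query such as the following (a hypothetical sketch; the node type strings vary by system):

```sql
/* Hypothetical example of the customized GroupId expression from the
   comment in the view, applied directly to the ResUsageIvpr table. */
SELECT DISTINCT NodeType,
       CASE
           WHEN NodeType IN ('5650H') THEN '5650Nodes'
           WHEN NodeType IN ('5600H') THEN '5600Nodes'
           ELSE '5550Nodes'
       END AS GroupId
FROM   DBC.ResUsageIvpr;
```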

APPENDIX E

Partition Assignments

With regard to Teradata Database, there is more than one definition of partition. The
partitions here refer to the following Parallel Database Extensions (PDE) and vproc definition:
• A partition is a collection of tasks and associated resources grouped within a virtual
  processor according to the function of the tasks. There are multiple partitions within a
  single virtual processor. Partitions are the primary mechanism used by Teradata Database
  for managing parallel programs.
• Partitions are the subdivision of vproc software processes into 48 semi-isolated domains.
  For example, in an AMP vproc, Partition 11 is the AMP Worker Task Partition. In all other
  vproc types, Partition 11 is unused.
Another partition description is only meaningful in a dialog between client programs and
Teradata Database. It has nothing to do with PDE vproc partitions, but is a way of enforcing
rules about what a client session is allowed to do and of keeping client sessions isolated from
each other. This concept of partitions is centered in the CLIv2 interface, specifically the
CONNECT parcel.
Partition reservation is as follows:
• Partitions 0 through 6 are reserved by PDE.
• Partitions 7 through 47 are for use by Teradata Database.

The table listed under Partition Assignment Listing on page 259 describes the individual
partitions. Teradata Database uses the following vprocs:
Vproc Type   Description

AMP          Access module processors perform database functions, such as executing database
             queries. Each AMP owns a portion of the overall database storage.
GTW          Gateway vprocs provide a socket interface to Teradata Database.
Node         The node vproc handles PDE and operating system functions not directly related to
             AMP and PE work. Node vprocs cannot be externally manipulated, and do not
             appear in the output of the Vproc Manager utility.
PE           Parsing engines perform session control, query parsing, security validation, query
             optimization, and query dispatch.
RSG          Relay Services Gateway provides a socket interface for the replication agent, and for
             relaying dictionary changes to the Teradata Meta Data Services utility.
TVS          Manages Teradata Database storage. AMPs acquire their portions of database
             storage through the TVS vproc.

For more information on partition usage, see CPU Utilization Columns on page 151.

Table Conventions
The following table describes the table symbols used in the partition assignments table below.

The symbol used in the Partition
Assignment Listing                  Indicates

--------                            partition is unused.
                                    activity is observed but not identified.
Partition Assignment Listing


The following table lists the Node, AMP, PE, GTW, RSG, and TVS usage of PDE vproc partitions by PDE and Teradata
Database.

Number   Name                                     Usage
0        Kernel                                   PDE daemons
1        System Debugger                          System Debugger tasks
2        Console                                  PDE console control process (cnscim)
3-6      Interactive 1 through 4                  Console interactive partition programs
7        Service                                  Console utilities (scpstart)
8        Host Utility Procedures                  utadvtsk
9        Filesys                                  File System processes
10       Gateway                                  Gateway processes
11       AWT                                      AMP Worker Tasks (AMP vprocs)
12       Session                                  Session Control tasks (PE vprocs)
13       Dispatch                                 Dispatcher/Parser Partition (PE vprocs)
14       [Unused]                                 --------
15       Startup                                  Startup tasks
16       [Unused]                                 --------
17       RSS Startup                              File System RSS startup
18       Distributed Database File (DDF) Server   DDF services
19       Metadata                                 Metadata Services Gateway (rsgmain, RSG vprocs)
20-29    Interactive Partitions                   DBCCONS utilities
30       [Unused]                                 --------
31       Allocator                                tvsaallocator (TVS vprocs)
32       Node Agent                               tvsa_agent (TVS vprocs)
33       Clique Coordinator                       Clique Coordinator services (TVS vprocs)
34-41    [Unused]                                 --------
42       GDO Monitor                              gdom (Node vproc)
43       Parallel Data Collector                  pdcmaster (Node vproc)
44       Dump save slaves                         csp (Node vproc)
45       Dump save or clear                       csp (Node vproc)
46       Dump list                                csp (Node vproc)
47       Replication                              Replication Gateway (rsgdbsmain, RSG vprocs)
Glossary

AG  Allocation Group
AMP  Access Module Processor
API  Application Programming Interface
AWS  Administration Workstation
AWT  AMP Worker Task
BLC  Block-level compression
BYNET  Banyan Network (high-speed connection)
COD  Capacity on Demand
DBW  Database Window
DDF  Distributed Database File
DDL  Data Definition Language
FSG  File Segment
GTW  Teradata Gateway
I/O  Input/Output
MPP  Massively Parallel Processing
NOMOD  No modification
NUPI  Nonunique Primary Index
PDE  Parallel Database Extensions. PDE is a software interface layer between the operating
system and the Teradata Database software. It provides Teradata Database the ability to run in
a parallel environment, execute vprocs, and more.
PE  Parsing Engine
PG  Performance Group
PGWL  Performance Group Workload
PM  Platform Metering
PMPC  Performance Monitor and Production Control
PM/API  Performance Monitor Application Programming Interface
PP  Performance Period
pWDid  Priority Scheduler Workload Definition ID
RDBMS  Relational Database Management System
ResUsage  Resource Usage. The data stored in the database system resource usage tables.
RSG  Relay Services Group
RSS  Resource Sampling Subsystem. The RSS provides a method to gather statistics from
across the Teradata Database system, and provides the ability to access the statistics through
an API. Resource usage uses the RSS data from the RSS API to log data to the selected resource
usage tables.
SLG  Service Level Goal
SMP  Symmetric Multi-Processing
TBBLC  Temperature-based Block-level Compression
TCHN  Teradata Channel
Tactical Workload Management Method  This workload yields the fastest available response
time and executes at the highest tier, preempting all resource needs of other tiers. This method
is well suited for critical, short-running queries that require fast response times. For more
information, see Teradata Viewpoint User Guide.
TASM  Teradata Active System Management
Timeshare Workload Management Method  This workload can be assigned to one of four
stepped access levels: Top, High, Medium, or Low. The higher access levels are given larger
access rates than the lower levels. For example, an SQL request assigned to a Timeshare WD
with a Top access level, which has an access rate of 8, would receive eight times the amount of
resources of an SQL request assigned to a Low access level.
Timeshare workloads are assigned resources remaining after all allocations have been made
for tactical and Workload Share Percent workloads. For more information, see Teradata
Viewpoint User Guide.
VNET  Virtual Network. Virtual BYNET for a single-node system.
VP  Virtual Partition. A virtual partition divides a system so that a percentage of resources
is allocated to a collection of workloads. A virtual partition can consist of WDs from all
management methods.
TPA  Trusted Parallel Application
vproc  Virtual Processor
TVS  Teradata Virtual Storage
VH Cache  Very hot cache. This cache holds the hottest permanent table cylinders. Tables
that are assigned a temperature of very hot are kept in the FSG cache as long as they:
• Stay very hot.
• Fit into the memory assigned. If the tables cannot fit, the FSG cache considers a sorted list
  of the hottest segments and assigns to the very hot cache, in temperature-sorted order,
  those segments whose temperature is hot enough to qualify.
The temperature takes into account both physical and logical disk accesses and cache hits
(that is, physical and logical I/Os).
WCI  WAL Cylinder Index
Workload Share Percent Management Method  This workload is assigned a proportion of
the resources that are available after allocations have been made for tactical workloads. The
percentage of resources is divided equally between all requests running in the WD. For
example, if the Workload Share Percent is 5% and there are five SQL requests, each SQL
request will get 1% of the shared resources. For more information, see Teradata Viewpoint User
Guide.
WM  Workload Management

Index

Symbols
?, meaning in macro outputs 182

A
Allocation columns
for ResUsageSpdsk 95
for ResUsageSvdsk 125
for ResUsageSvpr 155
AMP information
macros 183, 188
table 131
view 164
AMP Worker Task columns
for ResUsageSawt 73
for ResUsageSpma 64
for ResUsageSvpr 109
API 18
Application programming interfaces. See API
AutoCylPack columns, ResUsageSvpr 143

B
BLC columns, ResUsageSvpr 144
Block-level compression columns. See BLC columns
Broadcast net traffic columns
for ResUsageSpma 57
for ResUsageSps 116
ByGroup macro 30
Bynet network transport data columns, ResUsageIpma 237

C
Channel management columns, ResUsageShst 82
Channel traffic columns
for ResUsageShst 82
for ResUsageSpma 55
Chnsignal status tracking columns, ResUsageSvpr 151
Co-existing node macros. See ByGroup macros
CPU use by AMPs macros
normalized viewing 191
ResAmpCpuByGroup 188
ResCPUByAMP 188
ResCPUByAMPOneNode 188
CPU use by each node macros
ResCPUByGroup 195
ResCPUByNode 195
ResCPUOneNode 195


CPU use by PEs macros


normalized viewing 194
ResCPUByPE 192
ResCPUByPEOneNode 192
ResPeCpuByGroup 192
CPU utilization columns
for ResUsageScpu 46
for ResUsageSpma 63
for ResUsageSps 109
for ResUsageSvpr 151
Cylinder defragmentation overhead columns, ResUsageIvpr 243
Cylinder management overhead columns, ResUsageSvpr 141
Cylinder read columns, ResUsageSvpr 151

D
Data block merge columns, ResUsageSvpr 143
Data block prefetches columns
for ResUsageSpma 53
for ResUsageSps 113, 135
Data segment lock requests columns
for ResUsageSpma 54
for ResUsageSvpr 141
Database commands
SET LOGTABLE 28
SET RESOURCE 27
SET SUMLOGTABLE 28
Database Window Supervisor
setting logging rates 28
Database Window. See DBW
DBW 27
enabling RSS logging
Depot columns, ResUsageSpma 54
DISABLE LOGONS
effects on logging 34

E
Example
executing a ResCPUByAmp macro 33
ResAmpCpuByGroup macro report 190
ResAWT macro report 186
ResAWTByAMP macro report 186
ResAWTByNode macro report 187
ResCPUByAMP macro report 189
ResCPUByGroup macro report 197
ResCPUByNode macro report 196


ResCPUByPE macro report 193


ResCPUByPEOneNode macro report 193
ResCPUOneNode macro report 196
ResHostByGroup macro report 200
ResHostByLink macro report 199
ResHostOneNode macro report 200
ResLdvByGroup macro report 202, 219
ResLdvByNode macro report 201, 218, 219
ResLdvOneNode macro report 202
ResMemByGroup macro report 207
ResMemMgmtByNode macro report 206
ResMemMgmtOneNode macro report 206
ResNetByGroup macro report 210
ResNetByNode macro report 209
ResNetOneNode macro report 209
ResNode macro report 214
ResNodeByGroup macro report 216
ResNodeByNode macro report 215
ResOneNode macro report 215
ResPeCpuByGroup macro report 194
ResPsByGroup macro report 222
ResPsByNode macro report 222
ResVdskByGroup macro report 225
ResVdskByNode macro report 224
ResVdskOneNode macro report 224
EXECUTE MACRO statement
parameters 32, 33
syntax of 31
Extent driver I/O column, ResUsageSvpr 156

F
File system columns
AutoCylPack 143
BLC 144
cylinder management overhead events 141
cylinder overhead 243
data block merge 143
data block prefetches 53, 113, 135
data segment lock requests 54, 141
Depot 54
FSG Cache Wait 142
FSG I/O 142
MI 142
segment acquired columns 133
segments acquired 52, 112
segments released 52, 114, 136
synchronized full table scans 133
write ahead logging 142
File system columns, ResUsageSpma 52
Formats
using FromDate 32
using FromNode 33
using Node 33


using ToDate 32
using ToNode 33
using ToTime 32
FSG Cache Wait columns, ResUsageSvpr 142
FSG I/O column, ResUsageSvpr 142

G
Gather Buffer 16
General concurrency control database locks columns
for ResUsageSpma 55
for ResUsageSvpr 146
General concurrency monitor management columns,
ResUsageIvpr 248
General format of macros 181
GmtTime 37
Group coordination message columns, ResUsageSpma 59

H
Host communications traffic macros
ResHostByGroup 198
ResHostByLink 198
ResHostOneNode 198
Host controller columns
channel management 82
channel traffic 55, 82
Network traffic 55
user command 82
user command arrival and departure 83
Housekeeping columns
for miscellaneous housekeeping 45, 49, 72, 79, 85, 93, 102,
131, 241
for relational primary index 45, 49, 71, 79, 85, 93, 101, 123,
131, 233, 241
How to re-enable logging 34

I
I/O statistics columns
for ResUsageSpdsk 96
for ResUsageSvdsk 125
Input and output traffic columns, ResUsageSldv 87
Input format for macros 29

L
Log Buffer 16
Logging period 16
Logging rates 27
Logging resource usage tables 20
Logic device columns
input and output traffic 87
outstanding requests 88
response time 87


Logical device information


macros 200
table 85

M
Macros
All-node 29
ByGroup 29, 30
examples of 33
executing 31
input format 29
Multiple-node 29
One-node 29, 30
output format 181
overview 17
syntax for 31
types of 181
Master Index columns. See MI 142
Memory allocation columns
for ResUsageSpma 56
for ResUsageSps 116
for the ResUsageSvpr table 149
Memory availability management columns, ResUsageSpma 56
Memory management by node macros
ResMemByGroup 204
ResMemMgmtByNode 204
ResMemMgmtOneNode 204
Memory resident columns, ResUsageSvpr 146
Merge services columns, ResUsageSpma 60
Message delivery times columns, ResUsageIvpr 250
Message type columns, ResUsageIvpr 249
MI columns, ResUsageSvpr 142
Migration columns
for ResUsageSpdsk 96
for ResUsageSvdsk 126
Miscellaneous housekeeping columns
for ResUsageIvpr 241
for ResUsageSawt 72
for ResUsageScpu 45
for ResUsageShst 79
for ResUsageSldv 85
for ResUsageSpdsk 93
for ResUsageSpma 49
for ResUsageSps 102
for ResUsageSvdsk 123
for ResUsageSvpr 131
Modes of resource usage data
count 42
max 42
min 42
track 42
Monitor WD columns, ResUsageSps 104


Multinode macro, example of 33


MULTISET tables 37

N
Net circuit management columns
for ResUsageSpma 58
Net columns 150
broadcast net traffic 57, 116, 150
bynet network transport data 237
group coordination message 59
merge services 60
message delivery time 237
message delivery times 250
message type 236, 249
net circuit management 58, 237
net controller status and miscellaneous management 58
net miscellaneous contention management 237
net queues 238
network transport data 57
point-to-point net traffic 57, 117, 150
work mailbox queue 150
Net miscellaneous contention columns
for ResUsageIpma 237
Net queues columns, ResUsageIpma 238
Network traffic columns, ResUsageSpma 55
Network transport data columns, ResUsageSpma 57
Node agent columns, ResUsageSvpr 158
Node information
macros 183, 195
view 173
Node network traffic macros
ResMemMgmtByNode 208
ResNetByGroup 208
ResNetOneNode 208
Nonunique primary index 37
NUPI. See nonunique primary index

O
Occasional event data 38
Outstanding requests columns, ResUsageSldv 88
Overall resource usage information. See summary macros

P
Parameters, EXECUTE MACRO
FromDate 32
FromNode 33
MacroNameAllNode 32
MacroNameByGroup 32
MacroNameMultiNode 32
Node 33
ToDate 32
ToNode 33


Partition assignments
listing 259
table convention 258
PE and AMP UDF CPU columns, ResUsageSvpr 154
PE information
macros 192
table 131
view 166
Point-to-point net traffic columns
for ResUsageSpma 57
for ResUsageSps 117
for ResUsageSvpr 150
Priority Scheduler columns, ResUsageSpma 64
Priority Scheduler resource usage 220
Process allocation columns, ResUsageSpma 60
Process block count columns
for ResUsageSpma 61
for ResUsageSps 110
for ResUsageSvpr 155
Process pending snapshot columns, ResUsageSpma 60
Process pending wait time columns
for ResUsageSpma 62
for ResUsageSps 111
for ResUsageSvpr 155
Process scheduling columns 151
chnsignal status tracking 151
CPU utilization 46, 63, 109, 151
cylinder read 151
for CPU utilization 46
PE and AMP UDF CPU 154
pending wait time 111, 155
process allocation 60
process block count 61, 110, 155
process pending snapshot 60
process pending wait time 62
scheduled CPU switching 235
work type summary 243
Purging old resource usage data 35

Q
Question marks, meaning in macro outputs 182

R
Raw disk drive traffic macros 200, 217
ResLdvByGroup 200
ResLdvByNode 200
ResLdvOneNode 200
Relational primary index
and resource usage tables 37
Relational primary index columns 233
for ResUsageIvpr 241
for ResUsageSawt 71
for ResUsageScpu 45


for ResUsageShst 79
for ResUsageSldv 85
for ResUsageSpdsk 93
for ResUsageSps 101
for ResUsageSvdsk 123
for ResUsageSvpr 131
Relational primary index columns, ResUsageSpma 49
ResAmpCpuByGroup macro
what it reports 188
ResAWT macro
what it reports 183
ResAWTByAMP macro
what it reports 183
ResAWTByNode macro
what it reports 183
ResCPUByAMP macro
what it reports 188
ResCPUByAmp macro
example of 33
ResCPUByAMPOneNode macro
what it reports 188
ResCPUByGroup macro
what it reports 195
ResCPUByNode macro
what it reports 195
ResCPUByPE macro
what it reports 192
ResCPUByPEOneNode macro
what it reports 192
ResCPUOneNode macro
what it reports 195
ResCPUUsageByAMPView, definition 164
ResCPUUsageByPEView, definition 166
Reserved columns
for ResUsageSawt 76
for ResUsageScpu 47
for ResUsageShst 83
for ResUsageSpdsk 98
for ResUsageSpma 66
for ResUsageSps 117
for ResUsageSvdsk 127
ResHostByGroup macro
what it reports 198
ResHostByLink macro
what it reports 198
ResHostOneNode macro
what it reports 198
ResIpmaView, definition 254
ResIvprView, definition 256
ResLdvByGroup macro
what it reports 200, 217
ResLdvByNode macro
input format example 200
what it reports 200, 217


ResLdvOneNode macro
what it reports 200, 217
ResMemByGroup macro
what it reports 204
ResMemMgmtByNode macro
what it reports 204
ResMemMgmtOneNode macro
what it reports 204
ResNetByGroup macro
what it reports 208
ResNetByNode macro
what it reports 208
ResNetOneNode macro
what it reports 208
ResNode macro
what it reports 211
ResNodeByGroup macro
what it reports 211
ResOneNode macro
what it reports 211
Resource usage data
and what it covers 15
benefits of 15
deleting old data 35
saving old data 31
Resource usage macros. See macros
Resource usage tables
enabling Summary Mode 28
naming convention 37
primary index 37
reporting Summary Mode 42
types of 20
Resource usage views. See Views
ResPeCpuByGroup macro
what it reports 192
Response time columns, ResUsageSldv 87
ResPs macros 220
ResPsByGroup macro
what it reports 220
ResPsByNode macro
input format example 220
what it reports 220
ResSawtView, definition 168
ResScpuView, definition 169
ResShstView, definition 170
ResSldvView, definition 171
ResSpdskView, definition 172
ResSpmaView, definition 173
ResSpsView, definition 175
ResSvdskView, definition 178
ResSvprView, definition 179
ResUsageIpma table 20
housekeeping columns 233
mode 233, 234, 236, 237, 238, 239


spare columns 240


statistics columns 235
ResUsageIvpr table 20
housekeeping columns 241
mode 241, 243, 244, 245, 246, 247, 248, 249, 250
spare columns 252
statistics columns 243
Summary Mode 251
ResUsageSawt table
housekeeping columns 71
spare columns 76
statistics columns 73
Summary Mode 76
when enabling 20
ResUsageScpu table
reserved columns 47
spare columns 48
statistics columns 46
Summary Mode 47
when enabling 20
ResUsageShst table
housekeeping columns 79
spare columns 84
statistics columns 82
Summary Mode 84
when enabling 20
ResUsageSldv table
housekeeping columns 85
mode 85, 87
spare columns 89
statistics columns 87
Summary Mode 89
when enabling 20
ResUsageSpdsk table
housekeeping columns 93
mode 93, 97
spare columns 98
statistics columns 95
Summary Mode 98
when enabling 20
ResUsageSpma table
for housekeeping columns 49
relational primary index columns 49
spare columns 66
statistics columns 52
Teradata VS columns 65
when enabling 21
ResUsageSps table
housekeeping columns 101
spare columns 117
statistics columns 104
when enabling 21
ResUsageSvdsk table
housekeeping columns 123


mode 123, 124, 125, 126


spare columns 128
statistics columns 125
Summary Mode 128
ResUsageSvpr table
allocation columns 155
file system columns 133
for housekeeping columns 131
memory columns 146
net columns 150
process scheduling columns 151
reserved column 158
spare columns 159
statistics columns 133
Summary Mode 159
when enabling 21
ResVdskByGroup macro
what it reports 223
ResVdskByNode macro
input format example 223
what it reports 223
ResVdskByNode macros
ResVdskByGroup 223
ResVdskByNode 223
ResVdskOneNode 223
ResVdskOneNode macro
what it reports 223
RSS logging
enabling from DBW 27

S
Saving old resource usage data 31
Scheduled CPU switching columns, ResUsageIpma 235
Segments acquired columns
for ResUsageSpma 52
for ResUsageSps 112
for ResUsageSvpr 133
Segments released columns
for ResUsageSpma 52
for ResUsageSps 114
for ResUsageSvpr 136
SET LOGTABLE command 28
SET RESOURCE command 27
SET SUMLOGTABLE command 28
Single-node. See One-Node
SQL statement
EXECUTE MACRO 31
Statistics columns 95, 125
allocation 125
file system 52, 133, 243
general concurrency control database locks 55, 146
general concurrency control monitor management 248
host controller 82


host controller channel and Network traffic 55


logical device 87
memory 56
memory allocation 116
Memory columns 146
migration 126
net 57, 116, 236, 249
process scheduling 60, 109, 235, 243
reserved 47, 66, 76, 83, 117, 127
Reserved columns 98
TASM 64, 73
Teradata VS 65, 95, 125, 155
user command 65
Stopped logging 34
Summary macros 211
ResNode 211
ResNodeByGroup 211
ResNodeByNode 211
ResOneNode 211
Summary Mode
benefits of 22
enabling 23
Synchronized full table scans columns, ResUsageSvpr 133
Syntax for
deleting old resource usage data 35
executing macros 31
System information macros 183

T
Table naming conventions 37
Task context segment usage columns, ResUsageSvpr 149
TASM columns
AMP Worker Task 64, 73, 109
file system 112
Monitor WD 104
Priority Scheduler 64
process scheduling 109
work type descriptions 74
Teradata Active System Management. See TASM columns
Teradata virtual storage. See Teradata VS columns 41
Teradata VS columns 41, 155
allocation 95, 155
extent driver I/O 156
for ResUsageSvdsk 125
I/O statistics 96
Migration 96
node agent 158
Types of resource usage tables 20
Types of statistics reported 39

U
User command columns
arrival 66, 83


departure 66, 83
ResUsageShst 82, 83

V
Views
ResCPUUsageByAMPView 164
ResCPUUsageByPEView 166
ResIpmaView 254
ResIvprView 256
ResSawtView 168
ResScpuView 169
ResShstView 170
ResSldvView 171
ResSpdskView 172
ResSpmaView 173
ResSpsView 175
ResSvdskView 178
ResSvprView 179

W
WAL columns, ResusageSvpr 142
Work mailbox queue columns, ResUsageSvpr 150
Work type descriptions columns, ResUsageSawt 74
Write ahead logging columns. See WAL 142
