Microsoft, Access, Windows, Internet Explorer, Word, Excel, Outlook and Exchange are registered trademarks or trademarks of Microsoft Corporation in the United States and other countries. BusinessObjects and InfoView are trademarks of BusinessObjects. Other third-party product names used herein may be the trademarks of their respective owners.
Cat MineStar and related written manuals and documentation contain copyrighted content owned by third
parties which has been used under license from those third parties. Use of Cat MineStar is subject to the
terms, conditions, limitations and disclaimers that apply to all third party content contained herein.
Information in this document is subject to change without notice. Companies, names and data used as
examples herein are fictitious unless otherwise noted.
Contents
Copyright Notices and Disclaimers 2
Contents 3
Introduction 44
Assumptions 45
Introduction 49
Chapter goals 49
Assumptions 49
Server 51
Client 51
Networking requirements 52
Failover requirements 52
Using Supervisor 53
Platform 53
System 53
General 54
Services 55
Server 58
Database 58
Jetty 60
GIS 60
Reporting 60
Email 61
Advanced 61
System Monitoring 61
Failover 61
Enterprise Backup 61
Auditing 62
Support Uploads 62
Backup 62
Snapshots 63
File Archiving 63
Data Archiving 63
Rman Configuration 63
System directories 64
Base directories 64
Advanced directories 65
General directories 65
Change management 65
Standby Directory 66
Data management 66
Local directories 67
General housekeeping 68
Shared directories 68
Central logging 68
Support communications 69
Web Sites 69
MineStar help 69
Restoration information 70
Miscellaneous shares 70
Disk monitoring 70
Setting up CyclesKpiSummaries 75
Configuring CyclesKpiSummaries 75
Site-specific configuration 76
Configuring Snapshots 77
Starting a Client 82
Site personnel 89
Safety items 91
Message types 92
Mining blocks 92
Configuring Health 93
Polled Channel Config Setup Command (VIMS 3G and TPI only) 109
Introduction 117
Assumptions 117
Introduction 118
Creating a Custom Time Usage Model (TUM) by modifying the KPI Summaries definition file 126
Creating a Custom Time Usage Model (TUM) using the office software KPI configuration tool 132
Prerequisites 132
Header 151
context 152
sourceManager 152
tablePrefix 152
date 152
entity 153
name 153
context 153
type 153
keyProperty 154
name 154
expr 154
type 154
Indexes 154
columns 154
name 155
name 155
type 155
missingValue 155
name 156
if 157
name 157
source 157
dimension 157
keyIfSourceMissing 157
if 158
name 158
lookup 158
expr 158
name 158
expr 159
Example 159
Details 160
if 160
name 160
expr 160
type 160
Measures 160
if 161
name 161
expr 161
unitType 161
category 161
name 162
expr 162
Summaries 162
name 162
fact 162
name 162
dimension 163
name 163
name 163
measure 163
stat 163
if 164
name 164
machineType 164
fact 164
Timestamp 164
Dimensions 164
Measures 165
KPIs 165
KPI 165
name 166
if 166
stat 166
label 166
unitType 166
active 166
rolledupKpi 166
name 167
rollupOf 167
label 167
calculatedMember(s) 167
name 167
if 167
expr 168
label 168
unitType 168
Terms 171
getYear() 172
getHalfYear() 173
getQuarter() 173
getMonth() 173
getWeek() 173
getDay() 173
getShiftName() 173
getShiftType() 173
getCrewId() 173
getShiftStartTime() 173
getShiftEndTime() 173
CycleActivityComponent(cycle.activities) 175
CycleDelay(cycle.allDelays) 177
CycleRoadSegment(cycle.roads) 178
recalcCyclesKpiSummaries 181
chunk 181
wait 181
dimension 182
Scheduling 194
Introduction 205
Security 206
Introduction 215
Guidelines 222
Introduction 227
Auditing 236
Introduction 239
Troubleshooting 255
Configuration 255
Introduction 261
Assumptions 261
Scope 267
Audience 267
Performance 284
Introduction 295
Common events that can impact the performance of the system 302
Frames 304
Lines 305
Boxes 306
Lag 307
Events 309
Notification 312
Caches 314
Graphs 316
Permissions 320
Assignment 326
Blending 326
Health 337
Telemetry 340
Audit 344
Destinations 345
Operator 369
Roads 371
Platform 407
System 407
Environment 417
Events 419
Jobs 421
Logging 421
Explorer 431
Production 444
Overmined 470
Undermined 470
Queues 482
What is the behavior of trucks not allowed to a delayed loading tool or processor 500
What is the behavior of trucks allowed to a delayed loading tool or processor 501
Introduction 513
py files 515
sh files 515
-b 515
-B 515
-c 515
-C 516
-d 516
-D 516
-e path 516
-j 516
-J 516
-pprogressFile 516
-PprofilerFile.jpl 516
-s system 516
-w 516
-W 516
argcheck 517
args 517
argformats 517
argpaths 517
background 517
closeWindow 518
_COUNTRY 518
foreground 518
_LANGUAGE 518
newWindow 518
output 518
passBusUrl 518
_TIMEZONE 518
usage 518
app.name 519
MSTAR_LOGS 519
MSTAR_TRACE 519
MSTAR_TEMP 519
MSTAR_HOME 519
MSTAR_CONFIG 519
MSTAR_ADMIN 519
MSTAR_XML 519
jdk.home 519
user.timezone 519
user.language 520
user.region 520
openorb.home 520
targetName 521
Usage 521
Description 521
Arguments 521
Options 521
Example 521
Notes 521
applySystemOptions 522
Usage 522
Description 522
checkDataStores 522
Usage 522
Description 522
checkUpdates 522
Usage 522
Description 522
checkScheduler 522
Usage 522
Description 522
Notes 522
cleanExpiredData 523
Usage 523
Description 523
Notes 523
cleanExpiredFiles 523
Usage 523
Description 523
Notes 523
createDataStores 523
Usage 523
Description 523
cycleRecalcUtility 524
Usage 524
Description 524
emptyDataStore 524
Usage 524
Description 524
Arguments 524
exportBIAR 524
Description 524
exportDataStores 525
Usage 525
Description 525
Options 525
-Z 525
-d 525
Arguments 525
Example 525
exportDataToXml 525
Usage 525
Description 525
Options 525
-d 525
-a 526
-o 526
FieldStats 527
Usage 527
Description 527
Options 527
-l 527
-d 527
-r 527
-n 527
-s 527
-u 527
-N 527
-M 527
-L 528
-D 528
-H 528
-T 528
-W 528
-U 528
-a 528
-h 528
-b 528
-x 528
-y 528
FileConverter 529
Usage 529
Description 529
Options 529
-OutputFile 529
-outputFileName 529
grabOnboardDiagnostics 529
Usage 529
Description 529
grabState 529
Usage 529
Description 529
GwmExport 530
Usage 530
Description 530
Options 530
-c 530
-b 530
-o 530
import 530
Usage 530
Description 530
Arguments 531
ImportDataFromXml 531
Usage 531
Description 531
Options 531
-f 531
-e 531
-i 531
-d 531
-p 531
-k 532
importExportedData 532
Usage 532
Description 532
initialiseTpiMachines 532
Usage 532
Description 532
Options 532
initProdSystemFromSnapshot 532
Usage 532
Description 533
Options 533
-a 533
-d 533
-e 533
initStandbyDbFromSnapshot 533
Usage 533
Description 533
Options 533
-a 533
Arguments 533
initSystemFromSnapshot 534
Usage 534
Description 534
Arguments 534
inspectModel 534
Usage 534
Description 534
listClients 534
Usage 534
Description 534
logConfigurationsEditor 534
Usage 534
Description 535
logspeedo 535
Usage 535
Description 535
Options 535
-png 535
-0 535
Notes 535
logspeedo2 535
Usage 535
Description 536
logspeedo3 536
Usage 536
Description 536
makeCatalogs 536
Usage 536
Description 536
Options 536
-b 536
Arguments 537
Example 537
makeDataStores 537
Usage 537
Description 538
Options 538
-q 538
-major or -m 538
-ConsistencyChecker 538
-verboseSchema 539
-riSchema 539
-verboseViews 539
-warnWhenShorteningViews 539
-skipViews 539
-skipHistoricalLookups 539
-skipUniverse 539
-skipReportingMetadata 539
-skipSummary 539
-numericsAsMeasures 539
-health 539
-legacyHealth 539
-Targets 540
Arguments 540
Example 540
Description 542
makeScheduledTasks 543
Usage 543
Description 543
Arguments 543
Options 543
Example 543
makeShortcuts 544
Usage 544
Description 544
makeSystem 544
Usage 544
Arguments 544
Options 544
-u 545
-k 545
-f 545
-v 545
--version 545
-h 545
migrateStandbyDataToProduction 545
Usage 545
Description 545
Option 545
-e 545
-n 545
printClassPath 546
Usage 546
Description 546
printPatches 546
Usage 546
Description 546
printSystemProperties 546
Usage 546
Description 546
profileTraceFiles 546
Usage 546
Description 546
Options 546
recentTravelTimes 547
Usage 547
Description 547
Options 547
<Days> 547
refreshBuild 547
Usage 547
Description 547
replaceDataStoresWithModel 547
Usage 547
Description 547
Options 548
<modelDumpFile> 548
<pitmodelDumpFile> 548
replaceDataStoresWithXml 548
Usage 548
Description 548
Options 548
<filename> 548
runSQLTrace 548
Usage 548
Description 548
Options 548
dataStore 548
sendAllToSupport 549
Usage 549
Description 549
Options 549
FTP 549
MSG 549
ATT 549
Example 549
sendCommand 549
Usage 549
Description 549
Options 549
sendLogging 550
Usage 550
Description 550
Options 550
showDbConnections 550
Usage 550
Description 551
Arguments 551
Example 551
snapshotDb 551
Usage 551
Description 551
Options 551
Example 551
snapshotOs 552
Usage 552
Description 552
Options 552
Example 552
snapshotSystem 552
Usage 552
Description 552
Options 553
Example 553
startScheduler 553
Usage 553
Description 553
Notes 553
switchActiveDatabase 553
Usage 553
Description 553
Notes 554
syncCycles 554
Usage 554
Description 554
Targets 554
syncStandbyInformation 555
Usage 555
Description 555
updateMaterialGroup 555
Usage 555
Description 555
Options 556
validateKpiSummaries 556
Usage 556
Examples 556
uploadHistory 557
Usage 557
Description 557
Options 557
validateWaypoints 557
Usage 557
Description 557
Options 557
This manual also covers topics such as troubleshooting, failover and recovery, database backups, and the correct methods for deleting aged data. The final chapter is devoted to the command line tools available for Fleet, how they apply to the office software, and how you can extend these tools and even develop your own.
This manual also covers some topics of interest for on-site system administration personnel, discussing security, permissions, network issues and other aspects related to the everyday operation of the office software.
Assumptions
Throughout the manual, it is assumed that you have a detailed knowledge of the following:
Only Fleet Consultants, Builders, or other suitably qualified personnel should make changes
to the settings in Supervisor.
Some chapters may assume you have knowledge specific to that chapter. These assumptions are described
at the beginning of those chapters.
In addition, each site is responsible for ensuring that adequate anti-virus measures are in place and updated
regularly. On-site information technology personnel should work with appropriate Fleet Consultants to ensure
that the selected anti-virus solution will not adversely affect the operation of the office software.
Any virus scanning and tape backup solution MUST exclude 'oradata' directories on the database server.
Introduction
Before you start Fleet for the first time, there are some essential configuration steps that you need to follow. You need to specify the names of the Application and Database Servers and the default ports to use, create the required database instances, and specify the directories to use for configuration and temporary files. You also need to specify how the Fleet Clients connect to the server.
Much of the default configuration information is suitable for a variety of sites and does not need to be changed.
If your installation is non-standard, however, there may be further changes required before you can start the
office software.
When the initial configuration is complete, you need to create the mine model. This includes adding all of the
site travel network information, such as waypoints, destinations, road segments, etc., the grade and material
information and the details of the field equipment that the office software needs to track.
This chapter describes the initial configuration steps and how to perform them, how to start the office software
Server and how to connect to the server with a client.
The process of creating the mine model is described in outline only; the actual procedures involved are covered in the Fleet User Manual.
Chapter goals
By the end of this chapter, you should:
Assumptions
As well as the assumptions listed in the Introduction chapter, the instructions in this chapter assume a standard Fleet installation, and that all software installation steps described in the Fleet Install Manual were completed successfully.
Because it is a distributed system, Fleet relies heavily on being able to pass traffic over the network. It is therefore assumed that the on-site LAN is effective and is not a limiting factor in deciding which files or directories to share.
The instructions in this chapter also assume that the Application server has an \mstarFiles directory
which can be exposed as a network share.
Server
The server should be configured with virtual memory (paging file) settings of 1.5 times the physical memory on
the server. The paging file should only be located on the C: drive.
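The 1.5 × physical-memory sizing rule can be expressed as a small helper (an illustrative Python sketch; the function name and the ceiling rounding are assumptions, not part of the product):

```python
def recommended_paging_file_mb(physical_memory_mb: int) -> int:
    """Recommended paging file size: 1.5 times the physical memory.

    Rounds up to a whole megabyte so the result never falls below
    the 1.5x rule. (Illustrative helper only.)
    """
    return -(-physical_memory_mb * 3 // 2)  # integer ceiling of x * 1.5
```

For example, a server with 16 GB (16,384 MB) of physical memory would be configured with a paging file of about 24,576 MB.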
Performance options should remain with the default settings, which allocate processor resources to background services.
The office software operates with Java 1.6, and will install a copy for its own use. If you are installing another
version of Java, it must be compatible with the office software requirements. Check the Java_Home directory
to see which version you are running.
The office software, as currently configured, does not support Java 1.7, and it should not be
installed, used or configured.
Client
The client should be configured with virtual memory (paging file) settings of 1.5 times the physical memory on
the client. The paging file should only be located on the C: drive.
Performance options should remain with the default settings, which allocate processor resources to Programs.
Networking requirements
The Application and Standby servers should each be equipped with one Network Interface Controller (NIC), which handles all communications. If you are using two NICs, they should be connected to different networks, such as the Field and the Office.
Failover requirements
If failover to the Standby server is required, the following requirements apply.
If the servers are maintained and managed by your internal IT group, it is critical that this group be involved in the failover planning, as they may be required to provide the IP address switching.
Only Fleet Consultants, Builders, or other suitably qualified personnel should make changes
to the settings in Supervisor.
Most of the procedures throughout this manual rely on the use of Supervisor. This is the standard interface for most administration and configuration tasks within the office software. Unless otherwise specified, all procedures in this manual are performed using Supervisor.
To start Supervisor
Double-click the Supervisor icon on the desktop, or at a command prompt, type mstarrun supervisor.
This section describes the requirements and procedures for configuring the essential aspects of the Application and Database Servers in order to achieve a viable environment. Depending on the requirements of your site, further configuration may be necessary. See the Advanced Fleet Configuration chapter for further information.
Platform
In Supervisor, click Options, then click System Options. From the Product List, select Platform.
You use the Platform option sets to specify the names of servers, the port numbers to use, database instance and user names, and other site-specific information. If this information is incorrect, the office software will not function correctly, or data may be displayed incorrectly. You also use these options to specify logging, auditing and other options.
For full details of each of the tabs and fields in this option set, refer to the Supervisor Page Reference chapter.
Some initial configuration details are provided below.
System
You use the System option set to specify the core configuration options for the office software, as described
below.
General
Use the General tab to specify general site identification details and location-specific information, such as the
unit set to use and the time zone you are in.
NOTE: The Time Zone field is an editable field. If the appropriate time zone does not appear in the
list, you can type it in directly.
• miningSI
• miningImperial
<units>
  <unitset>
    ...
  </unitset>
</units>
Once you have done this, the General tab in the System option set will show miningSurveyFeet in the Unit Set drop-down list, as per the screenshot below.
Services
You can use the Services tab to specify which server components to include and exclude during startup, and
the order in which they should start.
5. Select the mode you wish to start the Windows services in, either Automatic or Manual (as per the
screenshot above).
6. If you wish to configure the timeout for each service, change the value in each of the timeout fields.
If you clear the Run as Windows Services check box and click Apply, the following message displays.
7. Click Yes to continue. You can go back into this tab, select the check box, click Apply, and the services will be reinstalled.
When the server components are installed to run as Window services, only the Start Services (main) icon and
Stop Services (main) icon are shown on the desktop. Starting and stopping the server components in the
required order is automatically controlled by Windows.
Windows Services are configured using files in the {MSTAR_TEMP} directory. These configuration files are generated when the decision is made to use Windows Services. They are not regenerated each time the service starts, unlike the mstarrun targets, where the configuration is generated on the fly as the target starts.
1. If there is a need to regenerate the configuration files the Windows Services use, run the following mstarrun targets:
mstarrun windowsServices remove
2. Alternatively, run Supervisor and navigate to the screen shown in the screenshot above.
3. Clear the Run as Windows Services check box, click Apply, select the Run as Windows Services check box, and click Apply again.
1. Click Start > Control Panel > Administrative Tools > Services.
2. You will find Fleet services in the list, prefixed with M*, e.g. M*CycleGenerator.
Server
Use the Server tab to specify the Application server, IP address, and middleware details.
Database
Use the Database tab to specify the database instance names and user names. You need to consider the
requirements of your site when specifying these options. This section describes a typical setup using two
database instances and the default user names.
1. Start Supervisor, and on the Contents menu, point to Setup and then click System Options.
2. From the Product list box, click Platform, then in the Option Sets list, select System.
3. Click the Database tab, and specify the Database Server name in the Database Server field.
NOTE: Refer to the Supervisor Page Reference chapter ("Platform" on page 407) for full descriptions of the field names on this tab.
4. Specify the required database instance name for each of the Primary and Secondary instance fields
as shown in the screenshot below. The default selection is MINESTAR for all instances.
NOTE: You should not use the characters * . " / \ [ ] : ; | = , in your server names. A hyphen may be
acceptable depending on your platform.
5. Specify the name prefix for the user names and passwords which will be specified in the next step. If
the prefix is ms and the user name is model, then the real user name will be msmodel.
This does not apply to the Read Only user, or the optional Aquila and CAES databases.
6. Specify the user names and passwords for each of the datastores.
By default, the office software uses the naming scheme shown in the following table.
You can change the names and passwords if required, but it is recommended that you keep the default values.
NOTE: The Model, Historical, Summaries, and Template databases only require a single instance.
You can still deploy the Reporting, CAES, and Aquila databases to a second instance if required. This
is the default configuration.
7. You can set a Read Only User by entering a read-only user name and password in the relevant fields. These fields default to msread.
Setting the read-only user name and password gives the user read-only access to tables and views in the Historical database.
8. Click Apply to save the configuration. This is the information that createDataStores uses when creating the datastores.
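The prefixing behaviour from step 5 can be sketched as follows (a hypothetical helper; the exemption list reflects the note above about the Read Only user and the optional Aquila and CAES databases):

```python
# Datastores whose user names are used as-is, without the site prefix
# (per the note in step 5 above).
UNPREFIXED_DATASTORES = {"readonly", "aquila", "caes"}

def real_user_name(prefix: str, user_name: str, datastore: str) -> str:
    """Apply the site name prefix to a datastore user name.

    e.g. prefix "ms" + user name "model" -> real user name "msmodel".
    """
    if datastore.lower() in UNPREFIXED_DATASTORES:
        return user_name
    return prefix + user_name
```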
Jetty
Use the Jetty tab to specify the home location for Jetty.
GIS
Use the GIS tab to specify the home location for GeoServer.
Reporting
Use the Reporting tab to specify the reporting options used by the office software, such as the URL for the
reporting cache, the default display formats, etc.
Email
Use the Email tab to configure the office software's email capabilities. If supported by the site infrastructure, the office software can send email internally and externally.
Advanced
Configuration options on this tab are used for development purposes only.
Use the Enterprise Extensions option set to specify system monitoring, failover, enterprise backup and
auditing options.
System Monitoring
The office software creates a range of snapshots to assist in system monitoring and problem diagnosis. Use
this tab to specify the retention periods for each of the snapshot types.
Failover
In order to support failover in case of disaster, the office software creates regular system snapshots and system updates. Use this tab to specify:
• Snapshot Frequency
Select from the list how often you wish to snapshot the live system. The default is every hour.
• Standby System Update Frequency
Select from the list how often you wish to update the standby system configuration with a copy from the live system. The default is every 15 minutes.
NOTE: If you make changes to either frequency, you need to run makeScheduledTasks before the
changes take effect. See Setting up scheduled tasks for more information.
Enterprise Backup
Some sites can support the integration of data exports with their local tape backup system. If your site supports this, select this check box to automatically back up the data exports to tape.
Any virus scanning and tape backup solution MUST exclude 'oradata' directories on the database server.
NOTE: This functionality is limited. See Configuring tape backups for more information.
Auditing
Use the Auditing tab to specify which topics you want to audit in the Client Interface. This is a troubleshooting feature only, and should be disabled under normal operating conditions.
The configuration options in these option sets relate to low-level office software functions, and should only be
changed by Fleet Consultants, or on advice from Fleet Customer Support. You need to be in Expert Mode to
view this option.
Use the Workgroup Extensions option set to specify options relating to backup frequency, which files to
include in backup sets, and how long to keep data files before they can be deleted.
Support Uploads
Use the Support Upload tab to specify how and when the office software should send information to Fleet
Customer Support.
Backup
Use the Backup tab to specify the schedule and retention periods for Fleet backups. You should spread the backups and other CPU- and network-intensive tasks throughout the day to avoid overloading the system.
Snapshots
Use the Snapshots tab to specify the schedule and frequency of System and Operating System snapshots,
as well as the lookback hours for the different types of snapshot. This determines how far back in time the
office software searches for files to include in snapshots.You can also enable the Client-Initiated Server Snap-
shot functionality, "To configure Fleet and operating system snapshots" on page 77in the "Configuring Snap-
shots" on page 77"Configuring Snapshots" on page 77section.
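The lookback-hours behaviour described above can be illustrated with a small sketch (a hypothetical helper, not product code): it selects the files a snapshot would consider by comparing modification times against the lookback window.

```python
import os
import time

def files_within_lookback(directory: str, lookback_hours: float) -> list[str]:
    """Return the files under `directory` modified within the last
    `lookback_hours` hours -- the candidates a snapshot would include."""
    cutoff = time.time() - lookback_hours * 3600
    selected = []
    for root, _subdirs, names in os.walk(directory):
        for name in names:
            path = os.path.join(root, name)
            if os.path.getmtime(path) >= cutoff:
                selected.append(path)
    return selected
```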
File Archiving
Use the File Archiving tab to specify the archiving schedule and retention periods for generated files.
Data Archiving
Use the Data Archiving tab to specify the archiving schedule and retention periods for different types of data.
You can specify retention periods and the types of data to delete for the short, medium and long terms.
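The tiered retention idea can be illustrated as follows (the short/medium/long-term tiers come from the text; the example periods and the helper itself are assumptions for illustration only, since real values are set on the Data Archiving tab):

```python
from datetime import datetime, timedelta

# Hypothetical retention periods per tier, in days.
RETENTION_DAYS = {"short": 30, "medium": 90, "long": 365}

def is_expired(created: datetime, tier: str, now: datetime) -> bool:
    """True when data in the given tier has outlived its retention
    period and is therefore eligible for deletion."""
    return now - created > timedelta(days=RETENTION_DAYS[tier])
```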
Rman Configuration
Use the Rman Configuration tab to specify the backup schedule times for Oracle Rman. The settings on this
tab should only be changed by an experienced database administrator.
Use the Purge Recycle Bin Configuration tab to specify the frequency of the recycle bin purging. The settings
on this tab should only be changed by an experienced database administrator.
You use the Platform – Clients option sets to configure the behavior, appearance and other attributes of the clients that connect to the office software. For full details of each of the tabs and fields in these option sets, see "Supervisor page reference" on page 294. Some initial configuration details are provided below.
When configuring system directories using SQL Server, all folders that are being used to export data must be shared with read and write access for the network users.
System directories
To configure system directories
1. Start Supervisor on the machine where you want to configure the directories, for example the Application server or the Database Server.
2. On the Options menu, click System Directories.
3. Enter the appropriate details as described in the following sections.
Base directories
The office software uses two Base Directories as the starting point to define the directories that it uses for configuration, maintenance and support purposes. Unless otherwise specified, all directories used by the office software are relative to one of these two directories.
Local Base Directory
This is a directory local to the current machine, and is the basis for all other directories that need to exist on the local machine, such as log and trace directories.
d:\mstarFiles\systems\<systemName>
Central Base Directory
On the Application server, this is typically a local directory, which is shared for all other servers and clients to use. The Central Base Directory on other servers and on the Mine Controller machines, for example, should all be configured to use the Central Base Directory on the Application server.
The Backup Server should have a local Central Base Directory, and not share the one on the Application
server.
The default path for this directory on the Application server is:
d:\mstarFiles\systems\<systemName>
NOTE: The drive letter you use (e.g. D:) may be different depending on which drive you installed the
office software on.
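The relative-directory convention above can be sketched as a simple placeholder expansion (illustrative only; the base paths below are example values, and the real product resolves these through Supervisor):

```python
# Example base directories (assumed values for illustration).
BASES = {
    "MSTAR_BASE_LOCAL": "d:\\mstarFiles\\systems\\prod",
    "MSTAR_BASE_CENTRAL": "\\\\appserver\\mstarFiles\\systems\\prod",
}

def resolve(template: str) -> str:
    """Expand {MSTAR_BASE_LOCAL} / {MSTAR_BASE_CENTRAL} placeholders,
    as used in the directory paths throughout this chapter."""
    for name, base in BASES.items():
        template = template.replace("{" + name + "}", base)
    return template
```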
Advanced directories
The office software uses a number of additional directories for general administrative and support purposes.
These are typically relative to either the Local Base Directory or the Central Base Directory, with one or
two exceptions.
General directories
Change management
The office software uses two specific directories to store information about changes made to the default configuration settings, and about the patches and service packs that are available for the base installation. This makes system-wide updates and configuration changes easier.
This is a shared directory on the Application server. The files in this directory contain customizations of the
default office software settings.
{MSTAR_BASE_CENTRAL}/config
This is a shared directory on the Application server. This directory contains the patches and service packs
that are occasionally released to address issues with the base installation.
{MSTAR_BASE_CENTRAL}/updates
Standby Directory
If a standby application server is configured, a copy of all of the necessary application configuration data files
will be duplicated here so that a standby application server can be started.
Data management
In order to properly maintain and support your system, certain files are periodically sent to Fleet Customer Support for analysis. Further, periodic database exports are taken and stored in specified directories.
The existence of a patch in this directory does not enable the patch. You need to use the Software Updates
page in Supervisor to install and enable updates.
Each time mstarrun starts a Fleet program, it checks the updates directory and
MineStar.overrides, expands a local copy of the patch to {MSTAR_TEMP} and adjusts its internal
search paths to consider the enabled patches.
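That search-path adjustment can be illustrated as follows (a simplification; the real mstarrun logic is more involved, and the path layout below is an assumption):

```python
def patched_search_path(base_paths: list[str], enabled_patches: list[str],
                        temp_dir: str) -> list[str]:
    """Put the locally expanded patch copies ahead of the base
    installation paths, so files from enabled patches take precedence."""
    patch_paths = [f"{temp_dir}/patches/{patch}" for patch in enabled_patches]
    return patch_paths + base_paths
```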
Part of the ongoing maintenance of your system is to send updated files to field equipment. These are created in Fleet using the Data Transfer Assistant. The files created using this process are stored in the directory specified here. This can be a shared directory on any server.
{MSTAR_BASE_CENTRAL}/onboard
The office software periodically creates database exports as .dmp files. These are then zipped and stored in
this directory.
On the Application and Database Servers, this directory should be local. Client machines should share the
{MSTAR_DATA} path specified on the Application server so that all Fleet clients can access the same
TravelTimesData file generated by the recentTravelTimes task.
{MSTAR_BASE_CENTRAL}/data
The remote database directory to which the snapshot database will be exported.
On systems deployed with MS SQL Server, this directory must reference a shared path (\\MachineName\FolderName) with read and write access for the database service account and the office software. This shared path will be used by the Database Server, where database exports will be written before they are copied across to the Application Server.
As a further precaution, the database export zip files can be copied to a backup directory. The default configuration is to not enable this feature. If this directory is configured, it should be on the Standby Server.
Standby snapshots and zipped copies of database exports are copied to this directory.
Specifies the directory to which Fluid Analysis files are saved in subdirectories.
Local directories
In order to track system operation and to provide a history of operations should there be operational issues,
the office software keeps a number of log files in various directories.
This is a local directory on each computer that runs either a Fleet Server or a Client. Log files record significant events in the operation of a deployed Fleet system.
{MSTAR_BASE_LOCAL}/logs
This is a local directory on each computer that runs either an office software Server or Client. Trace files can be created to varying depths of information, and contain information that is useful for debugging purposes. These files are rarely of use to the system owner.
{MSTAR_BASE_LOCAL}/trace
General housekeeping
The office software uses a number of specific directories for temporary and administration files used during everyday operation. Each computer must have its own local directories for these purposes; shared directories cannot be used.
This is a local directory on each computer where you run either an office software Server application or a Client.
{MSTAR_BASE_LOCAL}/tmp
This is a local directory on each computer where you run either an office software Server application or a Client.
{MSTAR_BASE_LOCAL}/admin
Shared directories
Central logging
This directory is configured on all machines except the Application server. This is a shared directory on the
Application server, namely, the mstar_logs directory.
When a snapshot is initiated, the process captures logs from both the mstar_logs and mstar_add_logs
directories. This means that log files from both the server and the client where the snapshot was initiated are
included.
{MSTAR_BASE_CENTRAL}/logs
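The capture step described above can be sketched as follows (a hypothetical helper, not the product's snapshot code): it bundles every file from both log directories into one archive.

```python
import os
import zipfile

def capture_logs(zip_path: str, *log_dirs: str) -> int:
    """Add all files from each log directory (e.g. mstar_logs and
    mstar_add_logs) to a single snapshot zip. Returns the file count."""
    count = 0
    with zipfile.ZipFile(zip_path, "w") as archive:
        for log_dir in log_dirs:
            for root, _subdirs, names in os.walk(log_dir):
                for name in names:
                    full = os.path.join(root, name)
                    archive.write(full, arcname=os.path.relpath(full, log_dir))
                    count += 1
    return count
```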
This is a shared directory on the Application server. The files in this directory contain actual user data which is
of value for diagnosis and monitoring of the system, and may be helpful in reconstructing a lost system state.
{MSTAR_BASE_CENTRAL}/messages
Support communications
This is a shared directory on the Application server. This directory contains the files that need to be sent to Fleet Customer Support.
{MSTAR_BASE_CENTRAL}/outgoing
This is a shared directory on the Application server. This directory contains the files that have been successfully sent to Fleet Customer Support. It is assumed that the Application server has FTP capabilities; if not, this directory should be located on a computer that does.
{MSTAR_BASE_CENTRAL}/sent
Web Sites
MineStar help
This is a shared directory on the Application server. This directory hosts the Fleet Help Website, providing access to the PDF files that comprise the Fleet Documentation Set.
{MSTAR_BASE_CENTRAL}/help
This is a shared directory on the Application server. This directory hosts the Fleet Reporting Website, providing access to the published reports that comprise the Fleet Report Set.
{MSTAR_BASE_CENTRAL}/help
Restoration information
A network share is also needed on the Standby Server so that the Application server can periodically copy the
restore information to that location. This share should be called \MstarBackup or \Backup and be mapped to a
drive with sufficient spare disk capacity.
To maximize network performance you should use substituted drives rather than mapped drives wherever possible.
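For example, the backup share can be attached with standard Windows commands (the server and share names below are placeholders, not values from this manual):

```
rem Map the Standby Server backup share to a drive letter:
net use B: \\StandbyServer\MstarBackup /persistent:yes

rem For a path on a local disk, a substituted drive avoids the
rem network redirector overhead:
subst S: D:\MstarBackup
```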
Miscellaneous shares
Other network locations may be necessary based on local site integration needs; for example, a network share may be exposed on the Application server so that other control applications can copy files to that location.
Disk monitoring
You can specify disk volumes to monitor to help ensure that the office software always has sufficient space
for optimal performance. Disk space is a major consideration and insufficient space is one of the primary
causes of reduced performance or even failure.
4. Ensure that the volumes you want to monitor are listed in the Include list, and then click OK. For
example, on the Application server, this could be volumes C and D, while on a Database server it could
be volumes D, E and F.
5. Click Apply to apply the changes.
Databases are exported periodically according to the schedule established by the Windows Task Scheduler on the Database server. Database exports are quite large, so the drive where these exports are stored should have plenty of free space. For performance reasons, the directory where these files are stored ({MSTAR_DATA}) should be local to the computer performing the export.
If these exports are configured on the Application server, the same space and directory considerations apply.
That is, the directory should be local and the drive should have sufficient space. This is in part because it is
used for Cycle Cache Recovery files, which are read when starting the office software. Performance could be
adversely affected if these files are not available locally.
Do not compress the Oracle data drive or directory. Directory compression can give a wrong indication of available free space, cause CPU overhead, and lead to serious disk fragmentation.
If free space becomes a concern for data storage, the site should immediately begin further capacity planning and upgrade to larger drives.
You need to run createDataStores on the Database Server, and this requires that Oracle be correctly installed
and configured. The installation of Oracle is covered in the Fleet Install Manual.
You also require administrator privileges to run createDataStores. Refer to the section on User Accounts
and Permissions in the Fleet Install Manual to ensure that the correct domain users and groups have been
established.
At least two arguments are required for createDataStores: the database file size, and the Oracle home drive.
A third argument, additional data drives, can be specified if the Oracle datafiles are to be split over multiple
drives.
You can specify either LAPTOP or SERVER for the database file sizing. The LAPTOP specification creates 100 MB database files; the SERVER specification creates 1 GB database files.
The Oracle home drive is where the Oracle admin directories and system datafiles are created.
When you have determined the correct details for your particular installation, you can run the createDataStores application. This is a command-line application; there is no facility for running createDataStores from within the office software GUI.
l Open a command shell, and from your mstar\mstarHome\bus\bin directory run createDataStores with the arguments determined above.
For example, on a server machine where the Oracle admin drive is D: and where two additional drives are
required:
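A hedged illustration of such a command line follows; the argument syntax here is an assumption based on the description above, and the drive letters are examples only:

```
mstarrun createDataStores SERVER D: E: F:
```

This would create 1 GB database files, place the Oracle admin directories and system datafiles on D:, and split the datafiles across the two additional drives E: and F:.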
createDataStores performs the following functions, using the information specified in Supervisor and the arguments entered at the command line:
When you have successfully created the datastores, you need to perform the following steps to complete the
Oracle and database configuration.
1. In Microsoft Windows, click Start, point to Settings and then click Control Panel.
2. Double-click Administrative Tools and then double-click Services.
3. Locate the Oracle Listener Service for your system, typically called OracleOraDb10g_
home1TNSListener and set it to Start Automatically.
4. Start the Oracle Listener Service.
The tnsnames.ora file
The tnsnames.ora file defines the database services that a client can connect to. It works in concert with the listener.ora file on the Database Server: the listener defines the service for a particular host, whereas the tnsnames.ora file exposes network services to clients for one or more databases. The tnsnames.ora file is typically required on the Reporting Server and the Builder Workstation, as it provides Oracle with the details it needs to connect a client to the database.
1. Ensure that Oracle is installed on the Reporting server and the Builder workstation.
2. On the Database Server, navigate to the directory where you installed Oracle.
The default directory is \mstarOracle
3. In the Oracle directory, navigate to \product\11.2.0\dbhome_1
4. Locate the tnsnames.ora file in {OracleHome}\Network\Admin and copy it.
5. Paste the tnsnames.ora file into the Oracle directory on both the Reporting Server and the Builder Workstation.
The default directory is \dbhome_1\network\admin
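A typical tnsnames.ora entry has the following shape; the net service name, host and service name below are examples only, and your file will contain the values generated for your site:

```
MINESTAR =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbserver)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = minestar)
    )
  )
```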
Setting up CyclesKpiSummaries
CyclesKpiSummaries is a standard Fleet service; it is the part of the Production sub-system that writes records to the Fact and Dimension tables. Fleet ships with a standard library of KPI definition files, but you can provide custom files that modify and supplement the standard definitions to suit your site's requirements.
Configuring CyclesKpiSummaries
This step can be omitted if you are going to use just the standard definition files or if you choose to postpone
creating your custom files and updating the configuration until later.
To configure CyclesKpiSummaries
1. Place your custom file(s) in the mstarFiles\systems\<systemName>\config\xml\cycles directory.
2. Start Supervisor and, on the Options menu, click System Options.
3. In the Product list, choose Production, then click KpiSummaries Configuration.
4. On the Cycles Definition Files tab, in the Custom Definition File(s) text box, enter the names of your custom files, one per line.
5. Click Apply.
The makeDataStores utility uses the standard and custom definition files to generate the database schema
required and create the necessary database objects.
If both steps execute without error and Fleet is started, CyclesKpiSummaries should start without error and
populate the Fact and Dimension tables as the system runs.
For detailed information about CyclesKpiSummaries, including how to create custom definition files to suit
your site's requirements and how to update the configuration, refer to Advanced Fleet Configuration.
Site-specific configuration
At this point you should perform any site-specific configuration tasks.
Configuring Snapshots
Use Supervisor to configure snapshots. You can optimize your system’s performance by scheduling snapshots to run at the most appropriate time.
The Client-Initiated Server Snapshots check box is NOT selected by default, and should be
enabled by either your Fleet Customer Support team, or your site administrator.
Selecting this check box, and then initiating a User snapshot on the Client, also initiates a User snapshot on the Server. A log message in MineTracking confirms this. If you initiate another snapshot within five minutes, the Server outputs a message indicating that the previous snapshot was too recent. If MineTracking cannot be contacted, the attempt to invoke a Server snapshot is abandoned after 20 seconds so that the Client snapshot does not hang.
This functionality is unused while the Client-Initiated Server Snapshots check box is not selected. If you decide you want to use this option, you can enable the functionality in Supervisor without restarting the MineStar services.
4. Click Apply.
The office software uses Windows Task Scheduler to schedule the tasks required for each machine. Refer to
the Windows Task Scheduler chapter in the Fleet Install manual for details.
The office software creates the following scheduled tasks for an Application server:
l cleanExpiredFiles
l snapshotSystem
l snapshotOS
l snapshotSystemStandby
Database server
The office software creates the following scheduled tasks for a Database Server:
l exportDataStores
l snapshotSystem
l snapshotOs
l cleanExpiredFiles
l cleanExpiredData
The office software creates the following scheduled tasks for a Client:
l snapshotSystem
l snapshotOs
l cleanExpiredFiles
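You can confirm that these tasks were created on each machine with the Windows Task Scheduler command line; the task name follows the lists above, and the exact name on your system may differ:

```
schtasks /Query /TN snapshotSystem /V /FO LIST
```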
This section describes how to use Supervisor to configure Clients. The actual installation of the Fleet software on the client machines is covered in the Fleet Install Manual, and is not addressed here.
To work around this, you need to add the Client application as an exception in your Windows Firewall configuration.
3. In the Product list, choose Platform – Clients, then in the Option Sets list click Explorer.
4. Use the Taskbar, Appearance and Behavior tabs to specify the default behavior of all Clients, including Supervisor.
5. In the Option Sets list, click Explorer – Client to specify the default behavior for clients.
6. If necessary, specify the default behavior for each of the other options, and then click Apply to save
the settings.
The process for configuring local options is the same as for global options, but is carried out on the machine
where the client is going to run. The options that you specify here only affect clients on that machine; clients
on other machines use either the global options or any local options that have been configured.
1. Ensure that both the Application and Database Servers have booted correctly, and that Oracle is available. To test that Oracle is available, run the following command from the Application server:
mstarrun checkDataStores
This checks that the database configuration is valid.
2. On the Application server, log in using the preferred administrative domain account.
3. Navigate to \systems\<systemName>\shortcuts\ServerDesktop in the \mstar\mstarFiles directory.
This directory contains the required shortcuts for starting the system. If you have a standard configuration, this directory should contain the following shortcuts:
l Start Services.
l Stop Services.
It is a good idea to make a copy of this directory on the desktop for ease of access.
Starting a Client
When MineTracking has started successfully, you should test that you can connect with a client. You can do this from any machine that is required to run a Client.
To start a Client
1. Double-click the Client shortcut on the desktop. The Client Splash Screen and Login dialog box
appear.
2. Enter your username and password. If you do not know your username and password, contact your
Fleet site champion or consultant.
3. If necessary, enter the system that you want to connect to.
A properly configured client should display the correct details for the Application server.
4. In the Desktop field, choose the desktop that you want to use, or leave this field blank for a single
instance of the Client.
5. Click OK to log in and display the Client.
NOTE: The mine model may have been created before the implementation of your system.
The mine model is the office software’s view of your mine at a point in time. It includes all of the destinations,
equipment, personnel, materials, grades, etc., that make up your mine. In order to function correctly, the office
software needs an accurate picture of your mine, and so it is important that the mine model be correct. Before
you can use the office software for the first time, you need to provide an initial model to work with.
NOTE: The mine model should be built using either the Builder or Maintenance Engineer login.
The following sections describe the process of building the mine model. Only some of these processes are performed in Supervisor. You should refer to the Fleet User Manual for the actual procedures and other information for performing each step. A summary of the build process is illustrated below.
[Build process flow diagram: Create Site Travel Network → Create Materials and Grades → Create Machines and Fleets → Create Machine States and Job Codes → Create Personnel and Users → Create Rosters and Shifts → Create Safety Items → Create Delay Categories and Types]
A further component of the site travel network is the Global Positioning System (GPS), which provides the
absolute position of machines. The office software uses this information to determine the position of
machines in the mine, based on survey information.
Import a .dxf file into the site map to begin creating your mine model.
[Machine creation flow diagram: Create Trucks → Create Shovels → Create Processors → Create Auxiliary Equipment → Create Fleets → Create Transport Vehicles → Create Activities → Create Job Codes → Create Machine States]
Site personnel
Site personnel are those people who operate and maintain the field equipment, users are those people who
use the office software and are involved in ensuring that the office software runs correctly, such as the office
software Builders. Part of creating the mine model is entering the details of each person, their role and other
details. The following flow diagram illustrates this process.
[Flow diagram: Ensure Default Calendars are Correct → Create Shifts → Create Roster]
Safety items
Safety items are used to ensure that field equipment is safe to operate before it begins a shift. Different safety items and their associated actions need to be created for different types of field equipment. Each time an operator logs on, the appropriate safety checklist appears for that equipment.
Safety items consist of the actual safety checklist and the required actions should any check fail. You can
also associate a delay type with safety items. This means that whenever a piece of equipment fails a safety
check, it is automatically put on delay using the specified delay type. You use Supervisor to specify the delay
type to use for failed safety checks.
NOTE: The same delay type is used for all failed safety checks.
1. Start Supervisor, and on the Contents menu, point to Setup and then click System Options.
2. In the Product list, click Production and then click the Delay Categories option set.
3. In the Safety Inspections Name field, type the name of the delay type that you want to associate with
safety items. If the required delay type does not exist, create it using the Client.
Delay types created by the office software can be moved to different categories.
You can configure site-specific delay categories for both English-speaking and non-English-speaking sites. The example given configures delay categories in Spanish, but the procedure is the same whatever the language.
You can configure delay categories using the Delay Category Finder. See your Fleet User Manual for more
information. The names of the delay categories can be changed, but if you do change any names you must
remember to use those same names in your kpisummaries.xml file.
Message types
You need to add the message types that you intend to use for your site. Message types need to be suitable for
office and field equipment messages.
Mining blocks
If your mine site uses mining blocks, you need to either create them manually or import them. This is the last
step in the creation of the mine model.
Configuring Health
The steps for configuring Health are done in the following order.
The office software updates VIMS definitions after the initial database creation. From time to time new VIMS definition data becomes available, usually as a result of new machinery or machine options. This requires re-initialization of Health. The VIMS definition data is bundled in either an office software .zip update or a patch, which includes instructions on how to re-initialize Health. There are two approaches, both of which produce the same result.
1. Run makeDataStores all. This can take a long time but ensures the entire database is consistent with
the requirements of the .zip upgrade or patch.
2. Run makeDataStores ReferenceData and makeDataStores HealthViews. This takes less time; it imports the new VIMS definition data and regenerates the Health views.
See the post-installation instructions in the .zip upgrade or patch received for the recommended way to re-initialize Health.
Before increasing data retention periods, you should have a good profile of both current and anticipated database performance. Any anticipated performance problems should first be addressed with the database and the computing resources available to it. Increasing data retention periods can have a significant impact on database performance, and can result in poor performance in other areas of the office software.
Any changes made to data retention periods should be made gradually, and should be monitored.
For example, if you wish to increase the data retention period from two weeks to three months, it is recommended that you increase it to one month and monitor the system for a few months. If the system runs satisfactorily but your data is still not being retained for long enough, increase it to two months and continue monitoring.
If you increase data retention periods by a large amount there is a bigger risk that your system will be
impacted as described earlier.
To add a retention period that is not standard, see "Adding extra retention period options" in the Defining data
retention periods section.
Health processes VIMS underscore files and stores the processed data in the historical database schema for
use by reporting. The volume of data stored is a function of:
This section highlights items to check associated with data retention periods.
You need to set up and run the Health Data Deletion job to delete the actual Health ’underscore file’ details. Data retention periods for health data imported from VIMS report files (also known as underscore files) are controlled by the Health Data Deletion job. The job can be scheduled, or run as a one-off job by selecting it from the Jobs menu in the office software.
You need to set up and run the Health Data Deletion job to balance ready access to data against system per-
formance.
It is recommended that mine sites and/or Dealers maintain their own archive of VIMS files. In this case, the
maximum retention for any option would be one year.
To configure the health data deletion job, click Jobs > Health Data Deletion.
The following settings, on the Standard Options tab, are recommendations only.
Datalogs and Snapshots | No longer than seven days | No longer than a shift
Using this page you can tailor the length of time that health data of various types resides in the database and is therefore available for reporting.
5. Make a selection from the Retention Period list. You can set a period using either calendar terms or numbers of weeks. Once the jobs have been created and scheduled, they run in the Standard Job Executor at the defined time.
By default, the office software retains one year’s worth of health event data. This can be adjusted in Supervisor with the following steps.
1. Open Supervisor.
2. Click Options > System Options.
3. From the Product drop-down list select Platform, then in the Option Sets list select System - Workgroup Extensions.
4. Select the Data Archiving tab.
5. Adjust the retention period for HealthEvent.
The week-based retention period options shipped in the product are limited to the last 1, 2, 3, 4, 12, 26, and 52 weeks, which in most cases will satisfy requirements.
When the defaults do not satisfy your needs, you can use the job configuration XML to make additional periods available. This process is described in the following section.
NOTE: If the Client is running you should close and restart it.
8. Start the Client, go back into the Health Data Deletion job, and ensure the options you created are available on the Period Options tab.
9. If possible, run a trial job to ensure it operates correctly.
<propertygroup name="periodOptions">
<uigroupdef layout="vertical"
layoutInfo='{rowSpec="fill:default:grow"}'/>
<propertygroup name="retentionPeriod">
<uigroupdef mode="region"/>
<propertydef name="retainedPeriod" label="">
<typedef>
<type code="string" choices='{
"Year":"Year",
"Half":"Half",
"Quarter":"Quarter",
"Month":"Month",
"Week":"Week",
"Day":"Day",
"Shift":"Shift",
"Last 1":"Last Week",
"Last 2":"Last 2 Weeks",
"Last 3":"Last 3 Weeks",
"Last 4":"Last 4 Weeks",
"Last 6":"Last 6 Weeks",
"Last 8":"Last 8 Weeks",
"Last 12":"Last 3 Months",
"Last 26":"Last 6 Months",
"Last 52":"Last 12 Months"
}' />
</typedef>
<uidef widget="combo"/>
<description>
Retain data for this period. Earlier data will be deleted.
</description>
</propertydef>
</propertygroup>
</propertygroup>
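To add a non-standard period, an additional entry can be appended to the choices map shown above, following the pattern of the shipped entries; this is an assumed edit, so verify it against your installed file:

```xml
"Last 16":"Last 16 Weeks",
```

The key encodes the number of weeks to retain ("Last 16") and the value is the label displayed in the Retention Period list.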
Configuring machines
Health provides two broad categories of features: those that require interaction with Caterpillar VIMS or TPI devices, and those that are non-interactive and require only the VIMS report file. When configuring machines for non-interactive features, you can skip the steps for defining interface network addresses.
The following table illustrates the two categories, and how they relate to specific features.
Interactive:
l Activate Snapshot
l Activate Datalogger
l VIMS File Download
l VIMS Clock Synchronization
l Health Event Monitor
Non-interactive:
l VIMS File Import
l ASA Calculator
l BusinessObjects Reports
l Oil Analysis Import
Machine setup
Each machine that Health is to monitor, interact with, or report on must be identified within the office software, with the correct machine serial number, on-board health platform and on-board interfaces.
Refer to the Machine Tracking chapter in the Fleet User Manual for information on using Machine Editor to create a new machine.
The machine serial number recorded in Machine Editor must match the serial number configured on-board in VIMS, which in turn should match the actual serial number of the machine.
Setting the correct on-board platform determines the protocol the office software uses to communicate with
the device. This is important for channel polling and health log download features.
The Onboard tab on the Machine Editor page provides a section for configuring interface addresses for on-board health.
NOTE: For VIMS 3G and Third Party Interface (TPI) devices, the default port of the Health interface
needs to be changed from 17000 to 51889 or the correct custom port defined by the device.
Third Party Interfaces are interfaces that are connected to a non-Caterpillar VIMS system, such as
Komatsu VHMS.
There are three primary on-board interfaces used for providing all office software functionality.
Windows Server 2008 R2 has a built-in FTP Server, and there are other third-party options which Caterpillar may not necessarily support. Regardless of the choice, you must configure the FTP Server to accept connections from VIMS machines and configure the VIMS machines to use the account set up on the FTP server.
VIMS will connect to the FTP server using the IP, Port, Path, Username and Password as configured through
the VIMS Communicator and upload files to the specified path. The FTP server should be configured to map
the path to the local directory where VIMS unprocessed files are processed from, typically
{MSTAR_DRIVE}\VimsFiles\unprocessed.
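As a sketch of the mapping between what a VIMS machine uploads and where the import job looks for it, assuming the default directory layout above; the helper function and the file name are illustrative only, not part of the product:

```python
from pathlib import PureWindowsPath

def vims_upload_target(mstar_drive: str, remote_name: str) -> str:
    """Map a file uploaded by a VIMS machine over FTP to the local
    'unprocessed' directory that VIMS Data Import reads from
    (default {MSTAR_DRIVE}\\VimsFiles\\unprocessed layout assumed)."""
    root = PureWindowsPath(mstar_drive, "VimsFiles", "unprocessed")
    return str(root / remote_name)

# e.g. a file uploaded by one machine (example file name):
print(vims_upload_target("D:\\", "EVT_TK001._vims"))
# -> D:\VimsFiles\unprocessed\EVT_TK001._vims
```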
There are many health jobs available for configuration and scheduling. These must be configured as desired for each site. At a minimum, it is suggested that the Health Data Deletion, VIMS File Download and VIMS Data Import jobs be scheduled regularly, in such a way as to keep data contained to a short period (say, three months) while ensuring the data is less than 24 hours old.
Health Jobs
Fluid Management Data Import: Brings data into Health from other condition monitoring sources.
Health Data Deletion: Deletes VIMS data from the office software database.
VIMS Data Import: Imports VIMS files into the office software database.
VIMS File Download: Downloads VIMS data files from a VIMS ABL enabled machine.
For more information on these jobs, their templates and running the jobs, see the Concepts and Reference,
and Platform chapters in the Fleet User Manual.
If you use option 1 and it fails due to Fleet not supporting the file type, then use option 2.
When listing multiple machines, use a space between the machine names. For example,
mstarrun -b healthChannelSetup T01 T02
NOTE: In previous versions of the office software, "Limits" were referred to as "Targets".
VIMS limits are specified in the targets.xml file. Limits configuration has been updated to include the following.
Type Field: This field is freehand, so you may specify any description up to 32 bytes, for example "Very High", "OK", "Unacceptably Low", "very low".
However, in the web-based client, only targets of type "upper", "lower" and "default" will be displayed.
Multi-Fleet Support: Previously, limits could only be configured for one fleet. The addition of a fleet field allows identical trends to have different limits at different sites; for example, Boost Pressure at Site A can be set at 120 max, while Boost Pressure at Site B can be set at 130 max.
Updated for New VIMS Machines: The xml file has been updated to include most VIMS machines.
NOTE: It can be further customized to include other VIMS IDs as they become available
\VIMSFiles\unprocessed
\VIMSFiles\processed
\VIMSFiles\badfiles
Files that are not set to be imported are left in the unprocessed folder. You need to periodically delete these files or move them to another folder so that issues with the system do not occur.
1. Ensure that the latest machine trend files have been imported for the machine models that you are
defining limits for.
2. Run mstarrun makeDataStores ReferenceData. This generates the latest template xml file
in mstar\mstarFiles\systems\main\config\xml\vims.
3. Go to mstar\mstarFiles\systems\main\config\xml\vims and open site_targets.xml.template with Notepad or a similar editor.
The rest of these instructions are included as part of the site_targets.xml.template file, starting at step 2 within the file.
The name attribute can be any arbitrary text; it is there for your reference only. The available limit types are as follows.
l extreme lower
l lower
l default
l upper
l extreme upper
You must enter a unit for each target value, and the unit must be compatible with the trend; for example, ’kilopascal’ and ’pounds per square inch’ are valid units for pressure trends.
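The shape of a limit definition might look like the following. This is a hypothetical sketch only: the element and attribute names are assumptions, and the authoritative structure is the one in site_targets.xml.template.

```xml
<!-- Hypothetical sketch; see site_targets.xml.template for the real schema. -->
<target name="Boost Pressure - Site A" fleet="Site A" type="upper">
  <value unit="kilopascal">120</value>
</target>
```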
1. Ensure that the latest machine trend files have been imported for the machine models that you are
defining measure groups for.
2. Run mstarrun makeDataStores ReferenceData. This generates the latest template xml file
in mstar\mstarFiles\systems\main\config\xml\vims.
3. Go to mstar\mstarFiles\systems\main\config\xml\vims and open site_measure_groups.xml.template with Notepad or a similar editor.
l For VIMS Trend, you will need the ID (e.g. Trend ’Lt Exh Temp (Maximum)’ has an ID of 7).
l For VIMS3/TPI trend, you will need the ID, SUB_ID (if applicable), method (MINIMUM /
AVERAGE / MAXIMUM) and condition name.
6. The examples provided in the file are commented out and will not be imported into the office software. Use them as a starting point and fill in your own definitions.
7. Save the file.
8. After you have updated the file, run mstarrun makeDataStores ReferenceData again to import
the newly defined measure groups into the office software.
<var name="alarmType">VIMSEvent</var>
<var name="level">3</var>
<!--var name="eventNumbers">[192,195]</var-->
<var name="priority">10</var>
<var name="severity">10</var>
</sink>
4. Modify the commented-out eventNumbers line so that it matches the line below; that is, replace [192,195] with the event numbers that you wish to include, and remove the comment markers.
<var name="eventNumbers">[888,3007]</var>
In the above example, events 888 (Engine Cool Lvl) and 3007 (Abusive Shift) will now sound the alarm, as well as all Level 3 alarms.
To prevent the office software from sounding this audible alarm, repeat steps 1-3 above, but this time change the level to 4 or higher. VIMS levels are only 1, 2 and 3, so by making the value greater than 3, the alarm will never sound.
Introduction
There are numerous configuration options available that can help you to tune your office software to suit your exact requirements. These options can affect performance, how you use the office software itself, how and when you perform backups, and so on. You do not need to configure these options to get your office software running, but you should investigate how these options can optimize your use of the software, and especially how best to set up the backup parameters to guard against disaster or system failure.
Chapter goals
By the end of this chapter, you should be able to:
Assumptions
As well as the instructions listed in the Introduction chapter, this chapter also assumes that for some topics,
for example, KPISummaries configuration, you have a good understanding of the associated file formats and
how to work with XML and SQL.
Introduction
A standard Fleet installation consists of a single system (main) which is created as part of the installation process. It is possible, however, to create multiple systems on a single server. These systems operate completely independently and do not interfere with each other’s operation.
The files for each system are created in their own directories in the {MSTAR_HOME}/systems directory.
NOTE: Even though you can create multiple, independent systems on the same server, you can only
run one system at a time.
1. Start Supervisor, and on the Contents menu, point to Setup and then click Setup Tools.
2. In the Setup group, select makeSystem.
3. Enter a system name and then click Run.
1. Create a mapped drive to the server that you want to connect to, for example, map the P: drive to
\\<serverName>\mstarFiles.
2. Open a command shell and run the following command:
mstarrun makeSystem ABC P:\systems\main
This creates a system called ABC with the {CENTRAL_BASE_DIRECTORY} already configured. If the sys-
tem already exists, the process updates this directory.
This process also creates all the required shortcuts for the office software and Supervisor in the \systems\<systemName>\shortcuts directory. You can then copy this directory to the desktop so you can run applications against the required system directly from desktop shortcuts.
NOTE: The \mstar\MineStar.ini file contains a reference to the build number used for each local system. The current build for each server is stored in the MineStar.overrides file in the shared configuration area.
The document consists of standard XML files shipped with Fleet, merged with custom XML files tailored to suit your site’s particular requirements. The standard file cycleskpisummaries.xml contains the basic structure of the definition, and all subsequent standard and custom files are merged into that base file. This merge is performed dynamically whenever the CyclesKpiSummaries server or the makeDataStores utility is started. The resulting merged file is written to the system’s temp directory as mergedkpisummaries.xml.
When the KpiSummaries definition is complete, the fact, dimension and lookup tables must be created and
the lookup tables loaded before a KpiSummaries server can be started.
The CyclesKpiSummaries server should be started before any cycle engine, for example, the CycleGen-
erator, so that it does not miss any cycles. When a cycle is created, the CyclesKpiSummaries server uses
the new cycle to locate any referenced dimensions (creating rows if necessary), look up entries in lookup
tables, create the details and measures and then write out a new row for each fact table. If a cycle is updated
after the cycle was created, for example by the arrival of a field message such as a Cycle Report, or by
someone editing a cycle using the Cycle Assistant, then the same process is repeated but the fact rows are
updated rather than created.
After the fact entries have been created or updated, they are passed to the RealtimeKpi engine for evaluation
of all the defined real-time KPIs. If this causes a KPI to change value, the KPIs are broadcast to the registered
listeners. Listeners include the Fleet Update Assistant, some Dynamic Mine Model Services and a real-time
KPI display page.
If the CyclesKpiSummaries server was not running while cycles were being created or updated, or if the defin-
itions need to be changed retrospectively, you can recalculate the fact table entries for those cycles by using
recalcCyclesKpiSummaries. See Working with KpiSummaries for details on working with this application.
Standard files
The Supervisor's KpiSummaries Configuration page shows you the list of standard files, which you cannot
change, as well as the list of custom files that you provide. The first file, cycleskpisummaries.xml, con-
tains the basic structure of the overall definition as described in the next section, KpiSummaries definition.
This base file also contains comments that allow you to better control where elements from subsequent files
(standard and custom) are inserted into the merged document. A separate standard file contains a set of
standard realtimeKpis, and there are also separate standard files containing various system features, for
example, FuelMonitoringKpis.
Custom files
A custom file enables you to modify elements of previous files or to add new elements. It needs to have the
same basic structure as the base file, i.e. everything has to be contained in the root element
<kpiSummaries>, so that it is clear where the custom elements are to be inserted or where to find an ele-
ment to be modified. For example, if you wish to add a dimensionAttribute named newProperty to the
machine dimension, you would provide the following in a custom file:
<kpiSummaries>
...
<dimension name="machine">
<dimensionAttribute name="newProperty" ... />
</dimension>
...
</kpiSummaries>
Note that you need to provide the name attribute of a structural element when there can be multiple elements of
that type, for example, name="machine" to identify which dimension you wish to add to. You may also
have to provide an "if" attribute.
When inserting a new element, you may need to include an extra attribute insert="comment" to cause
insertion just before that "comment" within the parent element. This is particularly the case when inserting a
measure, of which there are many. If you don't, the measure will be inserted before the "custom measure"
comment, and sometimes you need a measure to be defined before a measure that references it.
For example, you might define a new "timeBucket" measure and want to refer to it in a modified "time"
measure. If you don't include an insert="custom timeBucket" in your new "timeBucket" measure,
it will be inserted at the end of the measures, whereas it has to be defined before being referenced in the "time"
measure.
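As a sketch of this positioning (the measure name here is illustrative and the expression is elided, not a standard Fleet definition), a custom file might define:

```xml
<kpiSummaries>
  <fact name="main">
    <!-- insert="custom timeBucket" positions this measure at the
         "custom timeBucket" comment, ahead of the standard time
         measures that may reference it -->
    <measure name="abcTimeBucket"
             expr="..."
             unitType="duration"
             insert="custom timeBucket"/>
  </fact>
</kpiSummaries>
```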
You can also include a replace="true" attribute in a custom element if you wish the custom element to
replace the element as-is, rather than merging it. This is designed to enable realtimeKpis to be re-implemented.
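For example, a custom file could re-implement a standard realtimeKpi as follows (a sketch only; the element's real content is not shown here, and trucksLoaded is used purely because it is a standard realtimeKpi):

```xml
<kpiSummaries>
  <!-- replace="true": this element replaces the standard trucksLoaded
       definition outright instead of being merged into it -->
  <realtimeKpi name="trucksLoaded" replace="true">
    ...
  </realtimeKpi>
</kpiSummaries>
```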
An element can be marked with the reserved="true" attribute to prevent it from being modified. This is
used to ensure that elements required to support certain system services are not accidentally modified.
Notes on customization
1. Caterpillar does not support, and is not responsible for, any views, scripts, queries or customizations
created by customers over the "Standard" Historical database.
2. The success of the execution of the view script you create over the KPISummary depends on how the
script has been created.
3. Table additions need to be added to the XML and are not supported as script additions, unlike
additional views. If you need to make changes to the KPI database, it is recommended that you update
the KpiSummaries.xml file so that the changes are retained after updates.
4. You should create a directory in which to store views, scripts, queries or customizations, for example
D:/mstarFiles/systems/main/config/<Site>SpecificConfigurations. This location is temporary; all
updates should be done in consultation with Caterpillar and should occur in the KpiSummaries.xml
file.
Merged File
It will now be obvious that you need to know what the base and other standard files look like in order to prop-
erly merge your custom files into them. So that you don't have to go searching for them and also that you can
see what happened in the merging process, the system writes out the resulting merged file whenever dynamic
merging occurs (i.e. whenever the CyclesKpiSummaries server or the makeDataStores utility starts). It is writ-
ten to the mstarFiles/system/main/tmp directory and is named mergedKpiSummaries.xml
The merged file is annotated with information showing you how each element was derived. This is done with:
1. a legend near the start of the file that lists all the files that took part in the merge and their positional
order.
2. special zzz attributes added to elements that were added or modified.
0 = ...\xml\cycles\standard\cycleskpisummaries.xml
1 = ...\xml\cycles\standard\standardRealtimeKpis.xml
2 = ...\xml\cycles\standard\fuelMonitoringKpis.xml
3 = …\config\xml\cycles\xxxkpisummaries.xml
which means, for example, that the trucksLoaded realtimeKpi was added by standardRealtimeKpis.xml and
modified by xxxkpisummaries.xml. If the realtimeKpi definition in xxxkpisummaries.xml were the same as
in standardRealtimeKpis.xml, the zzz attribute would have been zzz="[+1,=3]". Elements
unchanged from the base file, or those that arise from the insertion of an ancestor element, do not receive
a zzz attribute.
Difference File
To help upgrade an old KpiSummaries definition file into a custom file suitable for use with the standard
definition files, the system writes out a difference file: the last custom file minus any elements that are the
same as in the previous files. All that remains is to clean up the file (e.g. remove comments that are no longer
relevant), and you may need to add some insert attributes to fix up any badly positioned elements. The
difference file is written to the mstarFiles/systems/main/tmp directory and is named
diffKpiSummaries.xml.
This technique can also be used to factor out further common parts of custom files for different sites of the
same company, allowing a company-standard file to sit above each of the site-dependent files.
An mstarrun target has been developed to generate some commonly used segments of KpiSummaries
definition files. This is most useful when developing the custom definition files for a new site. This process
should be undertaken after the system model has been set up, as should be clear from the following table,
which shows what can be generated and where the information came from.
Since the generator gets some model information, (Grades and Machine activities), from the server,
MineTracking must be running to generate those types of entries. To run the generator, use the command:
where <type> is any of the Types in the above table and all generates all types of entries.
The mstarrun generateKpiSummaries command generates a measure name from the category name. There is
no strict limit to the length of the delay category name, but a practical limit of 32 characters is advisable.
Creating a Custom Time Usage Model (TUM) by modifying the KPI Summaries definition file
As there is no single Time Usage Model (TUM) that satisfies all business requirements, customised TUMs
can be created for each customer. This is done by creating custom measure definitions based on the client's
own TUM, which provides a site-specific KPI Summaries definition that satisfies the client's specific
requirements.
The existing Fleet measure definitions (stored in the compulsory definition file) should be ignored when
creating a customised TUM. If a measure name occurs in both the existing Fleet TUM and the client's TUM, it
is advisable to redefine the measure in a custom definition, including a replace attribute in the measure
definition, for example:
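A minimal sketch of such a redefinition (the expression content is elided):

```xml
<!-- replace="true" substitutes this definition for the existing one -->
<measure name="operatingTime" replace="true"
         expr="..." unitType="duration"/>
```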
The above example uses operatingTime; this term has an immutable definition within the Fleet system
and is also a name often used in alternate TUMs. It is important not to redefine operatingTime when a
conflict arises. A possible solution is to use the same name prefixed with a site- or customer-specific code.
The following extract identifies additional measures from the compulsory definition that rely on previously
defined measures.
Problems may occur if the custom measure definitions contain expressions based on previously defined
measures (typically found in the Delay Measures Additional Fields and the Time Measures
Level xx.xx sections of the compulsory definition file), for example:
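As an illustrative sketch (the measure name abcAvailableTime is hypothetical; the referenced measures are among those discussed in this section), such an expression might look like:

```xml
<!-- this expression depends on totalTime and timeScheduledMaintenance
     having already been defined earlier in the merged document -->
<measure name="abcAvailableTime"
         expr="main.totalTime - main.timeScheduledMaintenance"
         unitType="duration"/>
```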
When adding additional measures it is important to make sure all measures have been defined before they are
referenced. For example, two measures that often require redefinition are availableTime and totalTime.
These measures appear in the Time measures Level 2.5 section of the compulsory definition. If they are
replaced in a custom definition, then any components of the expression must be inserted prior to the original
section, in this case in the custom timeBucket. The insert attribute is used to place a measure relative to a
comment in the compulsory file.
In general, time measure components that are going to be used in a replacement definition of a predefined time
measure should be inserted in the custom timeBucket section.
The measure timeScheduledMaintenance in the example above is most likely to be a component of the
complex measure delayTime; this measure, in turn, is likely to be redefined in terms defined by the site TUM
in the custom definition:
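A sketch of such a redefinition (the second operand, abcTimeStandby, is a hypothetical site measure):

```xml
<!-- delayTime redefined in terms of the site TUM's components -->
<measure name="delayTime" replace="true"
         expr="main.timeScheduledMaintenance + main.abcTimeStandby"
         unitType="duration" category="delay"/>
```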
The general rule to apply in this situation is to define only the additional measures needed to support
expressions required in the Summary Table definition. Complex measure definitions add to the complexity of
the definition files and can instead be constructed in reports as required.
Suggested Steps:
1. Use the supplied utility to generate definition segments for all available types.
2. Insert the generated segments into a blank custom definition file, excluding any time bucket measures.
3. Validate all measures and dimensions in the custom definition except for the time measures.
4. Use IPretend to test the custom definition.
5. Add the generated time measure segments to the custom definition file. Only include derived measures
to support entries in a summary definition.
Creating a Custom Time Usage Model (TUM) using the office software
KPI configuration tool
This feature is designed for creating a time usage model for NEW sites, not for updating the
model at existing sites. As this feature has a direct impact on production recording it should
be configured by MineStar deployment or support personnel only.
From release 4.0, you are able to use the Time Usage Model page in the office software to build the model
graphically and then export the model as a generated piece of XML. This can then be incorporated into and
used by the CyclesKpiSummaries engine.
Prerequisites
Before creating a new model you need to define the name of the XML file that will hold the XML representation
of the Time Usage Model to be built using the Time Usage Model Editor, set up delay categories, and set up
all machine activities.
6. In the Custom Definition Files text box type the name of the XML file you will be exporting the model
into.
7. Click Apply to apply and save your changes.
8. Start the MineStar Services.
You must ensure all delay categories are configured properly with their associated delay types
using the Delay Category Editor and the Delay Type Editor. See your Fleet User Manual for information on
these pages.
You must ensure that all machine activities (machine states) are configured properly using the Machine
State Finder and Machine State Editor, including all Auxiliary machine states. See your Fleet User Manual for
information on these pages.
Time usage models are created using the client office software. The Time Usage Model Finder is an Expert
Mode page, and can be found by clicking Contents > Platform > Time Usage Model Finder.
When you have created some time usage models, they will display on the Time Usage Model Finder page.
You can click Apply to save your changes and continue, or Save to save your changes and exit the Time
Usage Model Editor at any time.
l The top left panel is the Model Hierarchy Editor, which shows the model as a tree view.
l The top right panel is the Model Attributes Editor.
l The bottom panel shows a graphical representation of the structure of the time usage model. This only
updates when a node in the tree is selected.
When you expand the New Model icon in the Model Hierarchy Editor panel, the default structure displays as
shown below.
Once a node in the tree is selected, the visual representation of the model is displayed in the bottom panel.
1. In the node tree, right-click Category Names, then click Add Category.
2. Enter the Name of the category. The new category displays in the tree.
Any node in the tree that has a ’category’ attribute now shows this category in the drop-down list. New cat-
egories are represented by .
NOTE: Categories are not stored in the database on their own, but as part of the element they are
attached to, so unless the category is selected on a grouping or composition node it will be lost after
closing the model.
New groupings, represented by , can only be added either to the model (root node) or to other grouping nodes.
1. Right-click on either the model or a sub-grouping node and click Add Grouping.
A new node is created in the model directly under the node you selected. The attributes for the node are dis-
played in the right panel. Double-clicking an attribute allows you to edit the value.
2. The unitType defaults to duration, but you can click in the Attribute Value cell and on the drop-down
list select either unitless or volume.
3. Click the categoryName attribute value field and select a value from the drop-down list to attach the
Group node to the relevant category.
4. Change the description to be something meaningful, as this is displayed in the model representation
in the bottom panel.
5. Change the name of the grouping to something meaningful.
l Prefix the name with at least two uppercase letters. It is a good idea to use the customer / mine
site code.
l If you are generating SMU times the grouping name cannot be greater than 23 characters.
l If you are not generating SMU times the grouping name cannot be greater than 25 characters.
These restrictions are because the names will be suffixed with Count or Smu Time.
The new grouping may cause the model display in the bottom panel to render incorrectly. This is because the
model must fulfill the following criteria.
l Each grouping must have more than one sub-grouping beneath it or a sub-composition.
l Each composition must have at least one measure (delay or activity type) beneath it.
Delay compositions are implicitly added to the model and cannot be deleted. Only Activity compositions can be added.
The composition displays as selected in the tree and its attributes display in the top right panel. Double-click-
ing an attribute allows you to edit the value.
2. The unitType defaults to duration, but you can click in the Attribute Value cell and on the drop-down
list select either unitless or volume.
3. Click the categoryName attribute value field and select a value from the drop-down list to attach the
Group node to the relevant category.
4. Change the description to be something meaningful, as this is displayed in the model representation
in the bottom panel.
5. Change the name of the composition to something meaningful.
l Prefix the name with at least two uppercase letters. It is a good idea to use the customer / mine
site code.
l If you are generating SMU times the composition name cannot be greater than 23 characters.
l If you are not generating SMU times the composition name cannot be greater than 25 characters.
These restrictions are because the names will be suffixed with Count or Smu Time.
1. Activity measures, represented by , correspond to the activities or machine states in the system.
As you build up the model with new Groupings and Compositions, the nodes can be dragged and dropped onto
each other to build up the hierarchy with the following restrictions.
l Grouping nodes, , can be dropped onto other Grouping nodes or the root model node, but not onto
Composition nodes.
l Composition nodes can be dropped onto other Grouping nodes, i.e. moved from one Grouping node to
another.
The screenshot below shows three Grouping nodes at the same level, and all Composition nodes under the
’ABC_Total’ node.
By dragging the Composition nodes and dropping them onto either the ABC_Available or ABC_Down nodes
you can begin to build the structure to resemble the following.
To modify the ABC_Total grouping so that it is composed of ABC_Available and ABC_Down times, drag and
drop ABC_Available on to ABC_Total. Then drag and drop ABC_Down onto ABC_Total. The structure should
now resemble the following.
Note that the hierarchy displays in the bottom panel and reflects the changes you make. Colors have been
added to the Groupings for clarity.
When the KPI Summaries XML is generated this structure now indicates the following.
This hierarchy can be subdivided down to a level of one Activity Composition per Activity + one Delay Com-
position per Delay Category if necessary to allow for quite fine-grained time data capture.
Within CyclesKpiSummaries there are certain embedded variables that must be present so that the system,
including the embedded dashboard displays, can function correctly. These embedded variables must have
values assigned to them from the custom time usage model. This is effectively just mapping the custom
model across to the embedded model, and is done by dragging and dropping Grouping, Delay Composition,
Activity Composition or Activity Measure nodes onto the nodes under 'Embedded Variable Mappings' to
create a pointer to the original element of the hierarchy. For example, ABC_Total is equivalent to totalTime,
so if you drop ABC_Total onto totalTime you get the following.
KPI entries are additional entries that can be calculated based on other parts of the time usage model
hierarchy.
1. Right-click on the KPI Entries node and select Add KPI Entry.
2. Change the name to be something meaningful.
3. Change the description to be something meaningful.
4. Add a categoryName if necessary.
The following process gives an example of building an expression for the KPI entry.
1. Double-click the Attribute Value cell beside the expression field in the right pane to display the
expression builder.
The expression builder lists all of the Functions, Internal Variables, Groups, Activities, KPIs and Custom
Measures that are currently available to the model.
2. When building a function, drag a function from the left pane to the Expression pane. The expression
will expand to a Jython template as in the following when the 'if' function is dragged across
Each $ prefixed variable is a placeholder. In the screenshot above, if $var1 is greater than $val1
then the result is the calculation represented by $calc.
5. Select a Grouping (in this case ABC_Total) and drag it across to the Expression pane. Drop it just
after the if leaving a space. The ABC_Total grouping expands to its expression name of main.ABC_
Total, as shown below.
6. Highlight the $val1 text and replace it with the value 0 (zero).
7. Highlight the $calc text and delete it.
8. Select another grouping (in this case) ABC_Available Grouping and drag it to the Expression pane.
Drop it just after the = leaving a space as shown below.
9. If you wish to divide the Available time by the Total time, add a / just after ABC_Available then select
and drag the ABC_Total Grouping to the right pane and drop it just after the / as shown below.
10. Click OK. The expression displays in the expression cell of the attributes pane. These expressions will
be added to the expr attribute within the XML.
KPI Custom Measures are additional entries that can be calculated based on other parts of the time usage
model hierarchy.
1. Right-click on the KPI Custom Measures node and select Add KPI Custom Measure.
2. Change the name to be something meaningful.
3. Change the description to be something meaningful.
4. Add a categoryName if necessary.
The expression can be built in the same way as described for KPI entries.
The KPI Summaries XML that is generated is a subset of the full KPI Summaries XML. This XML is merged
with the other standard KPI .xml files to produce a single definition that is loaded when CyclesKpiSummaries
is started. When the XML is generated from the model, the validateKpiSummaries script is invoked, which
merges the files and displays any validation warnings for the merged content.
You are asked for the file name to export to. This must be the name of the file you specified
earlier in Supervisor, otherwise you will receive a warning message and validation will not
occur.
When the export is complete, a screen similar to the one below displays.
l The left pane displays the Generated xml that corresponds to your model definition.
l The middle pane displays the Merged xml.
l The right pane displays any Validation messages that have occurred as a result of running val-
idateKpiSummaries.
You must correct all error messages so that the right pane displays 0 errors.
The Reverse Validate button allows for the reverse validation of the stored generated XML against the current
state of the Time Usage Model. This is to highlight any problems where the XML may have been inadvertently
modified directly rather than modifying the model and regenerating the XML.
Ensure you have defined the XML file name as described in the Prerequisites section, "Defining
the XML file name" on page 132, before exporting to Excel.
Since the Time Usage Model is reliant on the definitions of Delay Categories and Machine Activities it is
necessary, when modifying these entities in the system, to revisit the Time Usage Model and possibly regen-
erate the KPISummaries.xml along with running mstarrun makeDataStores.
When deleting a Delay Category or Activity, references to these entities in the Time Usage Model will also be
removed and you are warned that the Time Usage Model must be revisited.
If new Delay Categories are added to the system, the next time the model is opened the new Delay Com-
position will appear in the model in its default position under the root Grouping. This can then be placed in the
correct position within the hierarchy and you can regenerate KPISummaries.xml and re-run mstarrun
makeDataStores.
If the name of a Delay Category is changed, it will automatically be reflected in the Time Usage Model, but
this will not be the case for CyclesKpiSummaries until the XML has been regenerated from the model and
you run mstarrun makeDataStores.
The custom XML file is based on the reference implementation tcmkpisummaries.xml file, and XML fragments
for editing can be generated from the following mstarrun target:
mstarrun generateKPISummaries
This file needs modifying to reflect customer requirements in the following areas:
The mining block hierarchy can contain between one and four levels of hierarchy within Fleet.
The system-generated fragment for this contains three levels of hierarchy as shown in the table below, the
lowest (child) level of hierarchy being first in the XML block (line 2) and its parents following on subsequent lines.
NOTE: Line numbers (left column) have been included for reference only.
Line XML
5    </dimension>
In order to edit the mining block hierarchy, the user must first define the number of levels of hierarchy used for
that mine. Based on the levels of hierarchy that exist in the mine, the following edits should be made to the
file:
Hierarchy Levels    Modification required
2                   Delete line 4.
3                   No change.
Once the fragment has been edited to reflect the correct number of hierarchy levels, the
dimensionAttribute name for each level in the hierarchy should be edited to reflect that particular site.
The mining block hierarchy fragment has now been successfully edited.
Time breakdown
The reference implementation fragment for time breakdown contains a number of definitions, which are
required by the system. These should not be deleted, however if they are not to be used by the customer, they
should be bundled together in an area within the file which is highlighted by appropriate comments. The
reference implementation fragment also contains the base level definitions so that all other levels of the time
breakdown hierarchy can be aggregated from these.
<measure name="delayTime"
expr="main.scheduledOperationsTime + main.unscheduledOperationsTime"
unitType="duration" category="delay" insert="custom timeBucket"/>
The unitType can be defined as "duration" for a time duration or "unitless" for a count of events of that
nature. The category attribute can be used to group related measures together. Any levels of time breakdown
hierarchy should be made clear within the system by appropriate use of comments.
Materials and material hierarchy should also be defined within the custom KPISummaries.xml file for the cus-
tomer. These include the following:
l Grade Values
A grade value that is associated with the Mining Block being extracted and is then associated with the
truck cycle, e.g. 1.5 grammes of gold per tonne.
l Grade Products
The Grade Value multiplied by the Payload.
l Parent Product
If Grade Value is populated then this equals the payload, else if Grade Value is not populated then this
value is zero.
KpiSummaries definition
KpiSummaries are defined in an XML file, which defines the data source, the various tables (dimension, fact
and lookup tables), calculated members, summaries and real-time KPIs. The definition file used is the result
of dynamically merging a set of standard definition files and custom definition files, as described in the pre-
vious section.
The definition files are located in \mstarFiles\systems\<systemName>\config\xml\cycles.
Description attributes
The XML definition allows the use of the descr attribute in each element, to enable an optional description of
that element to be provided.
Reserved attribute
An element containing the reserved="true" attribute cannot be modified or replaced by a custom element.
Header
The kpiSummaries element defines the data source and some of its characteristics.
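The element itself might look like the following sketch (the sourceManager value shown is hypothetical; the context and tablePrefix values follow the descriptions below):

```xml
<!-- header of the KpiSummaries definition document -->
<kpiSummaries context="cycle"
              sourceManager="CycleManager"
              tablePrefix="CYCLE">
  ...
</kpiSummaries>
```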
where:
context
Specifies the prefix to use in a fact table expression when referring to a source item. For example, cycle.pay-
load refers to the payload field of the source cycle.
sourceManager
Specifies the name of the manager that creates the source entities. The CyclesKpiSummaries engine listens
to this manager for creation and change events.
tablePrefix
Specifies the prefix to use when creating database table and view names. For example, the tables are named
CYCLE_DIM_<name>, CYCLE_FACT_<name> and CYCLE_LOOKUP_<name>.
The sourceClass element defines names for different classes of source entity. These names are used in if
attributes (see below in Fact Tables).
The inherit=false attribute means that the if attribute only fires for cycles that match the full classDef.
For example, with the above definition of SelfLoader, a SelfLoader cycle will trigger
if="selfLoader" elements but not if="loader" or if="prod" elements.
Dimension tables
A dimension element defines a dimension table that is linked to the fact tables via its OID column. There are
two types of dimension tables defined:
date
Based on the Shift table and has two base attributes defined internally – startShiftTime and endShiftTime.
entity
Based on any Fleet Entity and has one base attribute defined internally – sourceEntity (the universalOID of
the base entity).
A row in a dimension table is created when a dimensionRef element of a fact table is evaluated (see below)
and the referenced dimension row does not exist.
A row is also created when it is detected that a base attribute of the source entity has changed value, thereby
causing the expression for a dimensionAttribute to change value. The new row has the new values of any
changed attributes and a special value (epoch) in an internal updated column. The old row has the current
timestamp written to the updated column, so that a recalc of any cycles completed up to that time can refer to
the old row.
Each dimension table contains a special row called the NULL (0) row that contains the special value NULL for
its sourceEntity column and "unknown" for each of its columns of type String. This row is used as a target
for fact table dimension references that evaluate to null.
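A sketch of a dimension definition (the name and context values follow the block example described below; other attributes are elided):

```xml
<!-- entity dimension over mining blocks; "b" is the expression prefix -->
<dimension name="block" context="b">
  ...
</dimension>
```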
where:
name
Used to form the name of the dimension table (e.g., CYCLE_DIM_CALENDAR). It can also be used as a pre-
fix in a dimensionAttribute expression, to refer to a previously defined dimensionAttribute of that dimen-
sion (provided the name of the dimension is different from the name of the context).
context
Specifies the prefix to use in a dimensionAttribute expression when referring to an item in the dimension’s
sourceEntity (or the DateEvaluationBean that wraps the Shift bean for a date dimension). For example,
given the above definition of a block dimension, b.hierarchy refers to the hierarchy field of a miningBlock.
type
Used with the keyIfSourceMissing feature of the dimensionRef element (see below). The default value is
name.
Dimension attributes
A Dimension can contain a number of dimensionAttributes which define the columns of the dimension table.
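A sketch of a dimensionAttribute (the names follow the block.cut example in this section; the type value is an assumption):

```xml
<!-- defines the "cut" column of the block dimension table -->
<dimensionAttribute name="cut" expr="b.cut" type="String"/>
```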
where:
name
Specifies the name of the column in the dimension table. It can also be used in the expression of a sub-
sequent attribute, with a prefix of the dimension name, to refer to the attribute’s value; for example, block.cut.
expr
Specifies the expression to evaluate when a dimension row is being created. See Expression definition.
type
Indexes
An index element can be specified in the definition of a lookup or fact table to cause the creation of an index in
that table.
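A sketch of an index definition (the column names shown are hypothetical):

```xml
<!-- creates an index over these two columns of the containing table -->
<index columns="shift,machine"/>
```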
where:
columns
Specifies a comma-separated list of table columns to include in the index. For a fact table, a column can be a
dimension reference, a detail or a measure.
Lookup tables
A lookup table is defined in the XML file, created by the KpiSummaries system but loaded externally, for
example by sqlldr.
<lookup name="loadFactor">
where:
name
Determines the name of the lookup table in the database; in this case, CYCLE_LOOKUP_LOADFACTOR.
The lookup element can contain a number of column elements. There are usually one or more “lookup”
columns, whose values are supplied when a lookup is performed, and one or more “output” columns, whose
values can be referenced in subsequent expressions. Refer to the section "Lookup references" on page 158.
where:
name
type
missingValue
(Optional) For an “output” column, specifies the value provided for this column if the lookup fails.
Lookup updated
Every lookup table has an internal updated column that behaves like the updated column in a dimension
table. A row with NULL in the updated column contains the current value of the output columns. If you want to
store different output values that apply only from a specific shift forward, create a new row with the new output
values, the same lookup values and a NULL in the updated column. Then set the updated column in the old
row to the start time of the last shift for which the old values are valid.
Fact tables
A fact element defines a fact table. A row in a fact table is created or updated whenever a source entity (e.g., a cycle) is created or updated. For cycles, if the cycle spans a shift boundary, a row is created in each fact table for each shift that the cycle is in. Refer to the section "Cycle splitting rules" on page 180 for details on how the cycle is split into shift segments.
<fact name="main">
where:
name
Determines the name of the fact table in the database; in this case CYCLE_FACT_MAIN. It can also be used
as a prefix in a detail or measure expression, to refer to a previously defined detail or measure, in either this or
a previously defined fact table.
A fact element can also contain the following elements:
l Indexes
l Dimension references
l Lookup references
l Details
l Measure Categories
l Measures
Each reference (dimension or lookup) adds the referenced row to the evaluation context, allowing following
expressions to refer to their columns using the reference name as a prefix. A dimension reference also defines
a column in the fact table that contains the OID of the dimension row as a foreign key.
Measures are usually summable; their type is Float and their unit type must be specified.
An if attribute can optionally be used in each of these four types of fact table content. When specified, the
value of this attribute must be a name of a sourceClass (see above). If the current source entity is not of the
specified class, no action is taken for this element. If it is of the specified class, the element is processed as
normal.
This behavior of the if attribute means that multiple elements can have the same name provided they have if
attributes. This allows different expressions to be used for different classes of entities when defining the same
column. For example,
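A hedged illustration of the if attribute follows. The sourceClass names truck and loader and the expressions are assumptions (only "loader" appears elsewhere in this document); the point is that two measure elements may share a name because each carries an if attribute:

```xml
<!-- Sketch: same column name, different expressions per source class -->
<measure if="truck" name="payloadDry" expr="cycle.payloadDry" unitType="mass"/>
<measure if="loader" name="payloadDry" expr="cycle.totalBucketPayloadDry" unitType="mass"/>
```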
Measure categories allow grouping of associated measures that are assigned to these categories. These are
used in the generated reporting universe.
Dimension references
A dimensionRef element defines a column in the fact table that links to a row in a dimension table. If the ref-
erenced dimension row is null, the column value points to the NULL row of the dimension table.
where
if
Causes the behavior as explained above. If the current source cycle is not of the specified class, the column
value is set to point to the NULL row of the dimension table.
name
Specifies the name of the column in the fact table. It can also be used as a prefix in a following expression to
get the value of a column in the referenced dimension.
source
(Optional) Specifies the field of the source entity that points to an entity of the type the referenced dimension
is based on. This defaults to the value of the name attribute. For example, if in the above dimensionRef ele-
ment, the name had been sourceLocation, there would have been no need for the source attribute.
dimension
Specifies the name of the dimension element that defines the referenced dimension table.
keyIfSourceMissing
(Optional) Used only when the referenced source is null, in which case the value of the specified field is used
to locate a row in the dimension table with a matching value for its keyProperty. For example, in this case,
when cycle.sourceLocation is null, the destination dimension table is searched for a row with a keyProperty
value of cycle.sourceLocationName. If it is not found, a dummy row is created with that keyProperty value.
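Pulling the attributes above together, the dimensionRef discussed in this section can be sketched as follows. The attribute values are inferred from the surrounding text (the sourceLocation and sourceLocationName fields), not taken from a verified configuration:

```xml
<!-- Sketch: links the fact row to the block dimension via cycle.sourceLocation;
     falls back to matching keyProperty against cycle.sourceLocationName -->
<dimensionRef name="sourceBlock" source="sourceLocation" dimension="block"
              keyIfSourceMissing="sourceLocationName"/>
```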
Lookup references
A lookupRef element causes a lookup table to be searched and a row to be returned with specified values in certain columns. This row can be referenced in following detail and measure expressions using the name of the lookupRef as a prefix. If the lookup fails, a row is constructed where each column has either a null value or the missingValue, if specified in the column’s definition.
where
if
(Optional) Specifies whether the lookup is performed for the current source entity class, as explained above. If
it is not performed, no row is added to the evaluation context. This means that any following expressions that
refer to the lookup row should have the same (or more restrictive) if attribute.
name
Specifies the name of the lookup row that is added to the evaluation context, so that following expressions
can use it as a prefix to refer to columns of the lookup table.
lookup
Specifies the name of the lookup element that defines the lookup table.
expr
Specifies the where clause to be used to search the lookup table. The ‘?’s in the expression correspond to fol-
lowing parameter element expression values.
The lookupRef element contains parameter elements that provide the values of the bind variables specified
by ‘?’s in the where clause.
where:
name
Is evaluated in the usual way (see Expression definition) and the resulting value is used as the value of the cor-
responding bind variable of the lookup table query.
Example
"What is the complete list of values that can be used for lookupRef in KPIs? What is allowable in the ’ where’
clause?"
You can use whatever columns are defined in the lookup table. A column is of type String by default, oth-
erwise Float, Double or Boolean.
Here is an example:
<lookup name="loadFactorTable">
<column name="pit"/>
<column name="material"/>
</lookup>
Here rehandle, pit and material are the input columns and factor is the output column.
In this case, a lookupRef can refer to any of the input columns in its "where" expression and parameters.
For example:
<lookupRef name="loadFactor" lookup="loadFactorTable" expr="where rehandle=?
and pit=? and material=?">
<parameter name="rehandle" expr="cycle.rehandle"/>
<parameter name="pit" expr="sourceBlock.pit"/>
<parameter name="material" expr="loaderMaterial.groupLevel1"/>
</lookupRef>
Here, the parameters correspond to the '?' in the "where" expression, in the same order (the binding is by pos-
ition, not name).
The "where" expression can be any valid SQL (or Hibernate Query Language (HQL)) "where" clause.
The lookupRef causes a record called "loadFactor" to be defined, so subsequent expressions can refer to "loadFactor.factor" to pick up the output column value.
You are able to refer to any cycle attribute or dimension attribute in the lookupRef parameter value expres-
sions. Rehandle (Boolean variable) is included in the above example.
Details
A detail element defines a column in the fact table that is usually non-numeric.
where
if
name
Specifies the name of the column in the fact table. It can also be used in following expressions, with a prefix of
the fact name, to refer to the detail’s value; for example, main.previousSinkDestination.
expr
Specifies the expression to evaluate to obtain the value of the column. See"Expression definition" on
page 171.
type
Measure categories
Measures
A measure element defines a column in the fact table that is numeric and usually summable.
where:
if
name
Specifies the name of the column in the fact table. It can also be used in following expressions, with a prefix of
the fact name, to refer to the measure’s value; for example, main.payloadDry.
expr
Specifies the expression to evaluate to obtain the value of the column. See "Expression definition" on
page 171.
unitType
The office software unit type of this measure. The unit type determines two units that are used for the value for the measure:
User-preferred unit
Used when the value of a measure is used in a following expression, since every term in an expression must
be in user-preferred units.
Storage unit
category
Calculated members
A calculatedMember element is used to define a KPI as an expression of aggregated measures. These are not yet completely supported, but the intention is to export them into the reporting universe, Business Intelligence and other supported presentation/query or analysis tools.
where:
name
expr
Is an arithmetical expression in terms of aggregations of measures. The exact syntax has yet to be finalized.
Summaries
A summary element defines a view over a fact table and dimension tables, and enables measures to be
aggregated over the selected dimensions.
where:
name
fact
A summary element can contain one or more dimensionRef elements and one or more measureRef ele-
ments, all of which refer to elements of the specified fact.
The dimensionRef elements specify the dimensions to aggregate over. The generated view for this summary
will contain a “group by” clause with these dimensionRef columns.
where
name
Is the dimensionRef element name, which must match the name of a dimensionRef element defined in the
fact table being used for the summary.
name
Is the name of the dimensionAttributeRef element, and specifies a dimensionAttribute to include in the
select-list of the generated view. It must match the name of a dimensionAttribute of the referenced dimen-
sion. The name of the column to be included in the generated view will be the dimensionRef name con-
catenated with the dimensionAttribute name.
The measureRef elements specify the measures to aggregate and the aggregation functions to use.
where:
name
Specifies the name of the column to be included in the select list of the generated view.
measure
Specifies which measure of the fact table to aggregate. It is optional and defaults to the name of the meas-
ureRef.
stat
Specifies which aggregation function to use. The following values are supported:
l sum (default)
l count
l average
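The pieces above can be combined into a summary element along the following lines. This is a sketch only: the summary, dimension and measure names are hypothetical, assuming a fact table named main with a primaryMachine dimension reference and a payloadDry measure:

```xml
<!-- Sketch of a summary: group by machine name, sum and count payloadDry -->
<summary name="byMachine" fact="main">
  <dimensionRef name="primaryMachine">
    <!-- generated view column will be named primaryMachineName -->
    <dimensionAttributeRef name="name"/>
  </dimensionRef>
  <measureRef name="payloadDry" stat="sum"/>
  <measureRef name="loadCount" measure="payloadDry" stat="count"/>
</summary>
```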
Realtime KPIs
A realtimeKpi element defines a real-time KPI that a client can register to receive notifications of changed KPI
values.
where:
if
(Optional) Causes the behavior as explained above. If the current source cycle is not of the specified class, this realtimeKpi is skipped.
name
Specifies the name of this realtimeKpi. This name is used when registering as a listener.
machineType
(Optional) Specifies that any KPIs defined with a dimension of machine name will be passed to the FUA. The
allowable values are: "truck", "loadingTool", "aux" and "processor", corresponding to the FUA tabs.
fact
(Optional) Name of the fact table that measureRefs refer to. Defaults to the first fact table (conventionally
"main").
timestamp - used to determine whether these measures should enter a time-constrained population asso-
ciated with a KPI (for example, a shift-based or period-based KPI).
dimensions (optional) - KPIs are calculated for each set of dimension values.
measures - specifies the measures that the KPIs are based on.
kpis - specifies the KPIs that are defined within this realtimeKpi.
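The child elements listed above fit together roughly as follows. This skeleton is illustrative only: the realtimeKpi name and the measure name are hypothetical, and the kpi content is elided:

```xml
<!-- Sketch of a realtimeKpi skeleton; see the following sections for each child -->
<realtimeKpi name="tonsMined" machineType="truck" fact="main">
  <timestamp>
    <detailRef name="endTime"/>
  </timestamp>
  <dimensions>
    <dimensionRef name="primaryMachine">
      <dimensionAttributeRef name="name"/>
    </dimensionRef>
  </dimensions>
  <measures>
    <measureRef name="payloadDry"/>
  </measures>
  <kpis>
    ...
  </kpis>
</realtimeKpi>
```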
Timestamp
<timestamp>
<detailRef name="endTime"/>
</timestamp>
In this case, the time stamp is the cycle's endTime – this is the most common case.
Dimensions
<dimensions>
<dimensionRef name="primaryMachine">
<dimensionAttributeRef name="name"/>
</dimensionRef>
</dimensions>
In this case, the one dimension is primaryMachine.name. More than one dimensionAttributeRef may be spe-
cified for the one dimensionRef.
Measures
<measures>
<measureRef name="payloadNominalDry"/>
<measureRef name="loadingDuration"/>
</measures>
KPIs
<kpis>
<kpi . . .
</kpi>
<rolledupKpi . . . />
...
</kpis>
Either kpis or rolledupKpis can be defined. Each has a name, which has to be unique within the realtimeKpi.
The fully qualified name, <realtimeKpi name>.<kpi name>, is unique within the XML file. If the name starts
with an underscore character (_), the KPI is hidden from the Fleet Update Assistant.
KPI
</kpi>
where:
name
if
(Optional) If specified, a boolean expression that, if it evaluates to false, causes the KPI to be skipped.
stat
Specifies which statistic this KPI is based on. Currently supported statistics are:
weighted - if n weights are specified, the KPI is the weighted average of the last n measures
label
Specifies the column label used if this KPI is displayed in the Fleet Update Assistant.
unitType
active
rolledupKpi
A rolledupKpi is a convenient way of rolling up over a dimension. It inherits all the attributes of the KPI it rolls
up, except for the dimensions, where it loses the last dimension. For example, the following rolls up the
primaryMachine.name dimension so is left with no dimension.
where
name
rollupOf
Specifies the KPI (or rolledupKpi) that this rolls up over the last dimension.
label
Specifies the column label used if this KPI is displayed in the Fleet Update Assistant.
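For example, a sketch of a rolledupKpi (the KPI names are hypothetical; the leading underscore follows the hiding convention described above):

```xml
<!-- Sketch: rolls the per-machine KPI up over its last (machine) dimension -->
<rolledupKpi name="_tonsMinedForPeriod" rollupOf="_tonsMinedByTruckForPeriod"
             label="Tons mined (all trucks)"/>
```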
calculatedMember(s)
calculatedMembers allow the definition of a KPI in terms of previously defined KPIs (or rolledupKpis). Externally, a calculatedMember behaves just like a KPI (although they are listed separately in the RealtimeKpiMetadata).
<calculatedMembers>
<calculatedMember name="byLoadingToolForPeriod"
expr="kpi._tonsMinedByLoadingToolForPeriod * 3600 /
kpi._loadingDurationByLoadingToolForPeriod"
unitType = "rate"/>
</calculatedMembers>
where
name
if
(Optional) If specified, it is a boolean expression that, if it evaluates to false, causes the calculated member to
be skipped.
expr
Is an expression for the value of the calculated member. Its terms will usually include a prefix "kpi" to refer to a KPI (or rolledupKpi) previously defined in this realtimeKpi.
label
Specifies the column label used if this KPI is displayed in the Fleet Update Assistant.
unitType
There is a new option available in the KPI Summaries.xml file for persisting KPIs. Below is an example of a
real-time KPI in the KPI Summaries.xml file that has been configured for storage in the database.
Run makeDataStores
makeDataStores must be run after inserting or modifying the KPI Summaries entries in KPISummaries.xml. Running makeDataStores creates the necessary database tables and views to allow third-party software to access real-time KPIs. Below is an example of a view that will allow access to the real-time KPIs.
PRIMARYMACHINENAME VARCHAR2(1020)
HOLESBYDRILLFORSHIFT NUMBER
HOLESBYDRILLFORSHIFT_U VARCHAR2(4000)
HOLESPERHOURBYDRILLFORPERIOD NUMBER
HOLESPERHOURBYDRILLFORPERIOD_Q NUMBER
HOLESPERHOURBYDRILLFORPERIOD_U VARCHAR2(4000)
FEETBYDRILLFORSHIFT NUMBER
FEETBYDRILLFORSHIFT_Q NUMBER
FEETBYDRILLFORSHIFT_U VARCHAR2(4000)
FEETPERHOURBYDRILLFORPERIOD NUMBER
FEETPERHOURBYDRILLFORPERIOD_Q NUMBER
FEETPERHOURBYDRILLFORPERIOD_U VARCHAR2(4000)
Review KPIGen.properties
After makeDataStores is run, the KPIGen.properties file contains entries that can be used to update the KPI Summaries universe at the customer site to expose real-time KPIs for use in reports and dashboards. Below is an example of real-time KPIs shown in the KPIGen.properties file.
After the KPI Summaries universe has been updated, reports and dashboards can be implemented that expose the real-time KPIs.
KPI display
1. The Statistics Bar in the Fleet Update Assistant can display a small number of KPIs - the current
default is to show the tons mined (prime and rehandle) since the start of the current shift.
2. The Fleet Update Assistant tabs can show KPIs per machine - these KPIs appear in the list of columns
that can be shown in each tab.
You use Supervisor to specify which KPIs appear in the Statistics Bar. You use the Client to specify which
KPIs are displayed in the columns in the Fleet Update Assistant.
Refer to Fleet Update Assistant (FUA) in the User Manual for more information about these settings.
Refer to the Fleet User Manual for information on how to configure the display of columns in the Fleet Update
Assistant.
Also refer to the Fleet User Manual for information on how to configure KPI push.
The labels used for displaying either type of KPI, and calculatedMembers, are specified by the label attribute.
Expression definition
Expressions are used to define the values of dimension attributes, lookup parameters, details and measures.
The syntax is the usual arithmetic grammar of terms and the standard operators ‘+’, ‘-‘, ‘*’, ‘/’.
If an expression is enclosed in braces ({}), it is understood as Jython code that must return a value in a special
variable _res. To help with Jython’s use of indentation, the text can include the two-character sequence "\n",
which is replaced with a newline character.
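As a sketch of the braced form, a detail could embed a small Jython body. The field names and expression here are hypothetical; the "\n" sequences are replaced with newlines before the Jython code runs, and the result is taken from the _res variable:

```xml
<!-- Sketch: Jython expression in braces; _res carries the result -->
<detail name="payloadBand"
        expr="{if cycle.payloadDry > 150:\n    _res = 'heavy'\nelse:\n    _res = 'normal'\n}"
        type="java.lang.String"/>
```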
Terms
A term starts with a prefix that identifies the term’s context. The prefix is followed by a period (.) and then
either a field name or a special method call; what is valid is dependent on the context.
Name of a previously defined dimension reference: the referenced dimension row
Special methods
isZero(Number measure)
When the context refers to a source entity (e.g., cycle or a dimension entity) and a field name is used, the
value is one of the following:
Shift wrapper
The DateEvaluationBean wrapper that is defined over the Shift table provides the following methods to the
expressions in a date type dimension attribute:
getYear()
getHalfYear()
getQuarter()
getMonth()
getWeek()
getDay()
getShiftName()
getShiftType()
getCrewId()
getShiftStartTime()
getShiftEndTime()
When the context refers to a dimension or fact row, the field name must be the name of a dimen-
sionAttribute, a detail or a measure. There are, however, additional methods provided in a date dimension
bean that add extra time points for the shift.
getPeriodStartTime(String periodKey)
Returns the start time of the period specified by the periodKey, as a Date.
getPeriodEndTime(String periodKey)
Returns the end time of the period specified by the periodKey, as a Date.
l getYearStartTime()
l getYearEndTime()
l getHalfStartTime()
l getHalfEndTime()
l getQuarterStartTime()
l getQuarterEndTime()
l getMonthStartTime()
l getMonthEndTime()
l getWeekStartTime()
l getWeekEndTime()
l getDayStartTime()
l getDayEndTime()
Collection wrappers
Separate collection wrappers are provided for the various mapped property sets of interest; you use Supervisor to enable alternative wrappers.
1. Start Supervisor, on the Contents menu, point to Setup and then click System Options.
2. Choose Production in the Product list and then click KpiSummaries Plugin Wrappers in the Category list.
3. For each available Entity Class, specify the appropriate Wrapper Implementation Class, and then click
Apply.
CycleActivityComponent(cycle.activities)
NOTE: A no-argument getter method can be invoked more conveniently as a field access; for example, cycle.activities.travellingEmptyDuration instead of cycle.activities.getTravellingEmptyDuration().
getDurationByActivity(String name)
Returns the total duration (in user-preferred units) of activities with the specified name.
l Travelling.Empty: getTravellingEmptyDuration()
l Travelling.Full: getTravellingFullDuration()
l Spotting.At.Source: getSpottingAtSourceDuration()
l Spotting.At.Sink: getSpottingAtSinkDuration()
l Queuing.At.Source: getQueuingAtSourceDuration()
l Queuing.At.Sink: getQueuingAtSinkDuration()
l Loading: getLoadingDuration()
l Dumping: getDumpingDuration()
l Hang.Time: getHangTimeDuration()
getSmuDurationByActivity(String name)
Returns the total SMU duration (in user-preferred units) of activities with the specified name.
l Travelling.Empty: getTravellingEmptyDuration()
l Travelling.Full: getTravellingFullDuration()
l Spotting.At.Source: getSpottingAtSourceDuration()
l Spotting.At.Sink: getSpottingAtSinkDuration()
l Queuing.At.Source: getQueuingAtSourceDuration()
l Queuing.At.Sink: getQueuingAtSinkDuration()
l Loading: getLoadingDuration()
l Dumping: getDumpingDuration()
l Hang.Time: getHangTimeDuration()
getStartTimeByActivity(String name)
Returns the start time (as a Date) of the first activity with the specified name.
l Travelling.Full: getTravellingFullStartTime()
l Spotting.At.Source: getSpottingAtSourceStartTime()
l Spotting.At.Sink: getSpottingAtSinkStartTime()
l Queuing.At.Source: getQueuingAtSourceStartTime()
l Queuing.At.Sink: getQueuingAtSinkStartTime()
l Loading: getLoadingStartTime()
l Dumping: getDumpingStartTime()
l Hang.Time: getHangTimeStartTime()
getEndTimeByActivity(String name)
Returns the end time (as a Date) of the last activity with the specified name.
l Travelling.Full: getTravellingFullEndTime()
l Spotting.At.Source: getSpottingAtSourceEndTime()
l Spotting.At.Sink: getSpottingAtSinkEndTime()
l Queuing.At.Source: getQueuingAtSourceEndTime()
l Queuing.At.Sink: getQueuingAtSinkEndTime()
l Loading: getLoadingEndTime()
l Dumping: getDumpingEndTime()
l Hang.Time: getHangTimeEndTime()
l getWorkingDuration()
l getIdleDuration()
Returns in seconds the sum of the SMU duration of all the cycle's activities with the given name.
For example:
<measure if="loader" name="hangTimeSmuDuration" expr-
='cycle.activities.getSmuDurationByActivity("Hang.Time")' unitType="duration"
category="activity" desc='The smu duration of the "Hang.Time" activity for
loader cycles'/>
l getTravellingEmptySmuDuration
l getTravellingFullSmuDuration
l getSpottingAtSourceSmuDuration
l getSpottingAtSinkSmuDuration
l getQueuingAtSourceSmuDuration
l getQueuingAtSinkSmuDuration
l getLoadingSmuDuration
l getDumpingSmuDuration
l getHangTimeSmuDuration
CycleDelay(cycle.allDelays)
getDurationByCategory(String name)
Returns the total duration (in user-preferred units) of delays of the specified category. If a delay spans a shift boundary, only that part of the delay that lies within the shift is included in the total.
getSmuDurationByCategory(String name)
Returns the total SMU duration (in user-preferred units) of delays of the specified category. If a delay spans a shift boundary, only that part of the delay that lies within the shift is included in the total.
getCountByCategory(String name)
Returns the total number of delays of the specified category. If a delay spans a shift boundary, only the pro-
portion of the delay that lies within the shift is included in the count. This makes the count summable over
shifts.
getDurationByClass(String name)
Returns the total duration (in user-preferred units) of delays of the specified class. If a delay spans a shift
boundary, only that part of the delay that lies within the shift is included in the total.
getSmuDurationByClass(String name)
Returns the total SMU duration (in user-preferred units) of delays of the specified class. If a delay spans a
shift boundary, only that part of the delay that lies within the shift is included in the total.
Returns the total number of delays of the specified class. If a delay spans a shift boundary, only the proportion
of the delay that lies within the shift is included in the count. This makes the count summable over shifts.
getIdleDuration(String name)
Returns the total duration (in user-preferred units) of delays that are not marked as Engine Stopped in the
delay type definition.
Returns in seconds the sum of the SMU duration of all the cycle's delays with the given category name.
For example:
<measure name="standbySmuTime" expr='cycle.allDelays.getSmuDurationByCategory
("Standby")' unitType="duration" category="delay" desc='The smu duration of
the "StandBy" delays'/>
Returns in seconds the sum of the SMU duration of all the cycle's delays with the given class name
CycleRoadSegment(cycle.roads)
Each of the following methods returns the total of the specified measure (in user-preferred units) over all road
segments where the traversal start time lies in the shift.
l getEmptyRiseHeight()
l getEmptySlopeLength()
l getEmptyPlanLength()
l getEmptyEfhLength()
l getEmptyExpectedTravelDuration()
l getEmptyTargetTravelDuration()
l getEmptyTravelTime()
l getEmptyTravelTimeWithoutDelay()
l getFullRiseHeight()
l getFullSlopeLength()
l getFullPlanLength()
l getFullEfhLength()
l getFullExpectedTravelDuration()
l getFullTargetTravelDuration()
l getFullTravelTime()
l getFullTravelTimeWithoutDelays()
l getTravelTime()
l getTravelTimeWithoutDelay()
GradeInformation(cycle.sourceGradesMined, cycle.sinkGrades)
getGradeValueByName(String name)
Returns the grade value (in user-preferred units) for the specified grade name.
getGradeFractionByName(String name)
Returns the grade fraction (in the range [0,1]) for the specified grade name.
haveGradeFractionByName(String name)
Returns 1 if the grade information has an entry for the specified grade name. Returns 0 otherwise.
SMU Interpolation
The following API calls are available for use in expressions defining the values of measures or details.
kpy.lookupSmu(machine, time)
Returns in seconds the interpolated SMU for a machine and a time. For example:
<detail name="cycleSmuStart" expr="kpy.lookupSmu(cycle.primaryMachine.entity,
main.startTime)" type="java.lang.Double" desc="The service meter reading at
the start of the cycle."/>
kpy.calcSmuDuration(machine, startTime, endTime)
For example:
<measure name="cycleSmuDuration" expr="kpy.calcSmuDuration
(cycle.primaryMachine.entity, main.startTime, main.endTime)" unitType-
e="duration" category="time" desc="The smu duration of the cycle."/>
recalcCyclesKpiSummaries
An alternative to running this command is to use the Bulk Cycle Update page: in the Update Mode panel, select Recalc Reporting Data and click Run.
where:
from and to
Specify which cycles to process; if the to argument is not supplied, the current time is used. A cycle is included if from <= cycle.endTime < to.
The from and to arguments are numeric strings in the format yyyymmddhhmmss where hhmmss is padded
out with 0s if not fully specified.
chunk
Specifies the size of the query chunk in hours (the default is 12 hours).
wait
Specifies the number of seconds to wait between each cycle (the default is zero (0) seconds). This can be used to decrease the load that recalc puts on the CyclesKpiSummaries server.
There are two cases in which the value of a dimension attribute can change:
1. An underlying model entity is changed so that the value of a dimension attribute will change.
2. The expression defining the value of a dimension attribute is changed.
In the first case, the CyclesKpiSummaries server will normally be notified of the change and the relevant
dimension row is automatically recalculated.
In the second case, the CyclesKpiSummaries server will have to be restarted for the expression change to be
seen and normally all dimension rows are recalculated on a restart.
This means that normally this command does not have to be used, but it can be useful in exceptional cir-
cumstances. It does provide the ability to choose the update mode:
1. New, the default, means that a new dimension row is created with the new values of the attributes but
the row with the old values is marked with the datetime up to which it is valid, i.e. the new values apply
only to cycles created after that datetime.
2. Replace means that the new values are written into the current dimension row, so they apply ret-
rospectively.
where:
dimension
Specifies the name of the dimension to recalculate, or all.
updateMode (optional)
Specifies the update mode, new (the default) or replace, as described above.
updateTime (optional)
The datetime from which the change is effective, in the format yyyymmddhhmmss, where hhmmss is padded out with 0s if not fully specified.
If you do specify an updateTime in the past, you may have to do a recalc of fact entries created since that datetime, to make sure that the fact entries are pointing to the correct version of the dimension rows.
The introduction of KpiSummaries now enables consultants and customers to define how data can be stored
for use by reporting. Earlier sections of this document describe the configuration of KpiSummaries XML, and
this XML configuration is used to create the database views, for access to information, and tables for the stor-
age of the information.
There is an association between the definition of the XML, the database tables for information storage, the
database views for access to the information by external applications and the design of the Business Objects
Universe for reporting of information.
Information on the above topics can be found in the Using BusinessObjects with Cat MineStar manual.
Standard reports
Custom reports can be built on KpiSummaries views and tables, although this can only be done after a custom universe has been created. As with Fleet’s Object Model-based universe, the intention is for the KpiSummaries universe to be automatically generated. Refer to the Using BusinessObjects with Cat MineStar manual for information about the creation of a KpiSummaries-based universe and the rules used to create such a universe.
The table on the following pages shows the Reference Implementation Time Breakdown definitions, as used
in Reference Implementation reports.
Total time
Delay time  Standby time  Scheduled down time  Unscheduled down time
Operating time
Operating delay time  Non-operating delay time
Activity "Operating delay" "Non-operating delay" "Standby" "Scheduled down" "Unscheduled down"
A SHIM CHANGE General Non-Oper Delay General Standby General Sch. Down General UnSch. Down
BIT CHANGE EQUIPMENT MOVE HOLIDAY PREVENTIVE MAINTENANCE FAILED SAFETY INSPECTION
BLAST FUEL AND LUBE RELEASED FROM MAINTENANCE S-AIR COMPRESSOR SYSTEM U-ACCIDENT DAMAGE
CABLE MOVE MEETINGS STANDBY WITH NO OPERATOR S-AIR CONDITIONING U-AIR COMPRESSOR SYSTEMS
CLEANUP BED CLEANING STANDBY WITH OPERATOR S-AUTOLUBE U-AIR CONDITIONING
CRUSHER CLOSED GEO TECH DEL-EQUIPMENT NOT REQUIRED S-BRAKE & FRONT SPINDLE U-AUTOLUBE SYSTEMS
DRILL CHANGE TOOL INCLEMENT WEATHER DEL-INCLEMENT WEATHER S-DRILL WATER INJECTION U-BRAKES & FRONT SPINDLE
EMERGENCY SAFETY OBSERVATION DEL-NO CRUSHERS AVAILABLE S-ELECTRICAL SYSTEM U-DRILL WATER INJECTION
EQUIPMENT INSPECTION TAKING ON WATER DEL-NO DUMPS AVAILABLE S-FIRE SUPPRESSION SYSTEM U-ELECTRICAL SYSTEM
EQUIPMENT STUCK TRACK CLEANING DEL-NO EXCAVATORS AVAILABLE S-FRAME & BODY/BED SYSTEM U-ENGINE SYSTEMS
DEL-NO TRUCKS AVAILABLE S-HYDRAULIC SYSTEM U-FIRE SUPPRESSION SYSTEM
DEL-RELEASED FROM ELECT.MAINT. S-IMPLEMENT SYSTEMS (GET) U-FRAME & BODY/BED SYSTEM
TIRE COOLDOWN TRAINING DEL-RELEASED FROM FIELD MAINT. S-MAST & DRILL STEEL SYSTEM U-HYDRAULIC SYSTEMS
TRAMMING LONG DEL-CLEANUP DEL-RELEASED FROM SHOVEL PM S-MINESTAR U-IMPLEMENT SYSTEMS
TRAMMING SHORT DEL-DRILL PIPE FALLEN OUT DEL-RELEASED FROM SHOVEL/DRILL M S-PM INSPECTION U-MAST & DRILL STEEL SYSTEM
WAIT ON PATTERN/SURVEY DEL-ELECTRICAL SHUT DOWN DEL-RELEASED FROM TIRE SHOP S-RADIO MAINTENANCE U-MINESTAR
WAITING FOR SHOVEL DEL-FUELING DEL-SHOVEL OUT OF MATERIAL S-ROTARY HEAD SYSTEM U-OUT OF FUEL
WAITING FOR WATER DEL-MINESTAR S-STEERING SYSTEMS U-RADIO MAINTENANCE
TRUCK DEL-OFFSHIFT TIEDOWN S-SUSPENSION SYSTEMS U-ROTARY HEAD SYSTEM
Operating Delay DEL-OILER SERVICING DRILL S-TIRES RIMS & LOCKRINGS U-STEERING SYSTEMS
DEL-OPERATOR DELAY S-TPMS & VIMS U-SUSPENSION SYSTEM
DEL-OVERLOADED TRUCK S-TRACKS & UNDERCARRIAGE U-TIRES RIMS & LOCKRINGS
DEL-PLUGGED BIT S-TRANS & TRANSFER CASE U-TPMS & VIMS
U-TRACKS & UNDERCARRIAGE
U-TRANS & TRANSFER CASE
Business Intelligence
Business Intelligence cubes can be built on KpiSummaries views and tables. Refer to the Fleet Information
Access User Manual for more information on this topic.
Configuring production
Cycle configuration
To ensure that cycle data is correctly recorded against loading tools that work in multiple modes, you can spe-
cify the actual operating mode for a loading tool at any given time. For example, a loading tool working in LHD
mode in a stockpile has different cycle recording and assignment requirements than when it is loading trucks
in Production mode.
In order to track the different ways that these loading tools operate, you can specify an LHD threshold level, which applies to loading tools operating in LHD mode. This threshold level specifies the number of Begin.Loading messages received before a loading tool automatically changes back to Production mode. This ensures that a minimal number of production cycles are lost when an LHD starts working in Production mode without the operator notifying the Mine Controller.
The default threshold level is specified in Supervisor, and is set to four (4).
You set the LHD Threshold Level in Supervisor by doing the following.
1. Click Options > System Options, then from the Product drop-down list select Production.
2. In the Option Sets list, click Cycle Configuration, and then click the Loading Tool Loading tab.
3. In the LHD threshold level field enter 4.
4. Click Apply.
This means that when the fourth Begin.Loading message arrives in the office, the office software auto-
matically changes the loading tool to Production mode.
l The three truck cycles created while the loading tool was still set to LHD mode will not have had their
loading activities reconciled, possibly causing a slight overlap of loading activities.
You can disable this functionality by specifying an LHD Threshold Level of zero (0).
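The threshold behaviour described above can be sketched as follows. This is an illustrative model only (the class and method names are assumptions), not the actual office software code:

```python
# Illustrative sketch of the LHD threshold behaviour (names are assumptions,
# not the actual office software code). After the Nth Begin.Loading message,
# a loading tool in LHD mode is switched back to Production mode; a
# threshold of 0 disables the automatic switch.
class LoadingTool:
    def __init__(self, lhd_threshold=4):          # default threshold is 4
        self.mode = "LHD"
        self.lhd_threshold = lhd_threshold
        self._begin_loading_count = 0

    def on_begin_loading(self):
        """Handle one Begin.Loading message received in the office."""
        if self.mode != "LHD" or self.lhd_threshold == 0:
            return
        self._begin_loading_count += 1
        if self._begin_loading_count >= self.lhd_threshold:
            self.mode = "Production"              # the Nth message triggers the switch
            self._begin_loading_count = 0
```

With the default threshold of 4, the first three Begin.Loading messages leave the tool in LHD mode (those are the truck cycles whose loading activities may overlap), and the fourth switches it to Production mode.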
When the on-board hardware includes the current SMU value in messages, the values will be the same until
the next hardware polling operation takes place, so a graph of SMU values (for a particular machine) observed
in the office looks like steps (with a width of the polling period) rather than a smooth incline.
Since real SMUs accrue at the same rate as clock time or not at all, a graph of real SMU values against time
always has a gradient of 1 or 0. This fact, together with knowledge of the on-board polling period, is used by the code that extracts SMU values from messages and writes entries to the ServiceMeterReading table, essentially calculating and storing just the points of the graph where the gradient changes. This code runs in the MineTracking server. To enable it, in Supervisor go to System Options
> Production > SMU Interpolator > General and select the Smu Enabled check box.
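The idea of storing only the points where the gradient changes can be illustrated with the following sketch (the function and data layout are assumptions for illustration, not the ServiceMeterReading implementation):

```python
# Illustrative sketch: a real SMU-vs-time graph only ever has gradient 1
# (machine running, SMU accruing at clock rate) or 0 (machine stopped), so
# it is enough to store the points where the gradient flips.
def gradient_change_points(readings):
    """readings: time-ordered list of (time_hours, smu_hours) samples.
    Returns the first point, the last point, and every point where the
    gradient switches between 0 and 1."""
    if len(readings) <= 2:
        return list(readings)
    kept = [readings[0]]
    for i in range(1, len(readings) - 1):
        grad_before = 1 if readings[i][1] > readings[i - 1][1] else 0
        grad_after = 1 if readings[i + 1][1] > readings[i][1] else 0
        if grad_before != grad_after:
            kept.append(readings[i])   # gradient changed at this point
    kept.append(readings[-1])
    return kept
```

For a machine that runs for two hours, stops for two hours, then runs again, only the corner points of the graph need to be stored.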
To improve the performance of the SMU interpolator, an in-memory cache has been implemented. You can
configure the cache in Supervisor by going to System Options > Production > SMU Interpolator > Inter-
polator. It is not expected that these values will need to be changed.
Configuring backups
To ensure that production data, system configuration and other information can be retrieved in the event of dis-
aster, it is important that a sound backup policy be developed and implemented according to the needs of
each site. The following sections describe the types of backups, their frequencies and suggested locations for
a typical Fleet installation. These may need to be modified to suit site requirements.
Backup sizes
The following table details the typical sizes of the various backups.
If your backup software supports it, you can exclude the \mstarFiles\sys-
tems\<systemName>\trace directories from the backup procedure. This can save up to 3–4GB. The fig-
ures quoted above reflect the fact that the Standby Server also functions as a Test Server, and so the typical
size might be much less.
Scheduling
Database direct exports are scheduled for 1:45am. Tape backups should begin at 4am.
Weekly: all data backed up; normally run on Saturday evening on a 5-week tape rotation. Tapes are retired from the cycle as their turn comes to be the monthly backup (a monthly backup implies a weekly backup).
The BackupToTape feature relies on application calls which are hard-coded into the database export
scripts. If changes are made to the local storage management software configuration, you should advise Fleet
Customer Support.
Enabling BackupToTape
To enable BackupToTape
1. On the Contents menu, point to Setup and then click System Options.
2. In the Product list, choose Platform and then click System – Enterprise Extensions in the Option
Sets list.
3. Display the Enterprise Backup tab. Select Integrate Data Exports with Tape Backups and then
click Apply.
Disabling BackupToTape
To disable BackupToTape
l Follow the same procedure as above, but ensure that Integrate Data Exports with Tape Backups is
not selected, and then click Apply.
EFH management
You can use the office software to manage Effective Flat Haul (EFH) curves. This allows for the recalculation
of the EFH distances and factors whenever a road segment is updated.
This feature is not enabled by default. This gives the local builder the chance to set up the EFH curves within
the office software and time to observe the changes that would be made before enabling the feature.
Updates can be selectively enabled for both recalculation of EFH factor and EFH distance, although normally
these would be enabled together. Similarly, updates can be selectively enabled for final road segments and
haulage roads. Distance is specified in meters.
You can also specify that updates only occur if the changes exceed a nominated threshold.
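The thresholded-update rule can be sketched as follows (the function name, parameters and units are illustrative assumptions, not the actual office software code):

```python
# Illustrative sketch of the EFH update rule: a recalculated value only
# replaces the stored value when updates are enabled for that value type
# and the change exceeds the nominated threshold (distances in metres).
def apply_efh_update(current, recalculated, threshold, updates_enabled=True):
    """Return the EFH value to store after a road segment changes."""
    if not updates_enabled:
        return current                    # feature disabled: observe only
    if abs(recalculated - current) <= threshold:
        return current                    # change below threshold: ignored
    return recalculated
```

For example, with a 5 m threshold, a recalculated EFH distance of 1004 m leaves a stored value of 1000 m unchanged, while 1010 m replaces it.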
Configuring EFH
Use Supervisor to specify the various EFH options which control how road segment information is updated.
This includes enabling automatic update of EFH factors and distance, and also specifying the thresholds
below which changes are ignored.
NOTE: You need to restart the system for these changes to take effect.
Recalculated Information
The following information is recalculated whenever the spatial attributes of a road change:
l Plan Length
l Slope Length
l Rise Height
l Fall Height
l Travel Information
l EFH Factor (Optional)
l EFH Distance (Optional)
l Design Travel Times (Optional)
l Dynamic Travel Time (Optional. Only changed if the Design Time changes)
l Waypoints
l Route point
l Road Segments
l Start Waypoint
l End Waypoint
l Interim vertices being added, moved or deleted
l Rolling Resistance
l Road Speed Limit
l Travel Information
l Truck Classes
l Average Speed Unloaded at Destination
l Average Speed Loaded at Destination
l EFH Specification
l Max Speed Loaded
l Max Speed Unloaded
l Member
l Controller
l Builder
l Maintenance Engineer
l Maintenance Technician
You can also create custom roles, and assign your own permission preferences to each. When these custom
roles and permissions are in place, you can assign users to these roles in the same way that you assign users
to the pre-configured roles. This is especially useful for large sites, or sites with special requirements, where
the pre-configured roles are insufficient.
You use a Client to assign roles to users. Refer to the Fleet User Manual for further information.
The type of data described in the files varies from datatype to datatype, but generally includes an ID number, a
name for each data type, units and scaling information.
The contents of the files are immediately available to application components, but must be imported into the database using makeDataStores Health to be available for reporting. This is described in more detail below.
The target definition files may require customization to suit the requirements of the mine. This is performed
when the office software is first installed, and again whenever new equipment models are added to the site or
when changes to reporting parameters are required.
The target definition files are customized by the Builder or other suitably qualified personnel.
\mstar\mstarHome\xml\vims\targets.xml
The default trend and histogram settings reflect the VIMS Data Application Guide ver. 1.06. These settings
should be reviewed according to your site’s operating conditions and modified accordingly (see below).
l HE420d-VIMS PM Planner
l HE020d-Transmission-By Class, Truck, Date Range-Detail
\mstar\mstarHome\xml\vims\payload_histograms.xml
The default histograms provided are 5, 10 and 20 tons/tonnes, but these can be modified as required.
l VC520d-10-10-20 Exceptions+Histograms
MSTAR_BASE_CENTRAL\config\catalogs\
appName.propertyKey = valueForAnyHost
appName.propertyKey@hostName1 = valueForSpecificHost1
appName.propertyKey@hostName2 = valueForSpecificHost2
...etc
The first of the three lines above specifies a value that applies to any host.
The second and third apply only to the hosts named hostName1 and hostName2, respectively.
Therefore, if you are running mstarrun on the machine hostName1, then valueForSpecificHost1 will be
used; if you are running on hostName2, then valueForSpecificHost2 will be used; and if you are running on
any other machine, valueForAnyHost will be used. For example,
# MineStar Client
client.filename=client.eep
... etc
Here, if the client is running on MN26C001162330 or MN26C001162331, then max mem is set to 1024M. For
any client running on any other machine, max memory is 768M.
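The host-specific override rule can be sketched in Python as follows. The client.maxmem key and its values are illustrative assumptions chosen to match the memory example above, not necessarily the actual property names:

```python
# Illustrative sketch of per-host property resolution: a key qualified with
# @hostName wins on that host; the unqualified key applies everywhere else.
# The property key below is an assumption for illustration only.
def resolve(props, key, host):
    return props.get(f"{key}@{host}", props.get(key))

props = {
    "client.maxmem": "768M",                   # value for any host
    "client.maxmem@MN26C001162330": "1024M",   # values for specific hosts
    "client.maxmem@MN26C001162331": "1024M",
}
```

Here resolve(props, "client.maxmem", "MN26C001162330") yields "1024M", while any other host name falls back to "768M".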
Introduction
Fleet system administration involves keeping your system safe and secure.
This chapter describes ways in which you can ensure your system is kept safe and secure, providing inform-
ation on using remote access, assigning permissions and roles and using Expert Mode.
Chapter goals
By the end of this chapter, you should:
l Have an in-depth understanding of the security and related requirements of a Fleet installation
l Be able to set up appropriate security, authorization and authentication for your Fleet installation
Security
Remote access
Fleet relies on remote access to perform administration and support tasks, which necessitates the installation
of the appropriate software on site computers. For support and remote monitoring purposes, either a Windows
Terminal Services Client (Windows 2000) or Remote Desktop Connection (Windows XP) will be used to con-
nect to the appropriate server.
This assumes the "remote desktop" option is enabled and that the office software user is part of the allowed
list of administrators for remote desktop.
Fleet security
Fleet security refers to the assigning of permissions to roles, so that anyone who has a particular role can per-
form the tasks associated with that role. In addition to roles, the office software uses Expert Mode to expose
further controls and functionality. Expert Mode is available in both the office software Client and Supervisor.
Using a combination of role permissions and Expert Mode, you can tightly control which functionality is available to which office software users.
Expert mode
You use Expert mode to expose functionality in the office software Client and Supervisor that is not typically
needed on a day-to-day basis. Expert Mode is password protected, and this password should only be granted
to authorized or qualified personnel so that they can perform certain lower-level functions. A typical office soft-
ware user, for example, does not have the ability to delete entities, but in Expert Mode this and other func-
tionality becomes available.
NOTE: Expert Mode functionality is displayed in red in the office software menus.
1. On the View menu, click Expert Mode. The Expert Mode Password Prompt dialog box displays.
2. Enter the appropriate password, and then click OK. A plus (+) symbol in the Status Bar indicates that
you are now in Expert Mode.
1. Start Supervisor, and on the Contents menu, point to Setup and then click System Options.
2. Select Platform – Clients in the Product list and then click the type of client whose password you
want to change, for example Explorer – Client or Explorer – Supervisor.
3. On the General tab, enter the required password in the Expert Mode Password field, and then click
Apply.
You need to restart the Client or Supervisor for your change to take effect.
To simplify the application of permissions, you can specify global permissions to grant or deny access to
pages, jobs and tools and also for many page actions for each of the defined office software roles. You can
then elect to inherit the global permissions for any page action, rather than specifying permissions for each
role at the page level.
For example, the Edit action is typically only available to Mine Controllers and Builders, and this is specified
in the global permissions. When you specify the permissions for a page that has an Edit action, you can
choose Inherit as the permission level, and Mine Controllers and Builders are automatically granted that per-
mission.
1. Start Supervisor, and on the Contents menu, point to Management and then click Permissions.
2. In the Product list, choose Global to display the global action permissions page.
3. For each of the permissions tabs, specify the required permissions for each role, and then click Apply.
Job Access: Specifies which user roles can access and use the various jobs in the office software, such as activating data loggers and importing VIMS data.
Tool Access: Specifies which user roles can access and use the various tools in the office software, such as importing personnel data and validating roads or waypoints.
Page-level permissions
Page-level permissions provide more granular control over the permissions of each role. You can override the
permissions specified at the global level by adding or removing roles and permissions. There is a permissions
entry for each page.
Page-level permissions also provide access to the permissions of each action and field on individual pages.
Due to the diverse nature of the office software pages, only a subset of the actions and fields are available for
global configuration; the remainder must be specified at the page level.
NOTE: In the Global Category, there is a Road Segment Editor permission, and a Road Segments
Editor permission.
The Road Segment Editor is the Editor permission for a single road. It allows you to open the Road
Segment Assistant, Site Monitor or Site Editor, select a single road, and click Open.
The Road Segments Editor permission is the Editor permission for multiple roads. It allows you to
open the Site Monitor or Site Editor, select multiple roads, and click Open.
NOTE: In the Global Category, there is a Display Monitor permission. This allows the user to view the
KPI pages in both the MineStar client and web client.
1. Start Supervisor, and on the Contents menu, point to Management and then click Permissions.
2. Use the Product and Category lists to find the page whose permissions you want to change.
3. Select or clear the check boxes for each action and field for the required role, then click Apply.
Permission options
The permission options vary according to the actions and fields on each page. Some pages have very few
options, such as view and edit, while others have multiple actions and fields, such as activate, delete, open,
rename, etc.
Role options
You can specify permissions for each of the defined roles in the office software, such as Mine Controller,
Builder, etc. Use this method of assigning permissions if the higher level methods (Anyone and Inherit)
provide insufficient control.
For any action or field, you can specify that permissions be inherited from global settings. If you select Inherit
for an action or field, then the permissions defined in the global settings apply. You can extend the global per-
missions by selecting further roles as required. If you clear the Inherit check box, the global settings are
ignored.
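The Inherit behaviour described above can be sketched as follows (the function and role sets are illustrative assumptions):

```python
# Illustrative sketch: with Inherit selected, the roles granted at the
# global level are combined with any roles added at the page level; with
# Inherit cleared, only the page-level roles apply.
def effective_roles(page_roles, inherit, global_roles):
    roles = set(page_roles)
    if inherit:
        roles |= set(global_roles)
    return roles
```

For example, for an Edit action whose global permissions grant Mine Controller and Builder, selecting Inherit and adding Maintenance Engineer at the page level grants all three roles.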
Some pages, such as certain monitors, finders, etc., should be accessible by anyone. In these cases, use a
permissions setting of Anyone.
NOTE: For security purposes, you should limit the use of the Anyone permission.
You can include all the required login details in Supervisor so that the office software Clients start auto-
matically. This is the easiest way to set up automatic login but also the most insecure, since it provides
access to the system for anyone who has access to the computer. It is recommended that this procedure not
be used on a production system.
You can start the office software Client from the command line and specify login details and other options to
suit your immediate needs. This bypasses the security risk of having a user name and password pre-
configured in Supervisor, and avoids the extra time required to create configuration files for single-page cli-
ents.
mstarrun client -puser:<userName> -ppassword:<password>
where:
<userName> and <password> are required for a valid office software user account.
You can add further options to the command line as required, for example, the office software page to open
and the page configuration to use. To open the Cycle Assistant using the Drill Cycle configuration, enter the
following command:
mstarrun client -ptryAutoLogin:true -puser:<userName> -ppassword:<password>
-ppageConfiguration:Drill
Refer to the mstarrun Command Reference chapter for more information on the available command line
options for mstarrun.
You can also specify any login parameters, the default system, etc., so that the user does not need to know
specific details about the system. This is useful for casual users of the system who may not have regular
login accounts or who have other specific requirements.
Two steps are required to configure a single page client: creating the office software Explorer preferences file
(eep file); and creating the appropriate command to start the office software Client using that file.
The office software Explorer should start and display the Delay Monitor with the Contents menu disabled.
The easiest way to make use of the single-page preferences file is to create a desktop shortcut. We have
used Delay Monitor in this example.
For examples of the types of configurations that you can create, refer to the ExamplePage.eep file in the fol-
lowing directory:
\mstar\mstarHome\explorer\bin
Introduction
The Fleet databases are an essential and integral part of the office software system, and require regular mon-
itoring and periodic maintenance to ensure optimum performance of the system as a whole.
This chapter describes the various databases involved and details how to perform the essential monitoring and administration tasks.
Chapter goals
By the end of this chapter, you should:
Fleet datastores
During the course of its normal operation, reporting and maintenance procedures, Fleet uses the following
datastores, named as follows:
l Model
l Historical
l Summary
l Reporting
l Template
The default configuration creates all datastores in a single database instance (MINESTAR). Although you can change this to use up to three database instances (MODEL, MINESTAR and SUMMARY), it is not recommended; follow your site support procedures and consult Fleet Customer Support first.
Model datastore
The Model datastore stores information about the Mine Model. It defines all static entities, such as fixed and
mobile equipment, fleets, operators and crews, waypoints, roads, calendars, shifts, etc.
Historical datastore
The Historical datastore records all information generated by the office software. This includes all field net-
work events that are sent to and from machines, field equipment cycles and their components, such as activ-
ities, road segments traveled and any delays that occurred during the cycle. Certain types of historical entities
have an associated retention period, which specifies how long the entities should be maintained in the data-
store before being archived.
Summary datastore
The Summary datastore records KPI information that is calculated by the KPI Summaries component of
MineStar. This information is used mainly for reporting purposes.
Reporting datastore
The Reporting datastore contains the Business Objects Reporting repository, which defines all standard
reports that are available in the office software. These reports are run against either the Historical or Summary
datastore connection.
Template datastore
The Template datastore is an empty data schema that defines the physical structures of database objects,
such as tables and indexes, as defined in the office software metadata.
Database monitoring
Monitoring your databases is an essential part of ensuring that your office software system continues to per-
form correctly and efficiently. The office software provides several tools to help perform these monitoring
tasks.
snapshotDB_<COMPUTER>_H_<HHMM>.txt
Snapshot contents
The database snapshot reports a number of vital statistics to the Fleet DBA:
The database administrator can monitor these statistics and estimate the growth of the database. This
provides opportunity to identify any potential storage or security issues and to schedule database reor-
ganization procedures (see following sections).
Database upgrades
Database upgrades are performed as part of an application upgrade. Use the makeDataStores utility to per-
form this upgrade. This utility is the only mechanism available for making changes to the database structures.
NOTE: You should discuss any proposed database configuration changes with Fleet Customer Sup-
port.
Changes to the database settings of an existing system require the migration of the following datastore entit-
ies to a new datastore:
l Model – This is to preserve the correct mine model configuration of the installation.
l Historical – This is to preserve the production data and associated information.
In situations other than a failure (e.g., a hardware or software upgrade) it may be necessary to switch the
office software to run on Standby databases. This should always be planned well ahead of time and the Data-
base Administrator contacted for assistance.
l As part of the office software administration tasks that are automatically run on a daily basis. These
tasks can also be run manually if desired.
l By running exportDataStores.
Regardless of the method used to back up the database, backup options are configured using Supervisor.
1. On the Contents menu, point to Management and then click Admin Tools.
2. In the Tool list, click exportDataStores.
3. If you want to select all databases, select Database Name.
4. If you do not want all databases, select the check box for each of the databases that you want to
export.
5. If you do not want to zip the exports, select Leave Exports Unzipped. The default action is to zip the
export and then delete the .dmp file.
6. Specify the required output directory, and then click Run. The default output directory is
{MSTAR_BASE_LOCAL}\data.
NOTE: If you are storing your backup files on a remote, or different, server you can add the location to
the Temp DB Output Directory field and move the files at a later time to another location.
1. Start Supervisor, and on the Contents menu, point to Setup and then click System Options.
2. In the Product list, select Platform, and then click System – Workgroup Extensions.
3. Use the various tabs to specify the backup options for your database.
The following guidelines provide a general outline of what to do in the case of a disaster.
Guidelines
l Contact Fleet Customer Support as soon as a problem is noticed. You should not attempt to restore
lost data until you have spoken with Fleet Customer Support personnel.
l If the recommended backup procedures have been followed, data can usually be restored at least up
until the time of the last backup, and frequently beyond this point by using gateway and other saved
files. How much data can be recovered varies on a case-by-case basis.
l Data restoration should only be attempted by a qualified database administrator under close super-
vision from Fleet Customer Support. Upon notification of a problem, a Fleet database administrator will
contact the site and begin the process of recovery.
Archiving data
Archiving data is important for the continued good health of the office software. Disks can fill up over time if
data is not archived correctly and regularly. Archival of data is handled by the cleanExpiredData script.
The archiving process stores the archived data in zip files, in case it is ever needed, and then deletes it from the database.
One of the scheduled tasks that are generated as part of a standard installation and configuration includes the
cleanExpiredData script, which performs data deletion and archival actions for historical entities as specified
in Supervisor.
1. Start Supervisor, and on the Contents menu, point to Setup and then click System Options.
2. In the Product list, select Platform, and then click System – Workgroup Extensions.
3. Click the Data Archiving tab, and use the various fields to specify the data archival options for your
system.
4. In the Data Retention Policy table, click on the rows in the Retention Policy and Retention Period
columns to change the policy and period.
NOTE: The default retention is Delete, as using archive requires you to manually maintain files, and
archived files require a lot of disk space.
The defaults set on the Data Archiving tab are shown in the screenshot above.
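The retention behaviour described above can be sketched as follows (the function and data layout are assumptions for illustration, not the cleanExpiredData implementation):

```python
# Illustrative sketch of retention handling: entities older than the
# retention period are archived to zip files first (Archive policy) or
# simply removed (Delete policy); either way they leave the database.
from datetime import date, timedelta

def expire(entities, retention_days, policy, today):
    """entities: list of (name, created_date). Returns (archived, deleted)."""
    archived, deleted = [], []
    cutoff = today - timedelta(days=retention_days)
    for name, created in entities:
        if created < cutoff:
            if policy == "Archive":
                archived.append(name)   # would be written to a zip file first
            deleted.append(name)        # removed from the database either way
    return archived, deleted
```

Under the default Delete policy nothing is archived; entities past their retention period are simply removed.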
Deleting Data
Data should only ever be deleted via the data archiving process as documented in "Archiving data" on
page 222.
Data should only be manually deleted in extenuating circumstances, and then only under the close super-
vision of a Fleet database administrator.
Introduction
This chapter describes some of the tools and processes that you can use to monitor the operational efficiency
of the office software, and to ensure that issues are diagnosed quickly. It also covers the auditing capabilities
of the office software, and how it can be used to track changes to the mine model.
Chapter goals
By the end of this chapter, you should:
l Understand the different types of snapshot, and be able to submit a snapshot for analysis.
l Understand and be able to configure auditing of the office software.
l Monitor the performance of your system.
Types of snapshot
Fleet generates the following types of snapshots:
System Snapshot
There are three types of system snapshot: User, System, and Standby.
When running Snapshot System from the Tools menu, you have the option to also include DXF files and
include Onboard files in the snapshot.
You can also set a temp directory for the System Snapshot.
There are two types of Operating System snapshot: User and System.
Database Snapshot
System snapshots
System snapshots capture the current state of the office software and the collected files as a compressed zip
file. The contents of System snapshots and User snapshots are the same, however System snapshots are ini-
tiated automatically by the System Scheduler, while User snapshots are initiated manually. The age of the
files collected for System snapshots versus User snapshots is defined by the Lookback Hours value spe-
cified in Supervisor.
Standby snapshots are also initiated automatically by the System Scheduler, and contain a different data set,
as their purpose is to provide enough information to quickly bring up a standby system in case of failure.
System snapshots are saved to the directories specified in Supervisor. See Configuring system directories for
more information on setting up directories.
Operating System snapshots contain the list of Operating System processes that were running when the snap-
shot was initiated, and a list of statistics about any additional volumes that the office software monitors for
available space.
The contents of the System snapshots and the User snapshots are the same, however System snapshots
are initiated automatically by the System Scheduler, while User Snapshots are initiated manually.
Operating System snapshots are saved to the Process Logs Directory specified in Supervisor. See "Configuring system directories" on page 64 for more information on setting up directories.
Database Snapshots
Database snapshots contain an export of the specified database, and a log file which provides details of the
export.
The contents of the System snapshots and the User snapshots are the same, however System snapshots
are initiated automatically by the System Scheduler, while User Snapshots are initiated manually.
Use Supervisor to configure the retention period of snapshots and other files. Two values are specified: the
time before marking any files (indicates that the files are ready to be deleted); and the time before actually
deleting any files.
This configuration applies to numerous file types, not only snapshots. Ensure that the values specified here
are suitable for all file types that are affected. This configuration affects the following file types:
Extension Description
zip Compressed.
1. Start Supervisor, and on the Contents menu, point to Setup and then click System Options.
2. In the Product list, choose Platform and then click System Workgroup Extensions.
3. Click the File Archiving tab, and specify the required periods in the Old Generated Files field.
4. Click Apply to apply the changes.
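The two-stage scheme described above (mark first, delete later) can be sketched as follows (the function name and day-based parameters are assumptions):

```python
# Illustrative sketch: a file is first marked as ready for deletion once it
# is older than the marking period, and only actually deleted once it is
# older than the (longer) deletion period.
def file_status(age_days, mark_after_days, delete_after_days):
    if age_days >= delete_after_days:
        return "delete"        # old enough to actually delete
    if age_days >= mark_after_days:
        return "marked"        # flagged as ready to be deleted
    return "keep"
```

The gap between the two periods gives administrators time to rescue a marked file before it is removed.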
The main cause of incomplete snapshots is incorrect configuration. You should check the settings in Super-
visor to ensure that all directories are correctly specified for the type of server where the snapshot is being
executed, for example, a Database Server, Application Server or client.
If snapshots are not being sent to Fleet Customer Support, there may be a problem with the connectivity con-
figuration or with the scheduled task that triggers the transfer of the snapshots. There may also be a problem
with the creation of the snapshots; if snapshots are not created correctly then they are not copied to the appro-
priate directory ready for transfer to Fleet Customer Support.
1. Start Supervisor, and on the Contents menu, point to Setup and then click System Options.
2. In the Product list, choose Platform and then click System Workgroup Extensions in the Category
list.
3. On the Connectivity tab, ensure that the correct values are specified.
1. Ensure that the appropriate Microsoft Windows Scheduled Task has been created and is enabled.
2. Ensure that the Splinterware System Scheduler is running, and that the appropriate events have been
created. There should be an event with the title sendAllToSupport.
l Right-click the Windows Task Bar and then click Task Manager.
The example screenshot above shows Windows Task Manager displaying various performance data
For information and help on how to use the Windows Task Manager to monitor performance and other aspects
of your system, refer to the Microsoft Windows Task Manager Help.
l On the Contents menu, click Diagnostics Pages, and then click Office Processes in the Platform
group.
The example screenshot above shows Office Process Monitor displaying the server processes and a client
connection
Auditing
The office software includes an auditing function that keeps track of changes to the mine plan for troubleshoot-
ing and security purposes. Audit files can be viewed in any standard text editor, and include a rolling cryptographic hash that detects changes or deletions.
/mstarFiles/systems/main/logs/audit/audit_YY_MM_DD_HHMMSS_mmm.log
New files are generated whenever the file size reaches 15 MB, or after 24 hours, whichever is sooner. These
settings can be configured in Supervisor. Refer to the Supervisor Page Reference chapter for details.
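A rolling cryptographic hash of this kind can be illustrated with a generic hash chain; this sketch is not the office software's actual audit format:

```python
# Illustrative hash chain: each entry's hash covers the previous hash, so
# editing or removing an earlier entry invalidates every later hash.
import hashlib

def chain_hashes(entries):
    prev = ""
    hashes = []
    for entry in entries:
        digest = hashlib.sha256((prev + entry).encode()).hexdigest()
        hashes.append(digest)
        prev = digest
    return hashes

def verify(entries, hashes):
    """Return True if the stored hashes match a recomputed chain."""
    return hashes == chain_hashes(entries)
```

Tampering with any entry changes its hash and every hash after it, so the chain no longer verifies.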
The Audit File viewer displays a list of the actions in the log file on the left side of the screen. Select an entry
to view the details on the right of the screen.
Introduction
This chapter describes the Oracle failover and failback process, types of failover, and the tools used by the
failover process. It also covers the processes for configuring the office software for failover and failback, and
various processes for performing controlled and uncontrolled failovers on various servers.
Fleet uses an N+1 failover architecture, whereby a single failover server is used as the standby server for all
running application servers. The two major sub-systems which may need to be transitioned to the failover
server are the application services and the database.
Chapter goals
By the end of this chapter, you should:
When you start the office software services, the application server name is determined, and shown as a read-
only field in Supervisor on the Server tab, as shown in the screenshot below. This setting is then used by other
processes when they need to determine which machine the office software is running on.
To ensure that the office software is only started on approved servers, and to allow features like snapshots to
determine which machines are valid servers (and so contain the proper information), you can enter all of the
valid server names, (for application, database, standby, and test roles), in the Allowed Server Names field.
If you do not enter any server names, the office software will run on any server.
NOTE: Some roles may be shared (e.g. standby and test may be the same server). It is only neces-
sary to enter the distinct physical server names which will be used. The screenshot in "Failover con-
figuration requirements" on the previous page shows the Server tab in Supervisor:
Database configuration
The office software database configuration reflects the logical role of the given instance, i.e. PRODUCTION,
STANDBY, TEST.
l The Production server is your main server, and is where your production database is stored.
l The Standby server is the server you will use should your production database fail.
l The Test server is the server you will use for testing data, e.g. before a software upgrade, so that you
do not affect your production database.
When configuring the office software's database settings in Supervisor, you need to define server names for
each of the roles which are to be used for that installation (there is one "Exception to the rule" on page 243
which is described later in this section). Which database server the office software actually uses is determ-
ined by which role is marked as active. The screenshot below shows the database configuration tab in Super-
visor.
l Beside each Database Role, double-click the Server column, and enter the server name.
l Once you have defined your database servers, you can select the Active Database Role from the
drop-down list.
NOTE: You should only change the Active Database Role if you want to failover to the Standby
server.
Each role with a defined (non-empty) server is included in the Oracle tnsnames.ora file, which is generated
whenever these settings change. In addition, a MINESTAR_REPORTING entry is generated and is set to the
server defined by the currently active role. This allows the Business Objects universes to be configured to use
an Oracle TNS name which remains constant regardless of which database is being used.
The Oracle tnsnames.ora file is generated on the application server. You must copy the file to
the database server to enable the office software to function correctly.
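As an illustration only (the host name, port, and service name below are assumptions, not values generated by the office software), entries in the generated tnsnames.ora might look like:

```
MINESTAR_PRODUCTION =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = prodserver)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = MINESTAR))
  )

MINESTAR_REPORTING =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = prodserver)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = MINESTAR))
  )
```

Here MINESTAR_REPORTING repeats the address of whichever server the active role defines, which is what keeps the Business Objects connection constant across failovers.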
l In some cases, the same database server is used for both standby and testing. In this case, you only
need to define one role-server entry. To ensure the office software uses the required database, you
need to keep the currently defined active role setting the same, but alter the user name for the four
Oracle instances.
l To make this easier, there is a username prefix setting you can use. The default is ms, as shown in the
Name Prefix field in "Failover configuration requirements" on page 240. This prefix is prepended to
each defined username and password to form the actual values used. Thus only the prefix needs to be
changed in order to use a completely different database. For example, where the standby and test
instances are hosted on the standby server, the active role would remain as STANDBY and the name
prefix changed to either std for standby or tst for test to switch between the two databases.
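The prefix behaviour reduces to simple string concatenation. A minimal sketch (the base names below are hypothetical, not the actual office software usernames):

```python
def effective_name(prefix, base):
    """Sketch of the name-prefix behaviour described above (assumed):
    the prefix is prepended to each defined username/password to form
    the actual value used against the database."""
    return prefix + base

# Default prefix "ms"; switching the prefix selects a different schema set:
assert effective_name("ms", "model") == "msmodel"    # production/standby default
assert effective_name("tst", "model") == "tstmodel"  # test instances on the same server
```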
You can use the command line utility, mstarrun switchActiveDatabase, to change the database settings out-
lined above quickly without the need to start Supervisor. The general syntax is
mstarrun switchActiveDatabase [-r role] [-p prefix].
One or both of the -r and -p options needs to be specified. The role will be one of PRODUCTION, STANDBY,
or TEST.
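The option rules stated above (at least one of -r/-p, and a restricted role set) can be sketched as follows; this illustrates the documented contract, not the utility's actual implementation:

```python
VALID_ROLES = {"PRODUCTION", "STANDBY", "TEST"}

def validate_switch_args(role=None, prefix=None):
    """Mirror of the documented rules for switchActiveDatabase:
    one or both options must be given; the role must be valid."""
    if role is None and prefix is None:
        raise ValueError("specify -r role and/or -p prefix")
    if role is not None and role not in VALID_ROLES:
        raise ValueError("role must be one of: " + ", ".join(sorted(VALID_ROLES)))
    return role, prefix

assert validate_switch_args(role="STANDBY") == ("STANDBY", None)
```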
Standby Directory
You set up the Standby Directory on the System Directories page in Supervisor as shown in the screenshot
below.
l Configure the Standby Directory to point to the Systems directory on the standby server. This is done
using a share set up specifically for that purpose.
Both the standby and production systems must use the same drive letter to map the standby directory share. The production system points to the standby system and the standby system points to the production system. This ensures that when running the standby application server, the production server mirrors the standby application server, so any configuration changes made are carried across and failback is simplified.
The Standby system update frequency setting controls how often the standby system configuration is updated with a copy of the configuration information, as well as cycle and delay information, from the live system.
Set the standby system update frequency on the Failover tab in Supervisor, as shown in "Failover con-
figuration requirements" on page 240 by
1. Point to Options > System Options, and from the Product list select Platform.
2. From the Option Sets list, select System - Enterprise Extensions.
3. Click the Failover tab, and select the frequency for the standby server synchronization from the
Standby System Update Frequency list. The default is to update the standby system configuration
every 15 minutes.
At each Snapshot Frequency interval, a standby snapshot file is generated, saved, and copied to the standby server, where the model database, cycle and delay information, and the configuration information are unpacked. The snapshot includes everything that the standby system update generates, and also refreshes the model database with a copy from production and refreshes the kpisummaries dimension tables.
Select the snapshot frequency for the live system from the Snapshot Frequency list. The default is to run a snapshot every hour.
Because a controlled failover is planned, there is the opportunity to create a standby snapshot at the point of cutover to ensure that the latest production information is copied to the standby server. In this case, the creation of the snapshot and its unpacking on the standby system is performed manually rather than via scheduled tasks. This process is also used for system upgrades.
The application server is where the office software is running. If the server fails, you need to do an application
server failover.
1. If the office software is still running, shut down the office software on the production server.
2. Disable the field comms network interface on the production server, and start the field comms network
interface on the standby server.
3. On the production server, stop the system scheduler and disable the Windows scheduled task which
restarts the system scheduler if it is not running.
4. Start the office software on the standby server.
5. On the standby server, run mstarrun makeScheduledTasks AppServer.
The database server holds the office software database. If the database server fails, or the database disk drive fails, or another database failure occurs, you need to do a database server failover.
If both the application and the database servers fail, or both are on a single server that fails, you need to do a
system failover.
1. If the office software is still running, shut down the office software on the production server.
2. Disable the field comms network interface on the production server, and start the field comms network interface on the standby server.
3. On the production server, stop the system scheduler and disable the Windows scheduled task which
restarts the system scheduler if it is not running.
4. Switch the active database role using the command
mstarrun switchActiveDatabase -r STANDBY on the standby server.
5. Start the office software again on the standby server.
6. On the standby server run mstarrun makeScheduledTasks AppServer DbServer.
The main difference between the uncontrolled and controlled failover is that during a controlled failover you
have the opportunity to take a standby snapshot after the office software has been stopped. The sequences
for controlled failovers follow.
1. If the office software is still running, shut down the office software on the production server.
2. Run a standby snapshot using the command
mstarrun snapshotSystem STANDBY on the Production server.
If the standby directory has been correctly configured, and the failover database is running, the standby
system will be fully updated with the latest information from the production system. At this point, the
procedure is identical to that for an uncontrolled failover.
3. If you are performing a controlled application server failover, continue to "Controlled Applic-
ation server failover" below. If you are performing a controlled database server failover, go to
"Controlled Database server failover" below. If you are performing a controlled system failover,
go to "Controlled system failover" on the next page.
4. Disable the field comms network interface on the production server, and start the field comms network interface on the standby server.
5. On the production server, stop the system scheduler and disable the Windows scheduled task
which restarts the system scheduler if it is not running.
6. Start the office software on the standby server, and run mstarrun makeScheduledTasks
AppServer.
4. Disable the field comms network interface on the production server, and start the field comms network interface on the standby server.
5. On the production server, stop the system scheduler and disable the Windows scheduled task
which restarts the system scheduler if it is not running.
6. Switch the active database role using the command
mstarrun switchActiveDatabase -r STANDBY on the standby server.
7. Start the office software again on the standby server.
8. On the standby server run mstarrun makeScheduledTasks AppServer DbServer.
1. For application server failback, it extracts configuration information from the standby snapshot and
applies it to the office software on the production system.
2. For database server failback, it
l replaces the current model database with the one from the snapshot on the standby system.
l merges historical and summaries data from the standby system back into the production data-
base using the mstarrun migrateStandbyDataToProduction task.
1. Shutdown the office software and create a standby snapshot on the standby server. The snapshot file
(.zip file) is put into \mstarFiles\systems\main\admin.
2. Copy the snapshot (.zip) file from the standby server to the production server.
3. Disable the field comms network interface on the standby server.
4. On the standby server, stop the system scheduler and disable the Windows scheduled task which
restarts the system scheduler if it is not running.
NOTE: You should also now start the comms server process on the production server in order to avoid
the field network being flooded by retry messages.
Troubleshooting
This section explains the procedure you must follow before calling Fleet Customer Support.
Configuration
The following checks must be made during the configuration of the Failover STANDBY database envir-
onment:
l The tnsnames.ora file must have correctly defined service names for MINESTAR_PRODUCTION
and MINESTAR_STANDBY on both database servers and these must be identical to each other.
NOTE: A problem can occur when using text editors other than notepad.exe to manually change the contents (e.g. Emacs will replace <end-of-line> with another <ctrl> character, and this renders the file useless to the SQL*Net client and/or the Oracle server).
Database Failover
The following checks should be made when validating the operation of the snapshotSystem STANDBY task:
1. Connect to the STANDBY Model using Sql*Plus and execute the following:
2 group by object_type;
where the times indicated are those of when the 'snapshotSystem STANDBY' was run.
2. Obtain the number and latest endtime for a CYCLE, which should be close to the time the 'snap-
shotSystem STANDBY' was run:
COUNT(*)
----------
57
MAX(ENDTIME)
------------------
30-JAN-09 09:41:36
3. Obtain the number and latest starttime for a DELAY which should be close to the time the 'snap-
shotSystem STANDBY' was run:
COUNT(*)
----------
80
MAX(START1)
------------------
30-JAN-09 09:36:50
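Queries of the following shape could produce the outputs shown in the steps above. Only USER_OBJECTS is a standard Oracle view; the CYCLE and DELAY table and column names are assumptions inferred from the output headings:

```sql
-- step 1: object counts grouped by type
SELECT object_type, COUNT(*) FROM user_objects GROUP BY object_type;

-- step 2: number and latest end time of cycles (assumed table/column names)
SELECT COUNT(*), MAX(endtime) FROM cycle;

-- step 3: number and latest start time of delays (assumed table/column names)
SELECT COUNT(*), MAX(start1) FROM delay;
```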
Database Failback
The following checks should be made when validating the database failback operation:
COUNT(*)
----------
77
COUNT(*)
----------
77
1. Click Start, and in the Search Programs and Files box, enter dcomcnfg.
3. Expand the Console Root folder, and then the Component Services > Computers > My Computer
> Distributed Transaction Coordinator folders to display Local DTC as shown in the screenshot
above.
4. Right-click Local DTC and click Properties.
5. Click the Security tab.
6. Select all check boxes, as shown in the screenshot below.
Introduction
Fleet uses calendars to divide time into periods, which are further divided into shifts. Different mines might
use different start times for different periods, and different numbers of shifts for each period. You can also com-
bine 8-hour shifts and 12-hour shifts in the same period.
The periods and shifts that comprise Calendars are used to create rosters, and the various crews defined in
the office software are then assigned to these rosters.
Chapter goals
By the end of this chapter, you should:
Assumptions
This chapter assumes that you know and understand what the site requirements are for configuring periods
and shifts.
Understanding calendars
Calendars are comprised of periods, which are further divided into shifts. Calendars have the following char-
acteristics:
l Each calendar can define only one type of time period, but you can define any number of calendars
l Each time period has only one start and one end, with no gaps between these times
l Time periods are defined with millisecond precision; the display precision depends on the specified
format
l Time periods in any single calendar cannot overlap
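The overlap and gap rules above can be checked mechanically. A small sketch (a hypothetical helper; periods here are simple (start, end) pairs):

```python
def periods_are_valid(periods):
    """Check the calendar rules described above: consecutive periods must be
    contiguous (no gaps, no overlaps) and each must end after it starts."""
    periods = sorted(periods)
    for (s1, e1), (s2, e2) in zip(periods, periods[1:]):
        if e1 != s2:  # a gap (e1 < s2) or an overlap (e1 > s2)
            return False
    return all(s < e for s, e in periods)

assert periods_are_valid([(0, 8), (8, 16), (16, 24)])   # contiguous shifts
assert not periods_are_valid([(0, 10), (8, 16)])        # overlapping shifts
```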
Defining calendars
To define a Calendar you need to define the time periods that comprise that calendar. To do this, you specify
the start time for each of the required periods, and from them derive the end times. From an operational point
of view, this can be extended to creating the shifts and then the rosters that are used by the office software in
everyday operation.
The process for setting up calendars and reporting periods in the office software can be summarized as fol-
lows:
1. Specify the default number of shifts per day and the names for those shifts
2. Specify the start parameters for periods and the names for those periods
3. Specify a range of dates for which to create shifts
4. Create rosters for the shifts
The first two steps are performed in Supervisor and are described below. Steps 3 and 4 are performed in the
office software. Refer to the Fleet User Manual for instructions on specifying date ranges and creating rosters.
Defining periods
The office software provides default values for all period properties, but you should check that these default
values are suitable for your mine site, and update them as necessary.
1. Start Supervisor, and on the Contents menu, point to Management and then click Calendar
Defaults.
2. Display the Periods tab, and specify the various period values to suit your mine.
3. Click Period Naming and specify the required format for each of the periods, and then click Apply.
Defining shifts
The office software uses 12-hour shifts by default when creating calendars, but you can also create 8-hour
shifts for any number of days of the week. For example, you can create 12-hour shifts for Monday to Friday
and 8-hour shifts for the weekend.
1. Start Supervisor, and on the Contents menu, point to Management and then click Calendar Defaults.
2. Display the Shifts tab, and specify the various shift values to suit your mine.
3. Select the minimum number of months ahead that shifts will be scheduled for.
4. Click Shift Naming and specify the required format for each of the shifts, and then click Apply.
com.mincom.util.calendar.Config.Default_Months.offset=-3 days
com.mincom.util.calendar.Config.Default_Months.name=Month {1, date,MMM yyyy}
You can use the following numbers and change the format to extract the relevant parts of the date for forming
the name of the time period.
0 - start time
1 - finish time
2 - 1 millisecond after the finish time (in case the shift ends at midnight)
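The name format shown above uses Java-style message formatting; the placeholder number selects which timestamp is formatted. A Python sketch of the assumed semantics:

```python
from datetime import datetime, timedelta

def format_period_name(index, fmt, start, finish):
    """Assumed semantics of the numeric placeholders above:
    0 = start time, 1 = finish time, 2 = 1 ms after the finish time."""
    values = {0: start, 1: finish, 2: finish + timedelta(milliseconds=1)}
    return values[index].strftime(fmt)

start = datetime(2009, 12, 1, 6, 0)
finish = datetime(2009, 12, 31, 23, 59, 59, 999000)
# Placeholder 1 gives the finish time; placeholder 2 rolls a period
# ending at midnight into the next day (and here, the next year):
assert format_period_name(1, "%b %Y", start, finish) == "Dec 2009"
assert format_period_name(2, "%Y", start, finish) == "2010"
```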
CONTENTS=/explorer/bin/client.eep,minestar.pitlink.service.awareness.Config,
/explorer/bin/supervisor.eep,
com.mincom.util.calendar.Config,
com.mincom.works.cc.cycle.test.Config,com.mincom.integ.cat.equip.Config,
/MineStar.properties, /Versions.properties
This section is not required for a successful install of MineStar or Oracle. Please contact
your Caterpillar Support team for more information and assistance with installing this
product.
Scope
This section contains information and instructions for implementing Oracle RMAN.
Audience
This section is for the following:
These figures are based on a 21-day backup retention policy and a 150 GB database, with the archive log enabled for the database. The required disk space will change depending on your database size.
1. Click Start > All Programs > Oracle - OraDb11g_home1 > Configuration and Migration Tools >
Database Configuration Assistant. The Welcome screen displays.
2. Click Next.
5. In the Global Database Name enter RMAN. The SID field is dynamically filled.
6. Click Next.
A dialog will display, as shown below, indicating that the password does not satisfy Oracle
recommended password complexity. You can ignore this message and click Yes.
14. In the Flash Recovery Area field, enter or browse to the path for the flash back recovery area.
15. In the Flash Recovery Area Size field enter 3696 and ensure M Bytes is selected.
17. Ensure the Oracle Text check box is cleared, and the Enterprise Manager Repository check box is
selected.
18. Click Standard Database Components.
19. Ensure only the Oracle JVM and Oracle XML DB check boxes are selected. Clear the other check
boxes.
20. Click OK to close the dialog.
21. Click Next.
This screen is for informational purposes only. You do not need to do anything on this screen.
The Database Configuration Assistant screen displays, showing the creation progress.
Once the database has been created the following screen displays. Note that this screen is an example only.
You will have a different Database Control URL.
NOTE: Archiving the REDO logs should only be enabled once an Oracle backup solution has been
installed and configured. It is the customer’s responsibility to configure and administer this
backup.
Set db SID
DOS>SET ORACLE_SID=MINESTAR
NOTE: This is the Default path before enabling the archive log. You need to change to the path where
you want to store the archive logs.
OR
LOG_MODE
------------
NOARCHIVELOG
Now that you have checked that the database is in NOARCHIVELOG mode, enter the commands below to change the database into ARCHIVELOG mode and set the location for saving the archive logs.
Database closed.
Database dismounted.
Database mounted.
System altered.
NOTE: This path can be changed according to where you wish to store your archived log. Ensure that
you have created this directory before you run the alter statement above.
When entering the alter statement the folder path should not contain any spaces, otherwise
when you shutdown you will get the following error.
Check whether or not the INIT<dbname>.ORA file has been created. Open this file using notepad and
check it contains the entry
Database altered.
Database altered.
LOG_MODE
------------
ARCHIVELOG
OR
Create the RMAN repository in the Catalog (RMANDB) and register the target database (MINESTAR)
in the repository
Log in to the RMAN database and follow the steps below to configure the database.
1. Create tablespace for the RMAN user in the catalog database (RMAN).
SQL> CREATE TABLESPACE RMANTS
DATAFILE 'D:\mstarData\oradata\RMAN\RMAN01.DBR' SIZE 200M REUSE AUTOEXTEND ON
EXTENT MANAGEMENT LOCAL
Tablespace created.
User created.
Grant succeeded.
4. Connect the catalog database using the RMAN user created above.
SQL> exit
No rows selected.
NOTE: "No rows selected" is correct because you have not created a catalog yet.
5. Connect the catalog database using the RMAN user created above
D:\> RMAN CATALOG RMAN/RMAN@rmandb (the password is case sensitive)
RMAN> Exit
no rows selected.
NOTE: "No rows selected" is correct because the minestar database is not yet registered in the
rmandb catalog database.
System altered.
RMAN>LIST BACKUP
This command provides output only when you have done a backup.
Performance
In the Oracle System Global Area (SGA), the "large_pool_size" should be kept high for optimum performance
of the RMAN backup. Since RMAN runs on multiple executions of parallel processes, the large pool allocation
heap is used in shared server systems for session memory, by parallel execution for message buffers, and by
backup processes for disk I/O buffers.
The following processes are ways in which you can improve the RMAN backup process.
You can modify the large_pool_size by entering the following commands if you have not already set the pool
size.
Open the initSID.ora file, modify the large_pool_size to what you require, and save the file. It is recommended that you set the large_pool_size to half of your physical RAM (up to a maximum of 8 to 12 GB).
Enter:
Sql>startup;
NOTE: This command is only available if you are using Oracle Enterprise Edition.
Enter
using file 'D:\oracle\product\11.2.0\dbhome_1\oradata\minestar\minestarrman_change_track.f' reuse;
Enable parallelism
If you are using Oracle Enterprise edition you are able to add multiple channels to complete the backup pro-
cess as quickly as possible.
a. SessionVariables.bat
b. Run_Full_Weekly.cmd
c. Run_INCR_Daily.cmd
d. Run_Archive_Backup.cmd
e. Run_FULL_RMAN_Weekly.bat
f. Run_INCR_RMAN_Daily.bat
g. Run_Archive_RMAN_Backup_Every04Hrs.bat
D:\oracle\product\11.2.0\admin\MINESTAR\rman\Full_RmanWeekly
_Log\
D:\oracle\product\11.2.0\admin\MINESTAR\rman\Incr_RmanDaily_Log\
D:\oracle\product\11.2.0\Rman_Backup_Files\FULLDB_with_Archivelog_Backup\
D:\oracle\product\11.2.0\Rman_Backup_Files\Control_File_During_FullDB_Backup\
Stores the control file backup which is taken during a full database backup.
D:\oracle\product\11.2.0\Rman_Backup_Files\IncrimentalDB_With_Archivelog_Backup\
D:\oracle\product\11.2.0\Rman_Backup_Files\Control_File_During_IncrDB_Backup\
D:\oracle\product\11.2.0\Rman_Backup_Files\Controle_File_During_Archivelog_backup\
D:\oracle\product\11.2.0\Rman_Backup_Files\Enabled_Archive_Files\
ECHO OFF
REM /********************************************
REM
REM
REM *
REM *
REM ********************************************************
SET timestr=%d:~6,4%%d:~3,2%%d:~0,2%%t:~0,2%%t:~3,2%
SET timestr2=%d:~6,4%%d:~3,2%%d:~0,2%%t:~0,2%%t:~3,2%%t:~6,2%
SET datestamp=%d:~3,2%%d:~0,2%%d:~6,4%
REM /********************************************
SET ORACLE_SID=MINESTAR
SET TARGET_UID=sys
SET TARGET_PWD=mine1star
SET EXEDIR=D:\oracle\product\11.2.0\admin\MINESTAR\rman\Run_Full_Weekly.cmd
SET RMANLOG=D:\oracle\product\11.2.0\admin\MINESTAR\rman\Full_RmanWeekly_
Log\FullWeekly_Bkp_%ORACLE_SID%_%timestr2%.log
SET EXEDIR_INCR=D:\oracle\product\11.2.0\admin\MINESTAR\rman\Run_INCR_
Daily.cmd
SET INCRRMANLOG=D:\oracle\product\11.2.0\admin\MINESTAR\rman\Incr_RmanDaily_
Log\IncrDaily_Bkp_%ORACLE_SID%_%timestr2%.log
set EXEDIR_ARCHIVE=D:\oracle\product\11.2.0\admin\MINESTAR\rman\Run_Archive_
Backup.cmd
set ARCHIVELOG=D:\oracle\product\11.2.0\admin\MINESTAR\rman\Archive_Log_
Every4hours\ArchiveLog_Bkp_%ORACLE_SID%_%timestr2%.log
SET RMAN_CATALOG_DB=rmandb
SET RMAN_UID=RMAN
SET RMAN_PWD=RMAN
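The %d:~6,4%-style expressions in SessionVariables.bat are cmd.exe substring extractions (%var:~offset,length%). Assuming d holds the date as DD/MM/YYYY and t the time as HH:MM:SS (both layouts depend on the Windows locale), the equivalent slicing is:

```python
# Assumed input layouts (locale-dependent):
d = "30/01/2009"  # DD/MM/YYYY
t = "09:41:36"    # HH:MM:SS

timestr   = d[6:10] + d[3:5] + d[0:2] + t[0:2] + t[3:5]  # YYYYMMDDHHMM
timestr2  = timestr + t[6:8]                             # YYYYMMDDHHMMSS
datestamp = d[3:5] + d[0:2] + d[6:10]                    # MMDDYYYY

assert timestr2 == "20090130094136"
```

These strings are then embedded in the RMAN log file names, so each run of the backup scripts writes to a unique, sortable log.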
RUN_Full_Weekly.cmd
run {
crosscheck backup;
report obsolete;
Run_FULL_RMAN_Weekly.bat
REM Oracle RMAN Weekly FULL Backup Started, review backup log after successful completion.
CALL SessionVariables.bat
Run_INCR_Daily.cmd
run {
report obsolete;
Run_INCR_RMAN_Daily.bat
REM Incremental backup started, review the backup log after successful completion.
call SessionVariables.bat
Archive_Backup.cmd
run{
Archive_RMAN_Backup.bat
REM Oracle RMAN Every 04hrs Archive Log Backup Started, review backup log after successful completion.
CALL SessionVariables.bat
Enabling RMAN
To enable RMAN functionality on your system, do the following.
1. Open Supervisor.
2. Click Options > System Options.
3. From the Product drop-down list select Platform.
4. From the Option Sets list select System - Workgroup Extensions.
5. Click the RMAN Configuration tab.
6. Select the Enable RMAN on this System check box.
7. Click Apply.
Configuring RMAN
Configuring RMAN directories, backup type and schedule settings, is done in Supervisor.
1. Click Options > System Options, from the Product drop-down list select Platform, and in the Option
Sets list select System - Workgroup Extensions.
2. Select the RMAN to be scheduled as part of makeScheduledTasks check box.
3. Click Apply.
Scheduled on alternate days (Tuesday, Thursday, Saturday) at 5 PM local time.
Scheduled every 6 hours, from Monday 12:01 AM local time to Sunday 1 PM local time.
1. Click Options > System Options, and from the Product list select Platform.
2. Click System - Workgroup Extensions.
3. Click the Rman Configuration tab.
4. In the RMAN Full DB Backup Schedule Time field, enter the time you wish the full backup to occur.
5. From the RMAN Full DB Backup Schedule Frequency drop-down list select whether you wish to do
the full backup weekly or fortnightly.
6. In the RMAN Incremental DB Backup Schedule Time field, enter the time you wish the incremental
backup to occur.
7. From the RMAN Incremental DB Backup Schedule Frequency drop-down list select how often you
wish the incremental backup to occur.
8. In the RMAN Full Archivable Backup Schedule Time field, enter the time you wish the archive
backup to occur.
9. From the RMAN Archivable DB Backup Schedule Frequency drop-down list select how often you
wish the archive backup to occur.
10. Click Apply.
1. Click Tools > Toolkit, and from the Group list select All.
2. From the Tool list, select rmanTools.
3. Select the Rman backup type.
4. If you want to have the files copied to another location, click the ellipses button beside the RMAN
Base Directory field, select the location and click Open. The files are copied here as an additional pre-
caution.
5. Click Run.
Introduction
This chapter describes the Supervisor pages and the various configuration options available. Because Super-
visor does not run against a system bus, most configuration changes require a restart of the office software
before any changes take effect. This means that you can make numerous configuration changes without
affecting the system that is currently running. Some configuration options require additional changes or pro-
cesses before they take effect, such as updating datastores or recreating on-board machine files. These addi-
tional processes are discussed as part of the configuration options that require them.
Chapter goals
This chapter aims to provide a reference for the pages and configuration options in Supervisor, examples and
default values where applicable, and basic usage information.
Only Fleet Consultants, Builders, or other suitably qualified personnel should make changes
to the settings in Supervisor.
Diagnostics pages
The Diagnostics Pages of Supervisor provide various monitoring and troubleshooting tools to help in the ana-
lysis of any problems that may arise with the office software.
Field Communications
Use the Field Communications Monitor to monitor all inbound and outbound field communications. This is a
useful tool for analyzing field communications problems.
l In Supervisor, on the Contents menu, point to Diagnostics, and then click Field Comms.
The following table describes the information displayed on the Field Communications Monitor.
Actions Menu
Some of these actions are also available on the toolbar.
Toggle Machine Address - Switches between displaying the name and the IP address of the machines displayed.
Replay - Replays the current gateway file. Used mainly for troubleshooting and testing.
Locale - Displays the Set Time Zone dialog box. Used to specify the time zone of the gateway files being used in Replay mode. Used mainly for troubleshooting and testing.
Save messages to an XML File - Saves the selected communications messages to an XML file for analysis. Used mainly for troubleshooting and testing.
Grey - the data has been received and the received time is good.
Blue - the data has been received and the received time is slightly out of spec, but acceptable.
Red - the data has not been received. This could mean that the office time and the field times are NOT synchronized.
No Response (% total) - The percentage of messages sent from the office to the field for which no response was received. This can indicate areas of poor coverage or failing radios.
RTT (seconds) - The round trip time for messages sent from the office and acknowledged by the field. This can help understand coverage and congestion issues with the network as a whole, or identify individual machines with communications issues.
To help identify and diagnose problems with communications, these statistics can be aggregated by a num-
ber of factors, including those described below.
Include responses - The display will show responses given to filtered messages.
Page Configuration - Displays the currently selected page configuration.
Field Comms tab:
Arrival Time - Indicates the time that the message passed through the gateway, either inbound or outbound.
Timestamp - The time that the message was generated, either onboard the machine or in the office.
Process charts
NOTE: The Process Charts page is intended as a diagnostic tool for Level 2 and 3 support staff.
Use the Process Charts page to display the various charts that are produced by the office software. These dis-
play performance and other aspects of the office software over time. This data comes from log files, so the log
files you keep dictate what can be displayed. You can open specific records for the various office software
components or all of the available graphs for the current day. These graphs are useful analysis tools when per-
forming general system troubleshooting.
l Start Supervisor, and on the Contents menu, point to Diagnostics and then click Process Charts.
l A standard report run at a time of peak resource usage by the system, such as at shift change.
l A custom report run at any time that does not take advantage of appropriate indexes.
Toolbar buttons
The following table describes the buttons displayed on the Process Charts toolbar.
Item Description
Open - Able to open log files, XML files produced by the Performance Test Bench (PTB), and system log files.
Open graphs for today - Opens and displays all log files for the current day.
Open graphs for MineTracking - Opens and displays all log files for MineTracking and merges them.
Open All - Opens all of the syslog files. These are files which record every command that has been run. More detail is given in "System Log frame" on page 305.
NOTE: When you initially open a log file, if the directory that Supervisor was started in is called "logs",
the system will look there. If not, it will go to a directory where log files are being written by the current
system. However, once you have a log file open, opening further log files will default to that directory.
This facilitates its use as a diagnosis tool "after the fact".
If you open a .zip file, and that .zip file is a system snapshot, the page will extract the logs directory
and show the MineTracking log. Further opening of files will then work against the temporary dir-
ectory that the log files were extracted to.
Frames
This section describes the frames, or windows, that you can display on your screen. Some of them have been briefly described in the "Toolbar buttons" on page 302 section.
Click the Open All (system log) button on the toolbar. The system log frame shows the lines from the system log files. You can use the time synchronization features across frames to see what behavior in MineTracking, for example, corresponded to which particular command, such as a cycle recalc.
The check boxes on the left of the frame allow you to turn on and off the particular things you want to display.
Legend frame
Click the legend button on the toolbar. The legend frame has the following tabs at the time of writing.
l Lines
l Boxes
l Lag
l Events
Lines
Type Description
Dark red: The maximum heap size set for the process. This is approximately the highest value total memory can go to.
Boxes
Type Description
Lag
Type Description
ECF is used mostly to cache Machine and Person entities of the mine model in the system.
ECF lag most commonly occurs when an ECF object is saved to or read from the database server. ECF lag occurs less commonly (very infrequently) when there is lag within the ECF system itself, between the primary cache in MineTracking and secondary caches associated with other services, or clients, accessed over the server-side bus. ECF lag is almost always indicative of lag/latency occurring at the Database layer, or a delay occurring in the RMI Bus.
Events
Type Description
Events in context
There can be a slowServer event, for example, from the JobService talking to MineTracking, which triggers a Hibernate event in MineTracking. If the database server is slow, this causes a slowHibernate event, and as the response from MineTracking back to the JobService is slow, this will also show as a slowServer event.
Keyboard shortcuts
The following table describes the keyboard shortcuts available in the Legend frame.
Item Description
The PTB logs are used to try to locate performance problems in the system.
NOTE: You can only look at PTB files if the Performance Test Bench has actually been run in the past.
When you open a PTB log, the window has the following tabs.
l Hibernate Collections
l Hibernate Entities
l Hibernate Queries
l Notification
l RPC (Remote Procedure Call) Invocations
l Caches
Hibernate Collections
The lines on this tab show variances in the number of objects used to represent various properties of domain objects. When a line goes suddenly above then suddenly below the baseline, the value increased then decreased. Being continuously above the baseline is a sign of a growing collection. As the variations can be extremely large, there is a dampening factor which can be set with the Zoom In and Zoom Out buttons – a higher number means the lines have less variation (are less "wiggly"). This is so you can make the graph readable.
Tooltips on the row headers tell you exactly what number that row represents, as the short headers can be
ambiguous.
Hibernate Entities
This tab represents domain objects. The graph on this tab shows how many domain objects are being worked with in a time period; bigger circles represent more memory. Time increases towards the right. Each circle has a tooltip giving the exact numbers and time.
Hibernate Queries
This tab is a summary of queries which were made to Hibernate across the entire PTB monitoring period. The
histogram represents the total time taken for that sort of query, and a vertical tick across the bar, usually at the
very left side of the bar, shows the average. A tooltip on the histogram gives the summarized details.
Notification
The notification system queues items for delivery in various buffers. These graphs show the size of those
queues. They are almost always empty, which is why you will not see much on your screen. Sustained
queues of any type are a problem.
RPC (Remote Procedure Call) Invocations
MineTracking exposes various services. The PTB monitors how long those services take, how much data they return, and how much time was spent in the database.
Each RPC has three graphs. The name of the RPC is in bold under the triplet. Each graph is a histogram of
good numbers (green, at the top) to bad numbers (red, down the bottom).
Red result size values suggest the RPC might be returning more information than it needs to; red call duration and JDBC duration values suggest the RPC is taking a long time because it is monitoring a lot of services in the database.
Tooltips on the histograms give summaries, including average, minimum and maximum values.
Caches
The office software has many caches which are managed by Hibernate. These vary in size greatly, so they
are hard to graph.
The names down the left are the names of the caches. The three towers next to the name represent the minimum, median and maximum sizes of the cache in memory on a log scale, so a slightly taller tower indicates a much bigger cache in memory. When the numbers get very big, the tower goes red at the top. Tooltips on the name and towers indicate the exact minimum, median, and maximum values.
To the right of the towers are icons representing the number of accesses to the cache for each of the sample
periods, with the height of the icon representing the number of accesses on a log scale. The icon is colored red
and green in proportion to the number of hits and misses on the cache – so a large icon that is all green means
a lot of "things" are being looked up, and they are always there.
A small icon that is all red means that not many "things" are being looked up, and they are never there. A gray
block means the cache was not used.
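The log-scale sizing and green/red hit-miss proportion described above can be sketched as follows. This is an illustrative sketch only; the class and method names are hypothetical and this is not the product's source code.

```java
// Illustrative sketch of log-scale tower height and hit/miss coloring
// (hypothetical names, not the product's source code).
public class LogScaleBar {
    // Height of a cache "tower" on a log scale: a cache ten times bigger is
    // only one unit taller, so wildly different sizes fit on one chart.
    public static double towerHeight(long cacheSize, double unitPixels) {
        return Math.log10(cacheSize + 1) * unitPixels;
    }

    // Fraction of an access icon to color green: hits / (hits + misses).
    // All green means lookups always succeed; all red means they never do.
    public static double greenFraction(long hits, long misses) {
        long total = hits + misses;
        return total == 0 ? 0.0 : (double) hits / total;
    }
}
```

On this scale a cache of 999 entries draws only three units tall, while a cache of 9 entries draws one unit tall, which is why only a slightly taller tower indicates a much bigger cache.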
Log file frame
The log file frame has two interrelated parts - the "Graphs" on the facing page and the "Log file lines" on page 317. The splitter bar between them is one-touch collapsible. The option sets (check boxes) on the left control which graph components are drawn, and which log file lines are displayed, respectively.
Graphs
The graphs consist of a horizontal axis (time), vertical axis (scale) and a lot of colored information in the
middle.
The vertical scales of maxmem, used and total are all set to the same value so that you can compare them meaningfully. Otherwise, the vertical scale is set to that of the line whose check box is selected on the left. This must be a tag which is drawn as a line (see the "Lines" on page 305 table to see which ones they are).
Usage tips
l Left-clicking on a tag turns it on or off, but if you right-click on a tag, you can set it as the scale without
changing whether it is on or off.
l The line for which the vertical scale is set is the one for which tooltips will show the current value when
you hover the mouse over the graph. To make sure you know what you're looking at, a pink dot (oth-
erwise known as the sparkle) displays on that line in the position of the value being displayed.
l Hovering over the horizontal axis displays a tooltip showing you the time that position corresponds to. This helps you place the mouse when the X axis divisions are not detailed enough.
l A single click on the graph selects that time in the log file display. Double-clicking sets all log file
frames in the window to that time. Triple-clicking looks in the log directory and opens all files which
have anything for that time, and then sets them to that time.
l You can drag on the graph to select a period of time. This area will have a gray background.
l Right-clicking on the graph displays a context menu with the following options.
l Show log file for this date. This is the same as simply clicking on the graph except it will not
have any effect on any selection you have made.
l Export selected lines. If there is a selected area, export those lines to a file.
l Copy selected lines. If there is a selected area, copy those lines to the clipboard.
l Go to next error. This is the same as if you clicked on the next error dot chronologically.
l All charts show this date. All open log file frames select this date on the graph and in the log
file.
l Open all log files for this date. Looks in the logs directory, finds all logs which include this
date, opens the file and displays all the logs as at this date.
l Complete logs for this process. Opens all other log files for this process and merges them
with this graph.
l Zoom on selection. Changes the graph so that only the selected portion of the graph is shown. The scroll bar below the graph enables you to move along the graph. When you are zoomed in, clicking on a log file line outside the zoomed area will move the zoom to that area.
l Zoom out. If you are zoomed in, zooms back out to see the whole graph.
Log file lines
The log files display shows every line from the loaded log files. You may find that there are files too big to be loaded. If you have difficulty fitting files into memory, close down all the internal windows and load the big ones first.
Usage tips
l Clicking on a line in the log file selects it and shows the location of that line on the graph above.
l You can use shift-click to select a range of lines. The first selected line will be indicated on the graph.
l Hovering over the log file lines displays a (potentially multi-line) tooltip with the full content of the line.
l Right-clicking on the lines displays a context menu with the following options.
l Go to next error. Moves you to the next error. You can also just press e on your keyboard.
l Export all lines to file. If this graph came from multiple files, and you want to make a single
physical file for this process, you can save all lines to a file using this command.
l Export displayed lines to file. Exports only the selected lines to a file.
l All charts show this date. All charts will show the date of the first selected line.
l Open all log files for this date. Opens all files whose timespan crosses the date of the first
selected line and sets them to that date.
l Complete logs for this process. The same as typing c on your keyboard.
l All files to beginning of lag. If this line represents a lag (see the legend, or "Lag" on page 307, for which ones they are; they will mostly be ones that say “took 3,214 millis to...”), then all open graphs are set to the timestamp when the lag started. This is useful because although you can see when a lag ends and what the consequences were in the middle, it is more difficult to see exactly what was happening at the start.
System log
The System Log page displays a log of recent office software activities, such as when a page was opened,
when an upgrade was performed or when various mstarrun targets were executed, etc. The log displays
details for the current month.
l Start Supervisor, and on the Contents menu, point to Diagnostics and then click System Log.
Diagnostics Tools
The Diagnostics Tools page provides access to a number of mstarrun targets that are useful for performing
system diagnostics. Refer to the "" on page 512 chapter for details and usage of these targets.
Management pages
The Management Pages of Supervisor provide various tools for setting up default values for the system, and
for performing routine maintenance tasks.
Software updates
Use the Software Updates page to activate software builds and updates.
l Start Supervisor, and on the Contents menu, point to Management and then click Software
Updates.
The following table describes the information displayed on the Software Updates page.
Service Packs: Lists the service packs available for the installed build of the office software.
Updates: Lists the installed updates that are available for the office software which are not included in the currently selected service pack. Details of each update, including whether or not they have been applied, are listed below the Updates table.
Source Details tab: Specifies the file the update comes from, the machine the update was built on, the time of the build and who built the update.
Extension Selection Panel – Extensions: Lists the office software extensions that are available in the current system. Only those extensions that are listed in the Include panel are actually started when the system starts.
Install Update button: Opens a Browse dialog box so that you can install further service packs or updates to the system.
Permissions
Use the Permissions page to assign different permissions to different roles.
l Start Supervisor, and on the Contents menu, point to Management and then click Permissions.
Category list: A list of all available pages in the selected Product for which page permissions can be specified.
Page Access: Specifies which roles can access the various pages within the office software.
Job Access: Specifies which roles can run various jobs within the office software.
Tool Access: Specifies which roles can run various tools within the office software.
Calendar defaults
Use the Calendar Defaults page to specify how the office software should split time into periods which are rel-
evant to the operation of the mine.
l Start Supervisor, and on the Contents menu, point to Management and then click Calendar
Defaults.
The following table describes the information displayed on the Calendar Defaults page.
Days With 8: Click this check box to change the default shift setting to 8 hours, then select the check boxes next to the individual days to change.
Days
Weeks
Period Naming: Specifies the names and date formats of the various periods.
Output formats tab and Input formats tab: Do not edit fields on either of these tabs.
Setup pages
The Setup Pages of Supervisor provide various tools for configuring the system and for specifying various dir-
ectories.
System directories
Use the System Directories page to set up base directories and directories for configuration and data files.
The following table describes the information displayed on the System Directories page.
Base Directories – Local Base Directory: The local office software base directory.
Base Directories – Central Base Directory: The network drive path to the base directory on a shared server.
Advanced Directories (continued) – MineStar Help – Recommended Shared: The directory to which the Help web site is deployed.
Advanced Directories (continued) – MineStar Published Reports: The directory to which the Published Reports web site is deployed.
System options
Use the System Options pages to specify the various configuration options for the office software.
The following tables describe the various system options that are available. The options are described by
Product in the order that they appear in the Option Set list. The Products and Option Sets are shown in the
left-side panel on the screen.
If you select All from the Product list, all of the options are listed in alphabetical order, not in product order.
Assignment
Blending
Blending Target: The target may go out of range by the percentage specified before... Default = 10%.
Specifies the colors for the targets that are within the range. Default = Green.
Warning Colour: Specifies the colors for the targets that are outside the range specified, but the control quantities have not yet been reached, >10% and <=20%. Default = Amber.
Error Colour: Specifies the colors for the targets that are outside the range and over the control quantity, or the range is not achievable over the control quantity, >20%. Default = Red.
NOTE: You can change the colors to suit your site's warning system standards.
Decision Support
Production Requirements
Production Requirements – Reset requirements on shift change: Select this check box if you want the production requirements to be reset at every shift change.
Shift Change
Enable Shift Change Groups: Select this check box if you want to enable shift change groups.
Schedule
Automatic Tie Down Assignments: Select this check box if you want to enable automatic assignments for tie down.
Assignment Delay Type: The default delay type for the lineup.
Arrival Delay Type The name of the default delay type for arrival.
Default = –5min.
Default = –5min.
Logout Delay Type The default delay type for the logout.
Automatic Logout and Login – Automatic Login: Select this check box to enable automatic login of machines that have no onboard hardware.
Default = 5.
Login At
Avoid overlapping the logout and login times, as this results in the incoming operators being logged on to the machines before the outgoing operators have logged out.
Assignments Cleanup Time: The time that the end of shift actions can begin, relative to cleaning up assignments. This value is in minutes from the start of the shift. Default = 60 minutes.
Default = 30.
Other
Store Actual Login: The amount of time to store who is actually logged in for each machine. This value is in minutes. Default = 60.
Default = 30.
General
Indicates the grouping of traveling trucks. Valid options...
Annotations – Loaded Trucks Panel: Specifies the annotations to display in each panel.
Trucking Indicator
Trucking Information – Refresh Period: Default = 1s.
It is recommended that if you have a large mine site, you change the refresh rate to between 30 and 60 seconds.
Trucking Information – Text color for loading tools with no trucks: Specifies the color in which to display text for loading tools that have not had any trucks allocated to them by...
Health
Channel Finder
Channel Monitor
Default = 10.
Default = 30.
Default = cleared.
Item Description
Reverse Mouse Buttons: Select this check box if the left and right mouse buttons should be reversed when using the chart. If you select this check box, the right mouse button will display a chart label, and the left mouse button will enable panning.
Event Report
Health Reporting
Telemetry
System Open Channel Max: The upper limit of channels open across all machines for the system.
Polling
Disable: Select to disable System Polling.
Item Description
VIMS Import
Default = 2 hours.
file import Selecting this check box also selects the same
option in the office software when running the
VIMS Data Import job.
VIMS File Upload
Tab Description
Item Description
Set Clock: Specifies that the VIMS clock be set from the office software server at each download.
Machine Tracking
Audit
General CSV Audit Configuration: Enter the:
l rollover strategy for the CSV file.
l location of the CSV audit file.
Cycle audit log: Select whether or not to capture the audit log of user cycle edits and splits.
Services
Machine audit log: Select whether or not to capture the audit log of user machine edits.
Destinations
Item Description
Default = 500m.
This option is available in both Supervisor and the office software. Changes made in one application are reflected in the other, although changes made in Supervisor do not take effect until the next time you start the office software application. Changes made in the office software take effect immediately.
Updates are only written if the new values are within the
Nominal Time and Current Time thresholds (see below).
Default = 10 minutes.
Default = 0.5.
Default = 1.5.
Default = 1.5.
Default = 10 seconds.
Default = 5 minutes.
Default = 12 hours.
Estimated Fuel Consumption: Select this check box if you want to use the estimated fuel consumption to update the machine fuel levels. Default = selected.
Burn Rate Updates: Select this check box if you want to use the fuel burn rate to update the machine fuel burn rate. Default = selected.
Default = selected.
Thresholds l/hr.
Absolute Burn Rate Thresholds: The absolute maximum allowable working burn rate in l/hr.
Fuel Usage Monitoring estimates. When the machine refuels, if the estimated
— Refuel Estimate Tol- amount of fuel required differs from the actual amount
Efficiency Slider
Item Description
Slider major spacing: Specifies the major tick spacing for the slider. The number represents the distance between each major tick mark. For example, if you have a slider range of 0 to 50 and the major tick spacing is set to 10, you will get major ticks next to the following values: 0, 10, 20, 30, 40, 50. Default = 25.
Slider minor spacing: Specifies the minor tick spacing for the slider. The number represents the distance between each minor tick mark. For example, if you have a slider range of 0 to 50 and the minor tick spacing is set to 10, you will get minor ticks next to the following values: 0, 10, 20, 30, 40, 50. Default = 25.
Slider "snap to ticks": Select this check box if you want the slider to snap to the tick mark closest to where you have positioned the slider.
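The tick-spacing example above maps directly onto a standard Swing slider. The sketch below is illustrative only (the class and method are hypothetical, not the product's source); it configures the documented example of a 0-50 range with spacing 10 and snap-to-ticks.

```java
import javax.swing.JSlider;

// Illustrative sketch (hypothetical class, not the product's source): a Swing
// slider matching the documented example.
public class SliderTicksDemo {
    public static JSlider buildSlider() {
        JSlider slider = new JSlider(0, 50);    // range 0 to 50
        slider.setMajorTickSpacing(10);         // major ticks at 0, 10, 20, 30, 40, 50
        slider.setMinorTickSpacing(10);         // minor ticks at the same spacing
        slider.setPaintTicks(true);             // draw the tick marks
        slider.setSnapToTicks(true);            // released knob jumps to nearest tick
        return slider;
    }
}
```

With snap-to-ticks enabled, the slider value always lands on a tick, which is the behavior the "snap to ticks" check box controls.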
Update Interval: Specifies the time interval, in seconds, to gather change events and update them in a single batch.
General – Maximum Time Between Updates: Specifies the maximum time between updates, in minutes. You should set this field to 0 if you do not want to use time-based updating.
Show Last Update Time: Select this check box to have the last update time shown in the status bar.
Truck Table – Allow Quick Truck Lock Changes: Select this check box to allow editing of the Truck Loading Tool and processor Locks.
NOTE: To monitor fuel in the office software you must open Fleet Update Assistant, click Configure Properties on the toolbar, and on the Pages tab include Fuel. Restart the office software after making any changes in Supervisor.
Warning %: If the fuel level falls below the percentage entered, the value is displayed in the FUA in the selected Warning color. Values greater than the number entered are displayed in the selected "Ok" color. Default = 40%.
Fuel Warning: Select a display color for Warning fuel percentages.
Show only VIMS 3G enabled machines: Select this check box to filter the machines on the fuel tab to only those with a VIMS 3G onboard platform. If this check box is not selected, the FUA fuel tab displays fuel information for all machines, including estimated values.
Allow user to hide 0% entries: Select this check box to allow you to hide zero percent entries. A check box displays on the FUA in the office software. Clearing this check box removes the option from the FUA.
Exception Rendering – SMU Exception: Displays the age of SMU records in hours. This will display in colored cells in the Fleet Update Assistant. Colors are set in Options > System Options > Graphical Display > Exception Rendering.
Exception Rendering – TKPH Percent Exception: Displays the percentage level of the TKPH the tire can handle before overheating. This will display in colored cells in the Fleet Update Assistant. Colors are set in Options > System Options > Graphical Display > Exception Rendering.
Final Roads
Item Description
Time between road removals: Specifies the time, in milliseconds, to wait before deleting final road segments. This value should be approximately twice the cycles cache time-out value.
EFH value for FINAL roads: Specifies the default EFH for final road segments, and is used for all final road segments. Default = 1.0.
Maximum Geometric Grade for final roads: Specifies the maximum grade, as a percentage, that is allowed for final road segments. This maximum gradient percentage applies to all final roads.
GIS Server
This menu item allows you to specify the location, URLs and other information for the GIS server.
Graphical Display
Road Display Width: Displays the width of roads. If the value is 0 the roads will display as a single line.
Exclude From Search: Enter layers to exclude from a search in Site Monitor or Site Editor to improve performance.
Use Z Values from Surface: When the surface definition is not maintained, clear this check box to interpolate the Z values from the currently loaded DXF file. When the check box is cleared, and no DXF is loaded, moving an entity will not change the Z value.
General
Edit Box Size: Select the size of the radius of the box when selecting a point to edit in graphical editors.
l Expert - 6 pixels
l Normal - 8 pixels
l Large - 10 pixels
l Huge - 12 pixels
Show Truck Overlay Icons: Select to display icons for situations such as loss of communication or stopped.
Show Speed: Select to display the speed of the machine in graphical displays such as Truck Assistant and Site Monitor. The speed will display for all machines traveling over 3kph (1.8mph).
Zoom Level Enter the zoom levels for Icon and Text display size.
Switches
Exception Rendering – Severity Colors: The color to display for Errors, Warnings, and Information notifications. Colors are used on various pages, e.g. Road Segment Assistant, Waypoint Assistant, Fleet Update Assistant. Standard colors set are Red for Error, Orange for Warning, and Yellow for Information. Click the ellipses to select different colors.
Maximum Road Gradient: Specifies the maximum gradient for roads. If a created road exceeds this gradient on any part of the road, a warning is shown when the road is created.
Slow Speed Limit: Enter the speed limit that will trigger the error, warning or information severity coloring.
Missed Waypoints: Enter the surface distance that will trigger the error, warning or information severity coloring.
Surface Distance: Enter the surface distance that will trigger the error, warning or information severity coloring.
Lane Distance (relative to truck width): Enter the lane distance that will trigger the error or warning severity coloring.
Incident Service
Item Description
Message Filter: Lists the incident report message types to capture. These message types are saved for all machines as long as the messages are in the incident event time range and contain location information.
Related Event: Specifies the duration of when to search for related incident events within the same event handler when building an incident report. The search looks for events within this duration. Default = 10 minutes.
Server Yield during Capture: Specifies the yield minimum, increment, maximum, reset and threshold values.
Communications Parameters – Max loadout response time: The time to wait for a response from the loadout unit. If there is no response from the loadout unit within this time, an error will be logged. Default = 10.
Script Settings
Manual Load Command: Specifies the command to use to manually load from the loadout unit. Default = ManualLoad.
Default = StopLoad.
Machine Assistant
Use Last Window Size: Select this check box to have the Machine Assistant windows open at the same size as when they were last opened.
Connection Programs – Settings File Name: Specifies the file name for the defined connection programs to be stored in the Client LOGS directory.
VNC: Enter the name and location of the VNC Program and associated arguments.
Cat Remote Program: Enter the name and location of the Cat Remote Program and associated arguments. The location field now allows spaces in the path text.
Cat Remote Arguments: The -h argument indicates the Device IP Address which will be used by Remote Tools to connect automatically. Note that you can also see this in the various Machine Assistant's Configure Display Properties screen.
Phindows: Enter the name and location of the Phindows Program and associated arguments.
Machine Control
Item Description
Send Arrival Time Info: Selecting this check box causes additional information to be sent with every assignment to the trucks, providing arrival times for the truck at each waypoint. It also provides information to...
Cancel Assignment Waypoint ID: Enter the ID of the waypoint to give to a truck when the truck cannot be assigned. The value you enter should match the ID of a waypoint with a name such as "Unassigned". Entering 0 disables the feature.
Send Destination Server Name: Select this check box to have the name of the assigned server sent to the truck and displayed as the destination rather than using the last waypoint name.
Max Loaders Per Message: Enter the maximum number of loaders to be sent the assignment message.
Send Destination Description: Select this check box to have the description of the destination server’s waypoint sent to the truck and displayed as the destination, rather than using the name of the assigned server or the last waypoint name. When selecting this check box you must also have the Send Destination Server Name check box selected.
Leaving Loaded Leeway: As a truck leaves a loading tool loaded and starts traveling loaded, this duration in seconds is the leeway allowed before the truck is shown as not having received an assignment in the Travel Progress Monitor.
Leaving Empty Leeway: As a truck leaves a processor and starts traveling empty, this duration in seconds is the leeway allowed before the truck is shown as not having received an assignment in the Travel Progress Monitor.
Machine Manager
Item Description
Time Before Resend: Specifies the length of time the system should wait for a reply to a message that it has sent to a machine. The time is specified in milliseconds.
Number of Retries: Specifies the number of times the system should try to resend a message if the operator has not replied to it within the time specified by the Time Before Resend field.
Machine Nodes
Maximum Number of Missed Waypoints: Specifies the number of waypoints missed. A value less than 1 means that the largest possible count will be kept. When the Missed value is more than the maximum value of the missed count, alarms are raised.
Machine Services
Cause Onboard to Beep for Every Assignment: If selected, the on-board hardware beeps every time an assignment is sent.
Tool Positions on ment message. Otherwise, only waypoint information for load-
Default = Selected.
Mine Boundaries
Foreground and Background: A hexadecimal number representing the RGB color value. Default = #FFAF00.
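The documented default is a standard hexadecimal RGB string, which standard Java can parse directly. This sketch is illustrative only (the class name is hypothetical, not the product's source):

```java
import java.awt.Color;

// Illustrative sketch: parsing a hex RGB value such as the documented
// default #FFAF00 (red 255, green 175, blue 0).
public class BoundaryColorDemo {
    public static Color parse(String hex) {
        return Color.decode(hex); // accepts the leading "#" form
    }
}
```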
Operator
Roads
Haulage Roads, Final Roads, Untravelable Roads, Partially Travelable Roads, Archived Roads: Specifies the color in which to display the various road types.
Properties
Tax Class: Enter the tax class for the road. Default = In_Pit.
Default = 5000.
Initial Type Code Specifies the initial type code for the waypoint.
Valid Type Codes Specifies the valid type codes for the waypoint.
Name Extension (Shovel, Loader, Loadout Unit, Processor & Surface Miner Waypoints tabs):
l _E = Loading Tool.
l _L = Loadout Unit.
l _P = Processor.
Not available for shovels or loaders.
Color Code: Specifies the color code to use for the waypoint.
Flags: A list of the attributes that can be enabled for the waypoint.
Other Waypoints (Destination, Truck & Auxiliary tabs):
Initial Type Code: Specifies the initial type code for the waypoint.
Valid Type Codes: Specifies the valid type codes for the waypoint.
Site Editor
General – Lane Width: Specifies the width of the lane if the lane has been created automatically.
Cross Hair Settings – Crosshair size (in pixels): Sets the diameter of the crosshair tool used in Site Editor for editing objects.
Cross Hair Settings – Reticule size (in pixels): Sets the area covered by the reticule of the crosshair tool.
Maximum Individual the number, the smaller and faster the queries will be, but there
Default = 512.
Maximum Total Result results in the time period being halved to reduce the total
Maximum Query Run reports to load. If the time taken is too long, the process will
Default = 3 minutes.
Activate Events: Select this check box to show Activated Events by default.
Deactivate Events: Select this check box to show Deactivated Events by default.
Stopped Machine Node – Activate Stopped Machine Determination: Select this check box if you want the office software to look for stopped machines, and try to put them on delay.
Only consider travelling trucks: Select this check box if you want trucks to be considered stopped when they should be traveling.
Maximum speed: Specifies the fastest allowed speed of a truck still considered stopped.
Maximum distance: Specifies the largest distance a truck can move and still be considered stopped.
Surface Management
Age raster resolution: The size of the tiles, in meters, which the age raster covers.
Material Tracking
Default = 20.
Settings – Max Loaders Per Message: ...message will be sent. The setting in this field depends on how many mining blocks are likely to be active for each loader, and therefore sent out with the messages. Default = 3.
Default = 5.
Specifies how long the system will wait for a response before
Note: The asterisk beside the fields means that this setting can be changed in a live system and the
change takes effect immediately without having to restart the system.
Item Description
Disable TSS Color File generation: Select this check box to disable the generation of a new TSS Color File when a material changes.
Custom extended text: Enter any additional details to be added to the extended text beyond recommended and required text.
Pit Link
Comms Monitor
Date Format The format is dd MM HH:mm:ss a z,
where:
• dd = date in month.
• MM = month as a number or text.
• HH = hour in 24-hour time.
• mm = minute.
• ss = second.
• a = am or pm.
• z = time zone.
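As an illustration only (the office software is not scripted in Python), the pattern tokens above correspond to familiar strftime directives. The token-to-directive mapping below is an assumption for demonstration, not the product's own parser:

```python
from datetime import datetime, timezone

# Illustrative mapping from the Java-style tokens documented above to
# Python strftime directives. Naive string substitution is adequate for
# this fixed pattern, though not for arbitrary ones.
TOKEN_MAP = {
    "dd": "%d",   # date in month
    "MM": "%m",   # month as a number
    "HH": "%H",   # hour in 24-hour time
    "mm": "%M",   # minute
    "ss": "%S",   # second
    "a":  "%p",   # am or pm
    "z":  "%Z",   # time zone
}

def render(pattern: str, when: datetime) -> str:
    """Render a timestamp using the Java-style tokens described above."""
    for token, directive in TOKEN_MAP.items():
        pattern = pattern.replace(token, directive)
    return when.strftime(pattern)

stamp = datetime(2024, 3, 5, 14, 30, 9, tzinfo=timezone.utc)
print(render("dd MM HH:mm:ss a z", stamp))
```

The am/pm and time-zone fields depend on the platform locale, which is why a site-configurable format string is used rather than a fixed layout.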
General
Include Responses In Select this check box to have responses to messages dis-
Filters played on the Field Communications Monitor when using fil-
ters.
Max Delta Specifies the maximum latency before being rendered in blue.
Speed
Satellites
Message Size Colors representing each statistic listed in the item column
Statistics
Duplicate Count Minimum = BLUE
Rendering
No Response Maximum = RED
Latency
Comms Server
Default = {MSTAR_MESSAGES}.
Logging
If you set an alternate directory, it will not be cleaned up
by the scheduled clean-up task. You will have to set up
manual cleanup procedures for the directory.
Do not bind to Select this check box if you do not want to bind these set-
Field NIC tings to the field NIC.
Default = 4.
Default = 30000.
Default = 2001.
Comms Services
Enable Waypoint Select this check box to start the Waypoint Update Server
Update when the office software is started.
Enable Mining Block Select this check box to start the Mining Block Update when
Update the office software is started.
Send Mining Block First machine to receive the message is the loading tool the
Additional Second machines to receive the message are the trucks cur-
Comms Ser- rently assigned to the loader.
vices
Third machines to receive the message are all of the trucks in
the assignment group.
Default = 2000.
Enable Automatic Select this check box to automatically download the TMAC
Download of TMAC Message Logs on receipt of a File Status message indicating
Message Logs the availability of a log file.
Internationalize Field Select this check box if you want the field names to display in
Names another language.
Maximum number of Specifies the number of messages allowed per second, per
messages per second message type, before messages are discarded. This is used
to protect MineStar from message spikes.
button
Specifies the minimum time, in nanoseconds, over which the
Default = 5.
Force Sends
Note: If machines are not Cat Fleet enabled you must
select this option to support MineStar communication
with the machines.
SMU Poll Period Enter the length of time, in minutes, that the SMU Gadget will
poll. This option is only available if Enable SMU Gadget Util-
ity is selected.
Default = 60.
Display Waypoint
Description Instead of If selected, waypoint descriptions, rather than waypoint
Onboard Set- Name on Onboard Dis- names, are displayed on the onboard hardware.
Cause TOPE to beep If selected, the on-board hardware will beep whenever it
when message received receives a message.
External Reference
Show External Select to display the External Reference field in Delay Type
Ref Editor.
If the check box is selected, you cannot save a stockpile and a
Ref delay type with the same external reference. If the check box
is cleared, you are able to save a stockpile and delay type with
the same external reference.
Display Set-
Blocks to Display The number of blocks to display for each repeating group.
tings
Unit Set Click this cell and select the unit set.
Click this cell to enter the EPSG code. If you are using a cus-
EPSG Code tom EPSG code you must run the
publishCoordinateSystem tool first.
Incident FTP
FTP Job FTP user name The user name to use when connecting to field equipment
when using FTP.
Default = aquila.
Default = cold.
Onboard download dir- The download directory onboard the machine where you can
ectory retrieve incident files from.
Default = mir_out.
Item Description
The maximum number of characters in a Loading Tool name. You can increase
the name length to up to 16 characters.
Default = 7.
Name Length
Clear this check box to remove the display of the mining block selector from
Default = Selected.
Machine Broadcast
Item Description
If selected, machine positions will be sent to the field using multicast mes-
Enable Multicast Deliv-
saging.
ery
Default = Selected.
If selected, machine positions will be sent to the field using unicast messaging
(possibly in addition to multicast messaging). Only machines with an explicit
Enable Unicast Deliv-
’Machine Broadcast URL’ defined, and that have sent a PositionReport2 mes-
ery
sage to the office are sent updates using unicast delivery.
Raise alarms for time If selected, Alarms are raised when a time synchronization issue is detected.
Item Description
Enter the length of time for which the system will not retrigger time syn-
Non-retrigger (Time
chronization.
Sync)
Default = 300s.
Put Loaders and If selected, Loaders and Shovels are put on delay when they lose radio con-
Maximum time between The maximum time a non-autonomous machine can be out of contact.
The time to allow a machine to determine a fix after startup, before marking it
Default = 300s.
Machines shut down for the specified length of time will be included in machine
Machine Off inclusion
broadcast messages.
period
Default = 60s.
Item Description
Enter the length of time that the office software will broadcast an All Machine
Position (AMP) message, on receipt of an AllMachinePositionRequest mes-
Full AMP inclusion sage. This can be used to limit the amount of network traffic when a machine
period that has been out of comms for more than 60 seconds returns to the network,
or when a machine that has been shutdown is switched back on, and requests
an AMP message.
If you are using Position Awareness, you must set the following fields.
If you are running Terrain with Fleet, the following key needs to be changed in the MCU for the Ter-
rain on-board systems to match. $ Seconds to Request AllMachinePositions
After the entered number of seconds has elapsed since the last PR2 was sent,
Period of time between
a new PR2 is sent.
PR2s
Default = 5s.
After the vehicle has moved the entered number of centimeters since the last
Default = 500cm.
Min period of time The minimum number of seconds between PR2s, (i.e. PR2s are not to be sent
between PR2s faster than this frequency).
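The three PR2 settings above combine into a simple send rule: a report goes out when the time period has elapsed or the machine has moved far enough, but never faster than the minimum period. A minimal sketch, with illustrative names and an assumed minimum period (the manual gives no default for it):

```python
# Sketch of the PR2 send rules described above. This is not the product's
# implementation; class and field names are assumptions for illustration.
class Pr2Throttle:
    def __init__(self, period_s=5.0, distance_cm=500.0, min_period_s=1.0):
        self.period_s = period_s          # "Period of time between PR2s"
        self.distance_cm = distance_cm    # movement trigger ("Default = 500cm")
        self.min_period_s = min_period_s  # "Min period of time between PR2s" (assumed)
        self.last_sent_at = None
        self.moved_cm = 0.0

    def should_send(self, now_s: float, delta_cm: float) -> bool:
        """Accumulate movement; decide whether a new PR2 is due."""
        self.moved_cm += delta_cm
        if self.last_sent_at is not None:
            elapsed = now_s - self.last_sent_at
            if elapsed < self.min_period_s:
                return False  # the rate limit always wins
            if elapsed < self.period_s and self.moved_cm < self.distance_cm:
                return False  # neither trigger has fired yet
        self.last_sent_at = now_s
        self.moved_cm = 0.0
        return True

t = Pr2Throttle()
print(t.should_send(0.0, 0.0))    # first report is always sent
print(t.should_send(0.5, 600.0))  # large move, but inside the minimum period
print(t.should_send(2.0, 0.0))    # accumulated movement now triggers a send
```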
This tab allows GPS solution types to be mapped to nominal GPS Accuracies. All new equipment should
report GPS accuracy in meters. Old equipment can report GPS accuracy as one of a number of Solution
Types. The office software must always report GPS accuracy of machines in meters. As a consequence, the
office software needs to be configured to convert old-style Solution Types to meters. The fields on this tab
allow the mapping between Solution Types and meters.
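Conceptually, the conversion described above is a lookup table keyed by Solution Type, falling back to the reported value for new equipment. The sketch below is illustrative only: the Solution Type names and accuracy values are assumptions, since each site configures its own mapping on this tab.

```python
# Illustrative Solution Type -> nominal accuracy (meters) mapping. The
# names and values here are assumed examples, not product defaults.
SOLUTION_TYPE_ACCURACY_M = {
    "RTK_FIXED": 0.02,
    "RTK_FLOAT": 0.5,
    "DGPS": 1.0,
    "AUTONOMOUS": 10.0,
}

def accuracy_in_meters(report: dict) -> float:
    """Return GPS accuracy in meters for new- or old-style reports."""
    if "accuracy_m" in report:
        # New equipment reports accuracy in meters directly.
        return report["accuracy_m"]
    # Old equipment reports a Solution Type, converted via the mapping.
    return SOLUTION_TYPE_ACCURACY_M[report["solution_type"]]

print(accuracy_in_meters({"accuracy_m": 0.1}))        # → 0.1
print(accuracy_in_meters({"solution_type": "DGPS"}))  # → 1.0
```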
Item Description
Show External Ref If selected, the External Reference field is displayed in machine editors.
Onboard Hardware
Default = 5.
Synchronize MineStar files on Select this check box to have the latest
machine startup MineStar files FTP’d to machines on startup.
Onboard Files
Allows you to override the value specified in the
standard Platforms.xml file. The default is to
Machines Using Mining Block allow mining block files to be sent to loaders
Files only. It is also possible to configure the office
software to send mining block files to both
trucks and loaders as required.
Use MD5 Checksum for File for the file. When the file is received, the check-
Default = selected.
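The MD5 option above works on the usual pattern: the sender computes a checksum for the file, and the receiver recomputes it to confirm the file arrived intact. A minimal sketch (function names are illustrative, not the product's API):

```python
import hashlib

def md5_of(data: bytes) -> str:
    """Hex MD5 digest of a file's contents."""
    return hashlib.md5(data).hexdigest()

def verify_transfer(sent: bytes, received: bytes) -> bool:
    """True if the received file matches the checksum computed for the sent file."""
    return md5_of(sent) == md5_of(received)

payload = b"mining block file contents"
print(verify_transfer(payload, payload))       # intact transfer
print(verify_transfer(payload, payload[:-1]))  # truncated transfer fails the check
```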
TOPE Con-
Refer to your TOPE Configuration Manual for information on these settings.
figuration
Specifies the number of times the office software should try to send a message if the
Machine Type
machine operator has not replied in the time specified in the Time Before Resend field.
Configuration
Default = 5 seconds.
Outgoing Messages
Item Description
Specifies how long the office software should wait for a reply from a machine it
Specifies the number of times the office software should try to send a mes-
sage if the machine operator has not replied in the time specified in the Time
Number of Retries
Before Resend field.
Default = 5 seconds.
Assignment
If selected, the office software uses the material in the dipper
Use Dipper Mater-
record when determining assignments.
ial
Default = Not selected.
Publish Unknown
If selected, the office software publishes assignment events about
Truck In Load
load reports for unknown trucks.
Reports
Enter the distance between the loading tool and the truck at which
Ignore Load Report
spurious load reports are ignored and not processed.
Distance
Default = 100m.
Specifies the time, in milliseconds, that a truck may exceed its spe-
Maximum Overdue
cified overdue duration before it is classified as "Stopped".
Duration
Default = 120000 ms (2 minutes).
Specifies the minimum position change for which a fuel bay is con-
Fuel Bay
sidered to have moved.
The time, in minutes, after changes to KPIs have occurred that the
KPI Forward
changes are sent to mobile terminals. Note that Mobile Terminal is
Refresh Interval
not for general customer use.
KPI Forward Retry The number of times to try to send data to remote terminals should
Remote KPI Count a failure occur.
Update Set-
tings The time, in milliseconds, to wait for a response from mobile ter-
KPI Forward time-
minals before trying to re-send. Note that Mobile Terminal is not for
out value
general customer use.
Item Description
Default = 0.1.
Item Description
Default = 0.3.
The points located within the specified number of cells are updated
Default = 10.
Platform
System
The name of your mine site. This can be any text string, but it
General Site name
should be relatively short and meaningful.
Services
Note: If you are a Health-only site, ensure that you
Exclude GeoServer and GeoDatabase. Move them
from Include to Exclude if necessary.
Current Application The name or IP address of the Application server. This field is
Server
server filled by default.
JMS Host Name Enter the TCP/IP host name used by the Front-Side bus.
Log Level
Load Haul Dump Terrain Select this check box to have MineStar services propagate
Synchronise information to Terrain for mining block and bench changes.
Terrain RMI host name The TCP/IP host used by the Terrain RMI.
Terrain RMI Port Num- The TCP/IP port used by the Terrain RMI.
ber
Default = 14009.
Terrain GIS Port Num- The TCP/IP port used by the Terrain GIS.
ber
Default = 7070.
Select from the list which server the office software will use
as a Database server. In the Server column in the table, enter
the Production, Standby, and Test server names beside their
corresponding roles.
Default = Production.
Instances
• Secondary
Instance Templates
• Operational
Model
• Operational His-
torical
• Operational Sum- Specifies the usernames and passwords for each of the data-
• Reporting
• Repository
• Schema Tem-
plate
• GIS Data
Setting the read only username and password gives the user
Read Only User Name
read only access to tables and views in the Historical data-
Read Only Password
base.
l Oracle Instance scripts, etc., require authentication. These features read this
Reporting Locale:
The language and country to use when generating descrip-
Language tions for reporting. You can set this independently of the
server locale.
Country
The name of the BIAR export file and the locations where you
BIAR Export File want it to be saved. It should be saved to a location off the
server.
BO Password The password for logging on to the Central Management Console
(Start > Program Files > Business Objects [version] > Central
Management Console) on the Reporting Server.
The name of the CMS. Typically the host name of the Busi-
CMS
nessObjects server. Verify by logging on to the CMS.
The queries that the exportBIAR engine will carry out. Typ-
ically would be reports, categories, users, universe(s) and
any dashboards exported to the repository. Queries are sep-
arated by semicolons, e.g.
MineStar can send If selected, allows the office software to send email internally
Email email internally at the at the site. Such emails are limited to the same domain as the
site default sender.
Address of Default The default sender address for emails sent from the office
Sender software.
The default recipient address for emails sent from the office
Address of Default
software. You are able to send emails to more than one
Recipient for e-mails
address by separating the addresses with commas.
The user name for Default Email Sender. Not used for non-
Email-User
authenticated servers.
Select the most appropriate Heap space size for the mine
site.
• Model.
• Historical.
• Summary.
Enterprise Extensions
Integrate Data
Enterprise
Exports with Tape
Backup NOTE: This functionality is limited. Refer to "Con-
Backups
figuring tape backups" on page 195 for more inform-
ation.
Auditing Topics
Specifies the topics for which auditing is enabled.
Environment
The configuration options in this option set relate to low-level office software functions, and should only be
changed by Fleet Consultants, or on advice from Fleet Customer Support. You need to be in Expert Mode to
view this option.
Administration
Select this check box to use bind variables for date vari-
ables when building SQL statements. If not selected, lit-
Use bind variables for dates
eral values will be used for dates. This can impact
performance with different versions of Oracle.
Maximum Events in the The maximum number of events in the incoming queue.
Maximum Events in the The maximum number of events in the queue for fil-
Maximum Events in the The maximum number of events in the queue for deliv-
Events
The configuration options in this option set relate to low-level office software functions, and should only be
changed by Fleet Consultants, or on advice from Fleet Customer Support. You need to be in Expert Mode to
view this option.
Default = 15.
Default = 30.
Default = 3.
The configuration options in this section should only be modified by experienced Fleet personnel.
Alarm Subject Enter the office software component that has the fault.
Fault Reset Message Enter a message describing which fault has been reset.
Jobs
Logging
The configuration options in this section should only be modified by experienced Fleet personnel.
Packages
Specifies the format for packages to trace.
to Trace
Report Cache
Enter the SMTP host in the first line, the sender of the
Recipients email in the second line, and all other recipients the mail
is being sent to in remaining lines.
Email
Enter the subject.
Normal Pro- Subject
cessing; Default = Report server name and the report title.
Enter the SMTP host in the first line, the sender of the
Recipients email in the second line, and all other recipients the mail
is being sent to in remaining lines.
Workgroup Extensions
The time, in minutes, past the hour that the office soft-
Scheduled Minutes To
ware runs sendAllToSupport. Depending on the con-
Run Each Hour
figuration, this can be via FTP or e-mail.
Site can send to If selected, the office software uses FTP to transfer
MineStar’s FTP server files to Fleet Customer Support.
Support The name of the server that will be used to transfer files
Uploads FTP Enabled Computer between the customer site and Fleet Customer Sup-
port.
FTP User The user name to use when logging in to the FTP site.
Default = 10.
Scheduled Time To Run The time that a system snapshot should be taken each
Each Day day.
The time, in minutes, past the hour that the office soft-
Scheduled Minutes to
ware takes an Operating System Snapshot.
Run Each Hour
Default = 5.
Days to retain message The number of days message files will be retained.
files Default = 7.
Short term = 7.
Maximum Number of once. A large number is faster, but it may exceed the
Default = 50000.
RMAN Incremental DB Enter the time you wish the incremental backup to
Backup Schedule Time occur.
RMAN Incremental DB Select how often you wish the incremental backup to
RMAN Archivable DB
Enter the time you wish the archive backup to occur.
Backup Schedule Time
RMAN Archivable DB
Select how often you wish the archive backup to occur.
Backup Schedule Fre-
Default = 2 hours.
quency
figuration
Purge Recycle Bin TIME Enter the time of day to purge the recycle bin.
Platform - Clients
NOTE: If you change the "Look and Feel" setting on any of the Platform options you may experience
slight differences in the GUI compared with the Custom or Platform setting.
Explorer
Task Bar Size In Pixels and dialog boxes according to the location and size
of the Windows Task Bar.
Default = 26.
Appearance
If selected, the Fast Open/Close buttons are dis-
Display fast open/close
played on the splitter for the Navigator in the office
buttons on the navigator
software user interface.
Default = 30.
Behavior
Specifies whether opening a page should always
Open Page Policy create a new instance, or open an existing
instance if possible.
Explorer - Client
Play Sound for If selected, the office software attempts to play a sound
General
Urgent Alarm when urgent alarms are raised.
The look and feel to apply to the office software user inter-
Look And Feel
face.
Specifies the tab location and format when the Page Bar
Page Bar
is displayed.
Appearance con-
Specifies various advanced options for the office software
tinued
user interface.
Home Page
Desktop Man- NOTE: If you want to use a Welcome Page Con-
Show Save If selected, a dialog box displays when the user logs out of
Desktop Dialog On the office software, asking to save the current desktop
Exit configuration.
Welcome Page
Configuration
NOTE: This requires that the Home Page field be
set to Welcome on the Desktop Management tab
(see above).
Welcome Image
NOTE: This requires that the Home Page field be
set to Welcome on the Desktop Management tab
(see above).
Exit On Login If selected, the office software exits when the Login dialog
Cancel box is canceled.
The user names that will display in the User list in the
Users Login dialog box. These users must also be defined in the
Login Dialog office software before they can log in.
Password
You should not set a default user and pass-
word in a production system.
Select the check box to suppress the warning message displayed if the loc-
Notifications
ation for the health event is not available.
The configuration options for Supervisor are a subset of those available for the office software. Refer to
"Explorer - Client" on page 433 for details.
Use the following configuration options to specify how the office software should display information in pages
that contain tables. You can elect to use the operating system defaults or your own custom colors.
Use custom foreground If selected, the office software uses the specified cus-
color instead of operating tom color in tables instead of the operating system
system default default color.
Colors - Fore-
ground
Custom Foreground
Specifies the custom colors to use instead of the oper-
Inactive Cell
ating system default color.
Warning Cell
Use custom background If selected, the office software uses the specified color in
color instead of operating active table cells instead of the operating system default
system default color.
Colors - Back-
Specifies the color to use in active table cells instead of
ground Custom Background
the operating system default color.
Formatting Styles
Use the following configuration options to specify how the office software should handle date and time inform-
ation. You can specify different formats for day, date and time display, and also for different durations.
If you are using a Spanish system you will note that the Days (d), Hours (hr), Minutes (m) and
Seconds (s) notations are not translated. You can use this Formatting Styles section to
change the notations to the Spanish equivalents.
Time Format The format that the office software uses to calculate times.
The format that the office software uses to display dates in the
The format that the office software uses to display days in the
The format that the office software uses to display times in the
Time Display
user interface.
Format
Default = HH:mm:ss.
• 0 – years.
• 1 – weeks.
• 2 – days.
• 3 – hours.
• 4 – minutes.
Default
• 5 – seconds.
• 6 – milliseconds.
Display: 2 hr 30 m 10 s.
Short
These provide variations on the default duration display for dif-
Long ferent purposes. The configuration file for each page specifies
which duration format to use.
Verbose
Blank When Zero The duration display format to use when the duration minute
Minutes value equals zero (0).
Blank When Zero The duration display format to use when the duration second
Seconds value equals zero (0).
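The duration display described above (field codes selecting units, the "2 hr 30 m 10 s" notation, and the blank-when-zero variants) can be sketched as follows. This is an illustration of the behavior, not the product's formatter, and for brevity it covers days through seconds only:

```python
# Illustrative duration formatter using the untranslated unit notation the
# manual mentions (hr, m, s). Years, weeks, and milliseconds are omitted
# for brevity; thresholds and labels are assumptions for demonstration.
FIELDS = [  # (seconds per unit, label)
    (86400, "d"),
    (3600, "hr"),
    (60, "m"),
    (1, "s"),
]

def format_duration(total_s: int, blank_when_zero: bool = False) -> str:
    parts = []
    for unit, label in FIELDS:
        value, total_s = divmod(total_s, unit)
        # Leading zero units are always dropped; trailing/inner zero units
        # are dropped only in the blank-when-zero variants.
        if value or (parts and not blank_when_zero):
            parts.append(f"{value} {label}")
    return " ".join(parts) or "0 s"

print(format_duration(2 * 3600 + 30 * 60 + 10))         # → 2 hr 30 m 10 s
print(format_duration(2 * 3600, blank_when_zero=True))  # → 2 hr
```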
Production
The following table describes the configuration options for the Cycle Assistant. The available fields for each of
the various cycle types are the same.
Cycle Configuration
Use the Cycle Configuration option set to specify how the office software should handle the events and
behaviors associated with creating cycles. This includes the events that signify spotting and loading, and any
Use RecoveryFile.dat
If selected, the office software uses RecoveryFile.dat
files to back up the
files to store and recover the cycle cache.
cycle cache
Default = 0.
Historical Start Time start time for the first open cycle.
0 = Disabled.
Default = 10000.
Stop-before-Loading software should stop looking back for the last Stop-
Default = 60000.
Stop-after-Loading software should stop looking forward for the first Stop-
Default = 10000.
0 = Disabled.
0 = Disabled.
Selecting this check box enables the Wait for Load fea-
ture.
This field specifies the event that you should look for
Stop before Dumping
which indicates you should stop looking for the last
guard
stop before dump.
Truck Spotting Dump activities stops spotting. This is determined by the last stop
before dumping begins. It ends when dumping begins.
Default = 50%.
Ignore 2Dfix way- If selected, position reports with a 2Dfix where an asso-
points ciated road segment is not feasible will be ignored.
Cycle UI Configuration
Use the Cycle Query Parameters option set to specify the default bounds that the office software uses when
performing a cycle query. If a user attempts to perform a cycle query that exceeds these bounds, the office
software displays an information dialog box asking for confirmation.
Item Description
The maximum number of queries that the Cycles Manager can safely process. If a
query results in more than this number of rows being returned, the office software
Max Query Limit
breaks the query into "chunks" to reduce the load on Mine Tracking.
Default = 250.
A row count threshold beyond which the office software produces a warning to
users that the query may take a long time to process.
Max Query Count This parameter is tested second when processing cycle queries in the Cycle Assist-
ant.
Default = 500.
A time threshold, in hours, beyond which the office software produces a warning to
users that the query may take a long time to process.
Max Date Range
This parameter is tested first when processing cycle queries in the Cycle Assistant.
The amount of time, in minutes, to extend the query by when searching for sur-
Default = 60.
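The checks described above can be sketched as follows: the date range is tested first, the expected row count second, and queries beyond the Max Query Limit are broken into chunks. The 250 and 500 limits come from the table above; the 60-hour date-range value and all names are assumptions for illustration:

```python
# Illustrative cycle-query bounds check; not the product's implementation.
MAX_DATE_RANGE_H = 60   # warning threshold in hours (assumed value), tested first
MAX_QUERY_COUNT = 500   # row-count warning threshold, tested second
MAX_QUERY_LIMIT = 250   # rows per chunk when breaking up a large query

def check_query(range_hours: float, expected_rows: int):
    """Return (warnings, number_of_chunks) for a proposed cycle query."""
    warnings = []
    if range_hours > MAX_DATE_RANGE_H:
        warnings.append("date range may take a long time to process")
    if expected_rows > MAX_QUERY_COUNT:
        warnings.append("row count may take a long time to process")
    chunks = -(-expected_rows // MAX_QUERY_LIMIT)  # ceiling division
    return warnings, chunks

warnings, chunks = check_query(range_hours=72, expected_rows=900)
print(warnings)  # both thresholds exceeded; date range is reported first
print(chunks)    # → 4
```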
Cycle Assistant
Item Description
This colors the background of the Sink Destination column when it does not match
Show Sink Mis- the assigned Sink. The color is taken from Machine Tracking > Graphical Display,
Cycle Editors
This enables you to choose whether or not the office software remembers the selec-
Remember Grade
ted grade block the next time Cycle Editor is opened and cycles are being edited
Block
(check box selected) or not remembered (check box cleared).
Item Description
Selecting this check box will make the message appear to have been gen-
Make New
erated recently.
If the timestamps are not adjusted, enter 0 to throttle the input rate by wait-
Default = 0.
Select this check box to ensure that timestamps are offset from the first
Acu Time
message by the same interval as in the real data.
Commsserver Enter the address of the comms server to which messages will be sent.
Item Description
Input File Enter the directory or the file from which messages are sourced.
Select this check box if the cycle tester should stop after all files have
Single Pass been played once. If the check box is not selected the cycle tester will
cycle through them indefinitely.
Ignore Outbound Select this check box to play only inbound messages.
Delay Configuration
Use the Delay Configuration option set to specify the delay categories that the office software should use
as time buckets for reporting. The office software aggregates delay times based on the categories listed here.
Delay categories are also used to group Delay Types.
Item Description
Safety Inspections – Name Specifies the delay type for failed safety inspection delays.
Item Description
Specifies the maximum number of minutes from the current time for a
Threshold
Delay Event’s timestamp before it is deemed to be invalid.
Automatically start and stop Select this check box to have the machine put on delay when an operator
delays from operator login logs off. The delay is stopped when an operator logs on to the machine
changes again.
Enter the name of the delay type to use when the operator logs off the
Delay Type
machine.
Selecting this check box allows you to automatically start a delay when a
Automatically start delay
processor becomes unavailable. When you select this check box, a script
when a processor becomes
of the same name as that entered in the Delay Type is created when the
unavailable
Processor is set to Unavailable.
Item Description
Override Duration Dis- If selected, the office software overrides the duration display unit for the dis-
play Unit play of SMU values.
Specifies the units in which to edit SMU values in the Fuel & SMU Assist-
Local Display Unit
ant.
Fuel Properties
Item Description
The name of the fuel to use for a machine when it is not specified explicitly
Default = Diesel.
Default fuel type descrip- The description of the fuel to use when creating the default fuel type.
Included Configuration
Mine Type Whether the site is operating on a Coal or Other fuel burn rate, Other being
metallurgical. This provides the basis for the selection of the appropriate fuel
burn rate when importing Included Configuration machine class files.
Default = Other.
KPI Limits
Note: In the office software, where there is an asterisk beside the field it indicates that changing the
field does not require a restart of the office software.
Site and Fleet Pro- Show productivity Select this check box to force the productivity charts
ductivity Plan charts using averages for Site and Fleet to smooth their raw values out so
for fixed periods that the values on the charts look less like noise or an
audio signal.
Productivity chart The value in this field determines how smooth the
fixed period charts will look. The maximum value is 15 minutes,
the minimum is one minute. The shift is divided into
these time periods, then any productivity value that
lands in a particular period is summed together and
averaged. The average is displayed on the chart. The
smaller the time period, the closer the chart will look
like original raw data.
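The smoothing behavior described above can be sketched as simple bucket averaging: the shift is divided into equal periods, and every productivity sample that falls in a period is averaged into one chart point. All names here are illustrative:

```python
# Illustrative fixed-period smoothing, as described above; not the
# product's charting code.
def smooth(samples, period_min: float) -> dict:
    """samples: (minute_into_shift, value) pairs -> {period_index: average}."""
    buckets = {}
    for minute, value in samples:
        buckets.setdefault(int(minute // period_min), []).append(value)
    return {idx: sum(vals) / len(vals) for idx, vals in sorted(buckets.items())}

samples = [(1, 100.0), (4, 200.0), (12, 300.0)]
print(smooth(samples, period_min=5))   # → {0: 150.0, 2: 300.0}
print(smooth(samples, period_min=15))  # one wide period: a much smoother chart
```

A smaller period preserves more of the raw shape; a larger one averages more samples per point, which is exactly the trade-off the field controls.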
Production Target Enter the production target for the fleet and site.
System Limits Site Productivity Limits Specifies the limits for the system.
Fleet Productivity Lim- Attention limits are displayed as Yellow in the office
its software indicating that the KPI is approaching the
configured warning levels for operation and should be
Material Moved Limits
addressed before warning levels are exceeded.
Processor Limits Maximum Queue Time Specifies the limits for the Machines.
Limits
Attention and Warning limits are as described pre-
Maximum Hang Time viously.
Limits
Dig rate limits are calculated by taking the loader's
Percent of Production maximum limit value and dividing it by the number of
Target Mined Limits operational hours in the shift.
Minimum Utilization
Limits
Minimum Availability
Limits
Minimum Efficiency
Limits
Percent of Maximum
Rate Limits
Loader Limits Truck Used Limits Specifies the limits for the Machines. Attention and
Warning limits are as described previously.
Trucks Needed Limits
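As a worked example of the dig rate calculation described above (the loader's maximum limit divided by the operational hours in the shift), with illustrative figures:

```python
# Illustrative dig rate limit calculation; the figures are assumed examples.
def dig_rate_limit(loader_max_limit: float, operational_hours: float) -> float:
    """Dig rate limit = loader maximum limit / operational hours in the shift."""
    return loader_max_limit / operational_hours

# A loader limited to 12,000 tonnes over a 10-hour operational shift:
print(dig_rate_limit(12_000, 10))  # → 1200.0 tonnes per hour
```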
KpiSummaries Configuration
Use the Mining Block Import option set to specify how the office software should create mining block
information.
You can also create and edit the Directory Prefix and
Mining Block Engine fields in the office software
Platform > Job Runner > Mining Block Importer.
Polygon File Suffix The suffix used to distinguish polygon attribute files.
Points File Suffix The suffix used to distinguish polygon points files.
Import Job Specification The URL specifying the import job instance to run.
Default Values
Stockpile Folder The header labels used to identify the various stock-
Label pile folders.
Assignment Destination The header label used to identify the assignment des-
Label tination.
Output Region in Pts Select this check box to include the top level block
Header folder name in the export file header.
Output Controls
Specifies any extra text to include in the points file
Extra Output Text
header.
Oracle Export
The instance name where the mining block data is
Options Instance Name
stored.
Connection
User Name The user name to log into the instance.
Options
Options
The number of folders to use in the resulting mining
Nr Hierarchy Folders
block hierarchy.
Display name definitions Mining Block Dis- The format of how a mining block name is
play Name described. Use group numbers from 0-9 and the
mining block name to describe the mining block dis-
play name, for example {0}:{1}:{2}_{name} PIT_
A:CUT_A:BENCH_A_001.
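The display-name template above substitutes folder groups for {0}–{9} and the block name for {name}. The substitution mechanics below are an assumption for illustration, reproducing the manual's own example:

```python
# Illustrative expansion of the mining block display name template; not
# the product's implementation.
def display_name(template: str, groups, name: str) -> str:
    """Substitute {0}-{9} with hierarchy group names and {name} with the block name."""
    result = template.replace("{name}", name)
    for i, group in enumerate(groups):
        result = result.replace("{" + str(i) + "}", group)
    return result

print(display_name("{0}:{1}:{2}_{name}",
                   ["PIT_A", "CUT_A", "BENCH_A"], "001"))
# → PIT_A:CUT_A:BENCH_A_001
```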
Overmined
Overmined alarms occur when either the mined mass or volume of a mining
block exceeds the original mass or volume by the specified percentage.
Undermined
Undermined alarms occur when either the mined mass or volume of a min-
ing block is less than the specified percentage of the original mass or
volume.
0 = disable.
0 = disable.
0 = disable.
0 = disable.
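The overmined and undermined conditions above can be sketched as the following checks, with 0 disabling each alarm; the function names and figures are illustrative:

```python
# Illustrative overmined/undermined alarm checks; not the product's code.
def is_overmined(mined: float, original: float, threshold_pct: float) -> bool:
    """Alarm when mined mass or volume exceeds the original by threshold_pct."""
    if threshold_pct == 0:  # 0 = disable
        return False
    return mined > original * (1 + threshold_pct / 100)

def is_undermined(mined: float, original: float, threshold_pct: float) -> bool:
    """Alarm when mined mass or volume is below threshold_pct of the original."""
    if threshold_pct == 0:  # 0 = disable
        return False
    return mined < original * (threshold_pct / 100)

print(is_overmined(mined=1150, original=1000, threshold_pct=10))  # exceeds by 15%
print(is_undermined(mined=800, original=1000, threshold_pct=90))  # only 80% mined
print(is_overmined(mined=1150, original=1000, threshold_pct=0))   # check disabled
```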
Terrain Interface Synchronise min- When this check box is selected, the material man-
ing blocks to Ter- agement service sends the details of new and
rain via RMI changed mining blocks to Terrain via the RMI inter-
Note: The asterisk face.
beside the fields
means that this set-
ting can be changed
Error on Terrain When this check box is selected, if an error occurs
in a live system and
PTS/CAT Update when notifying Terrain with mining block changes
the change takes
for PTS/CAT files, an error is logged. Otherwise,
effect immediately
if the check box is cleared, a trace message is
without having to
logged.
restart the system.
ROM Loaders
Item Description
Pre-processing Configuration
Disable error
detection, pre- Select this check box to disable all production error detection
vention and cor- features.
rection
Log production Select this check box to log all production related events to
related events {MSTARLOGS}/production in a comma-separated format.
This tab has all the configuration settings for production error
detection. You should refer to the screen in Supervisor for a
description of the following settings.
This tab has all the configuration settings for production error
prevention. You should refer to the screen in Supervisor for a
description of the following settings.
This tab has all the configuration settings for production error
correction. You should refer to the screen in Supervisor for a
description of the following settings.
SMU Interpolator
General SMU Enabled not be updated and the SMU interpolator will return 0.
Max SMU Boxes Per Machine Used (LRU) cache for each machine.
Default = 6.
Default = 4.
Default = 48.
Default = 12 hours.
Default = 12 hours.
TKPH Parameters
TKPH Configuration Use K1 Factor Select this check box to apply the K1 factor
to the TKPH calculation.
Sample Period Only the cycles that end within this period
contribute to the TKPH calculation.
Dynamic TKPH Con- Preferred TKPH Values Select the TKPH algorithm to use for determ-
figuration ining TKPH warning levels (not for Assign-
ment).
Ambient Temperature file Enter the location of the CSV file used by the
Ambient Temperature Service.
Temperature file date Enter the format of the dates and times
format within the Ambient Temperature Service file.
Assignment Options
Use the Assignment Options pages to specify the various configuration options for the office software.
iAssignment runs in a single or separate virtual machine (VM). Shortcuts for starting VMs are generated using
the mstarrun targets makeShortcuts or applySystemOptions. See the Supervisor page reference chapter for
information on these targets.
You should only use this page when the IAssignmentServer is not running, for example, during the initial setup of the system.
NOTE: This page can also be accessed via the office software when in Expert Mode.
l Start Supervisor, and on the Contents menu point to Setup and click Assignment Options.
The following sections describe the various system assignment options that are available. The options are
described by function in the order that they appear in the Assignment Supervisor tab set.
l Invalid Assignments
l Scheduled assignments arriving outside boundaries
l Missed ASAP scheduled assignment
l Exceeded number of allowed queued trucks
l Deviation from the scheduled assignment required time
l Time waiting at the server
l Preferred server
l Blend violations
l TKPH violations
Scheduled automatic assignments were improved to ensure trucks arrive during the specified window and as
close to the required time as possible without impacting production. At the time of an assignment, scheduled
assignments are considered up to four assignments into the future. The following behavior will be observed.
l ASAP will cause the truck to be assigned for the next valid load state to a destination which will accept
the same load state. For example, a truck with an ASAP assignment when next empty will be assigned
to a location accepting empty trucks when the next assignment is requested.
l Trucks will not be assigned to a destination which does not accept the load state (note loading tools
and processors are considered empty or full).
l Trucks will arrive at a destination as close to the required time as possible.
l A truck will travel to the destination which will allow it to arrive as close as possible to the required time.
l A truck will arrive before the arrive before time if possible.
l A truck will always arrive after the arrive after time.
l A truck will arrive between the arrive after and arrive before time if possible.
l A truck will not be assigned to a fuel bay on delay, unless it has an ASAP scheduled assignment.
l The use of scheduled assignment penalties for shift change scheduled assignments allows trucks the
opportunity to arrive at multiple different destinations at different times.
l If the truck cannot arrive between the arrive after and arrive before time it will arrive as close to the
required time as possible.
A truck will arrive between the “Arrive Before” and “Arrive After” time of a scheduled automatic assignment if
feasible. The “Arrive Before” and “Arrive After” times are optional and the preference is to assign the truck to
arrive close to the required time. The required time precision was added to the Assignment Supervisor to allow
a truck to arrive within 5 min, 10 min, 20 min intervals at the same cost. The precision is set in Assignment
Supervisor > Policies > Scheduled Assignment > Scheduled Assignment Arrival Time Precision.
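The precision setting effectively buckets arrival deviations so that arrivals within the same interval carry the same cost. The sketch below is a hypothetical illustration of that bucketing; the function name and the bucket count as a cost unit are assumptions, not the planner's real cost model.

```python
import math

def arrival_cost_bucket(arrival_min, required_min, precision_min):
    """Hypothetical bucketing of scheduled-assignment arrival costs.

    The deviation from the required time is rounded up to whole multiples
    of the configured precision (5, 10 or 20 min intervals in the text),
    so arrivals within the same interval cost the same.
    """
    deviation = abs(arrival_min - required_min)
    return math.ceil(deviation / precision_min)
```

Assuming a required time of 00:15, arrivals at 00:17 and 00:27 fall into different buckets with a 5-minute precision but the same bucket with a 20-minute precision, matching the strict versus Normal precision example later in this chapter.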
Truck assignments adhere to station capacity when the Assignment Planner is enabled. The following beha-
vior will be observed.
l If the station capacity is set to 0 then there is no limit of trucks at the station.
l The queuing check box for the destination allows trucks to queue at the station for availability.
l Trucks will not be assigned to a station if there is no queuing, there is no remaining capacity and all the
trucks are on delay.
l Trucks will be assigned to a station if queuing is allowed, the arrival of the truck is after the end of the
first delay and the queuing time does not interfere with production.
l ASAP scheduled assignments will send a truck to a station regardless of capacity being exceeded or
queuing enabled.
l Trucks at the station (with queuing allowed) are considered to be there for 15 minutes by default if not
on delay and queuing is allowed at the station. This default can be changed in the Assignment Super-
visor > Policies (tab) > Stations > (panel) > Default time at station.
l Trucks at the station (with no queuing allowed) are considered to be at the station until the truck is reas-
signed
l If the queue length for a fuel bay is 0 this means no queuing is allowed
l Trucks with a manual, scheduled automatic or scheduled manual assignment are considered to be at
the station/fuel bay
l The machines-at-a-time and queue length settings for a fuel bay allow only the specified number of trucks to be
at the fuel bay at the one time.
l Trucks will only be assigned to the fuel bay if the fuel bay is available for assignment or not on delay
(this will be shown in the assignment context).
l A truck must share an assignment group with the fuel bay, or be allocated to no assignment group to be
assigned to a fuel bay (this will be shown in the assignment context).
l A truck will not be assigned to a fuel bay or station if there is no queuing allowed until one of the trucks
at the fuel bay or station is assigned to another destination
l A truck can be assigned to a fuel bay or station if queuing is allowed and the fuel bay or station is occu-
pied.
l When queuing is allowed, the end of the delay is taken as the time the truck is expected to leave the fuel bay
if it arrives during a delay; otherwise the refueling time is used.
l Station capacity is separate from the capacity at the fuel bay machine. If a truck is assigned to a fuel bay it
is considered as a machine at the fuel bay and not at the station.
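The capacity and queuing rules above can be summarized as a small decision function. This is an illustrative sketch only; the real planner also weighs delays, assignment groups and production, and all names here are assumptions.

```python
def can_assign_to_station(capacity, occupants, queuing_allowed, asap_scheduled=False):
    """Illustrative decision for automatic assignment to a station.

    Summarizes the rules above:
    - ASAP scheduled assignments ignore capacity and queuing
    - capacity == 0 means no limit of trucks at the station
    - below capacity, the truck may be assigned
    - at capacity, the truck may be assigned only if queuing is allowed
    """
    if asap_scheduled:
        return True
    if capacity == 0 or occupants < capacity:
        return True
    return queuing_allowed
```

For example, a station with capacity 2 and two occupants rejects further trucks unless queuing is allowed or the assignment is an ASAP scheduled assignment.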
The assignment contexts were updated with statements describing all incompatibilities identified when calculations
occur with the Assignment Planner, and there are additional statements describing why compatible destinations
were included. The Assignment Planner distinguishes incompatibilities occurring as a result of the following:
l The truck will arrive at the scheduled assignment destination outside the arrive after or arrive before
window
l There is no capacity at the scheduled assignment destination
l The truck needs to wait for a delay to end at the loading tool processor or fuel bay
l The maximum TKPH will be exceeded
l The production goals will not be met
l The loading tool/processor/material are not in the production plan
l The truck would better meet the deviation at another destination
l The truck has a preferred loading tool/ processor
Deviation improvements
The assignment deviation cost was improved when using the Assignment Planner. The deviation is cal-
culated as two costs, for the committed assignments and then for the future forecast assignments. The two
assignment deviation costs ensure trucks follow the production plan and arrive to loading tools and pro-
cessors at regular intervals. The costs were improved by determining the expected rate of material from the
production plan and forecasting actual rate of material on either the committed assignments or future assign-
ments. The difference from the actual and expected was scaled so that high producing loading tools can be
compared with lower producing loading tools. A more accurate comparison can be made between loading
tools and processors to meet the production requirements.
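The scaling idea above can be illustrated as follows: dividing the difference between actual and expected rates by the expected rate makes a shortfall at a high-producing loading tool comparable with one at a low-producing tool. A minimal sketch, with names that are assumptions:

```python
def scaled_deviation(expected_rate, actual_rate):
    """Sketch of the scaled deviation described above (illustrative).

    The difference between the actual and expected material rates is
    divided by the expected rate, so tools with very different planned
    rates can be compared on the same scale.
    """
    if expected_rate == 0:
        return 0.0
    return (actual_rate - expected_rate) / expected_rate
```

Under this scaling, 900 t/h actual against a 1000 t/h plan scores the same relative shortfall as 90 t/h against a 100 t/h plan.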
Truck assignments adhere to the production goals more closely using the Assignment Planner without surging
or starvation. Adherence to the production goal is achieved by considering the rate of material to be trans-
ported along the production arcs (loading tool, processor, and material combination). The rate is calculated
using historical information up to the production recording period in the Assignment supervisor and trucks en
route.
In Assignment supervisor you are able to regulate “at least” and “no more than” production goals. These
options ensure that the rate of material along the associated production arcs (loading tool, processor, and
material combination) does not drop below or go above the specified threshold rate. The rates can be set by
changing “at least” in Assignment Supervisor > Policies > Production Requirements > Threshold for
maximum production goals, and changing “no more than” in Assignment Supervisor > Policies > Pro-
duction Requirements > Threshold for minimum production goals.
Changing a production goal does not clear the history of material dumped along the associated arcs. This
means that if a production goal dashboard in the Fleet Update Assistant is showing “500 t/h” and the pro-
duction goal is changed from “at least 1000 t/h” to “at least 1100 t/h” then the production goal dashboard will
still show “500 t/h”.
Blend improvements
Truck assignments will ensure blends are kept within better specification using the Assignment Planner. The
ratio of material in the blend is calculated using the dumps during the production recording period and trucks en
route to the blend processor. Changes were made to blending to ensure:
l only the materials in the blend can be dumped at the blend processor.
l discrete grades can only be dumped at the blend processor if specified in the blend when a discrete
blend is specified.
An Assignment supervisor option allows unspecified discrete grades to be dumped at the blend processor
when a discrete blend is specified: Assignment Supervisor > Policies > Blending > Allow free tonnes
assigned for discrete blending.
Changing a blend as a target at a processor does not clear the history of material dumped along the asso-
ciated arcs. This means that a blend at a processor with 30% HG and 70% LG will still show these numbers if
the production goal is changed for the blend. If the blend is changed, the percentages will be adjusted
depending on the blend materials or discrete grades.
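The blend ratio described above amounts to totalling tonnes per material over the recording period plus trucks en route. A minimal sketch, with names that are illustrative assumptions:

```python
from collections import Counter

def blend_ratio(dumps):
    """Illustrative blend-ratio calculation.

    Takes (material, tonnes) pairs for dumps during the production
    recording period plus trucks en route, and returns each material's
    share of the blend.
    """
    totals = Counter()
    for material, tonnes in dumps:
        totals[material] += tonnes
    grand_total = sum(totals.values())
    return {m: t / grand_total for m, t in totals.items()} if grand_total else {}
```

For example, dumps of 30 t HG and 70 t LG yield a 30% HG and 70% LG split, like the example in this section.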
Delay improvements
The Assignment Planner better manages assignments when machines (including loading tools, fuel bays and
processors) are on delay. The following behavior will be observed when trucks cannot be assigned to loading
tools or processors on delay (in Assignment Supervisor):
The following behavior will be observed when trucks can be assigned to loading tools or processors on delay
(in Assignment Supervisor).
l There is a restricted period prior to the start of the delay and before the allowed time where it is pre-
ferred a truck will not arrive at a machine.
l Trucks will be assigned to a machine to minimize time waiting if all the machines are on delay (restricted period) when the truck arrives.
TKPH Improvements
Setting the Dynamic TKPH in Supervisor > Options > System Options > Production > TKPH Parameters
allows the values to be used by Assignment. The Dynamic TKPH is provided to Assignment at the end of
each dump. The Assignment Planner will consider the TKPH occurring over the period set in Supervisor >
Contents > Setup > Assignment Options > Policies tab > TKPH period.
A future forecast considers the TKPH for the next four assignments taking into consideration the dynamic
TKPH over the last hour (default). Trucks are assigned to minimize TKPH over the next four assignments.
The assignment context will show the TKPH estimated for the next four assignments. The assignment trace
file will show the TKPH for the next four assignments given the different choices.
Two options were added to the Assignment Supervisor which enable the logging of Assignment context errors
and assignment calculation errors using the Assignment Planner. These options are enabled by default. This
functionality prevents duplicate errors appearing in Assignment when each assignment is generated.
A global limit of trucks queued for each loading tool or processor can be specified in Assignment supervisor.
For example, specifying two as the maximum number of trucks at each source will ensure that no more than
two trucks at a time will be queued at each loading tool. The limits can be changed in Assignment Supervisor.
Loading tools - Assignment supervisor > Policies > Queuing > Maximum number of trucks queued at
each source.
Processors - Assignment supervisor > Policies > Queuing > Maximum number of trucks queued at
each sink.
Queues
The calculation of rates for loading tools has been modified. This impacts:
The loading tool rates are cleared when the following data is modified:
The calculation of the loading tool rate has been modified to be the sum of compatible truck payloads divided
by the time for a truck to be loaded at the loading tool (includes spotting for single sided loading).
The time for a truck to finish at a double-sided loading tool is the maximum of the loading and spotting time.
Trucks that are incompatible with a loading tool are assumed to load in 90 seconds and take 90 seconds to spot. This is
required for future predictions with manual assignments (assigning to an incompatible loading tool).
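The rate calculation above can be sketched as follows. The max(load, spot) rule for double-sided loading comes from the text; the function name, units and parameters are illustrative assumptions.

```python
def loading_tool_rate(payloads_t, load_time_s, spot_time_s, double_sided=False):
    """Illustrative loading-tool rate in tonnes per hour (a sketch).

    Per the description above, the rate is the sum of compatible truck
    payloads divided by the time taken to serve those trucks:
    - single-sided: each truck's time is loading plus spotting
    - double-sided: each truck's time is max(loading, spotting), since one
      truck can spot while another loads
    """
    per_truck_s = max(load_time_s, spot_time_s) if double_sided else load_time_s + spot_time_s
    total_h = per_truck_s * len(payloads_t) / 3600.0
    return sum(payloads_t) / total_h if total_h else 0.0
```

With two 200 t trucks, 120 s loading and 60 s spotting, this gives roughly 4000 t/h single-sided and 6000 t/h double-sided.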
Item Description
Assign when no operator Select this check box to specify that assignments be sent to trucks, even
when there is no valid operator logged on.
Select this check box so the Ton Kilometer Per Hour (TKPH) values are
taken into consideration when making assignments. If the TKPH tolerances
are exceeded, a warning is issued.
Default = Selected.
Item Description
Enable Mining Block Lock- Select this check box to enable mining blocks to be used to restrict the des-
ing tination of loaded material, and turn mining block locking on for loading
tools. When selected, the processor specified in the loaded mining block is
used to determine the assignment for the truck to unload. This option
MUST be selected if mining block locking is being used. If mining
block locking is not used, do not select this box as it creates an overhead on
the system.
Select this check box to enable the mining block destination to be con-
sidered the only destination for the loading tool currently loading the given
mining block. Strict mining block locking uses only the processor specified
in the loaded mining block to determine the allowable assignment for the
truck to unload. If the mining block lock is not feasible, then assignments
are calculated from the material that is being loaded.
If the selected processor is available, all material produced from the mining
block should be directed to it. Otherwise, the truck will be unassignable and
the destination must be resolved by the controller.
All requests from the truck or from reassignment waypoints are ignored and
the assignment will continue until completion, however, material changes
will trigger the calculation of a new assignment.
Item Description
Enable Station Destin- Select this check box to allow station destination locking to be specified for
ation Locking scheduled assignments. After a scheduled assignment has been activated,
it is considered locked and is not recalculated unless a controller, using the
office software, requests a new assignment through the “Truck Assistant”.
All requests from the truck or from reassignment waypoints are ignored and
the assignment is continued until completion.
If station destination locking is not used, do not select this box as it creates an
overhead on the system.
Item Description
Automatically reassign Select this check box to allow the office software to automatically reassign
trucks if loading tool or trucks with an existing automatic assignment to a loading tool or processor
processor goes on delay that goes on delay.
If you select the check box, when a loading tool or processor goes on delay,
Assignment attempts to automatically reassign trucks assigned to the load-
ing tool or processor. This includes trucks that are traveling to the loading
tool or processor, and trucks that have already arrived at the loading tool or
processor but have not yet commenced loading or dumping.
Automatically reassign Select this check box to have available trucks reassigned to any loading
trucks if loading tool or tool or processor that ends its delay and becomes available.
processor ends delay
Allow trucks to dump at Select this check box to allow trucks to be assigned from a loading tool to
Item Description
Notify loading tool of Select this check box to specify that a message be sent to loading tools
assigned processor upon completion of loading a truck, i.e. upon the truck changing state to trav-
eling, indicating the destination and material of the truck. If the software on
the loading tool allows the display of truck assignments, the assignments
can be displayed if desired. The message is not present if the truck is reas-
signed or manually assigned to some other processor.
Send backup assignments Select this check box to allow the office software to automatically send
backup assignments to a truck whenever it enters a destination in a bad
communications area while going to processors, stockpiles or dumps.
These assignments are sent to the next loading tool after the dump is com-
pleted. Once the truck reenters an area of good communications the assign-
ment is confirmed.
Suppress start of service Select this check box to prevent any start of dumping or loading assign-
assignments ments.
Item Description
Turn-around policy Specifies whether trucks can receive an assignment that requires that they
turn around. It is usual for mines to discourage or disallow trucks from turn-
ing around on a haul road. When reassigning a truck, it is possible that the
closest loading tool or processor may require the truck to turn around. The
mine can configure Assignment in this case for safety or productivity reas-
ons. Valid options are:
Never - Do not reassign if the truck needs to turn around. If you select
"NEVER" allow turn-around, no reassignments are generated that require a
truck to turn around. For example, suppose your trucks go to destinations to tie
down, either manually or assigned by the system, where only one road
enters the destination. When they come off delay, the office software will be
unable to assign the truck, because the office software interprets the
truck coming back onto the road after a delay as a U-turn.
Assignment allows the truck to be assigned away from the destination even
if it has to exit via the road it came on. If a truck is at a 'dead end' road, it is
assumed that it has turned around. Assignment will assign trucks in this
situation.
Avoid - Allow the truck to turn around only if no other alternative assign-
ments exist.
Allowed - Allow the truck to turn around. The turn around time is included
when determining the best reassignment that would require the truck to turn
around.
Item Description
Turn-around Time Specifies the average time, in seconds, that should be added to the travel
time for assignments that require a truck to turn around.
Log assignment context Select this check box to save the information about the assignment
decision for analysis by the Mine Controller or Fleet Customer Support.
Default = Selected.
Restrict logging of assign- Select this check box to save the information about the assignment
ment context to failed decision for analysis by the MineController or Fleet Customer Support in the
assignments only case of a failed assignment.
Show exceptions
This option is primarily for development usage only, and
should not be used on the mine site except under instruction
from Fleet Consultants.
Select this check box to log information about exceptions when they occur.
Item Description
Maximum Assignment Specifies the maximum number of hours an Assignment trace file can exist
trace file age before being replaced with a new trace file.
Default = 24 hrs.
Maximum Assignment Specifies the maximum file size of an Assignment trace file before being
trace file size replaced with a new trace file.
Assignment trace file his- Specifies how long the trace file history remains on the system before being
tory age deleted.
Default = 7 days.
Maximum time allowed for Specifies the maximum length of time that Assignment can use to calculate
any action a new assignment or to update the system.
Default = 5 minutes.
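Taken together, the age and size limits imply a simple rotation check for the Assignment trace file. A sketch; only the 24-hour age default is stated above, so the size threshold shown is an assumed placeholder.

```python
def should_rotate(age_h, size_mb, max_age_h=24.0, max_size_mb=10.0):
    """Sketch of the trace-file rotation rule implied by the settings above.

    The trace file is replaced with a new one when it exceeds the maximum
    age (default 24 hrs, as stated) or the maximum file size (the 10 MB
    default here is an assumption; the manual does not state one).
    """
    return age_h >= max_age_h or size_mb >= max_size_mb
```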
Item Description
Default = Selected.
Default = Selected.
Default = Selected.
Default = Selected.
Item Description
Trusted time interval after The amount of time after a restart, and a completed truck assignment, that
a completed assignment information should be trusted.
Show all values for loaded Select this check box to show the predicted values in the resolve truck
trucks upon restart state helper for all loaded trucks.
Policies tab
Note: Refer to the end of this section for information on which options on this tab directly impact
Scheduled Assignments.
Delays Delayed trucks in a queue will resume queuing when taken off delay
Select this check box so that trucks placed on a delay while queuing will resume
queuing rather than getting a new assignment when taken off delay.
Queuing Loading Tools Maximum number of trucks queued at each loading tool
Default = 2.
Default = 2.
Default = 15 minutes.
Default = 60%
Default = 1 minute.
Default = 30.
Default = 1 count.
Default = 1 hour.
Haulage Map This field allows you to change the road travel time. The
Road Map cached travel times for the road haulage network will be recal-
culated when a road travel time changes by at least 20
seconds.
Default = 20 seconds.
Note: Also refer to Assignment Planner Improvements for further information on Scheduled Assign-
ments.
The Default Time at Station is how long a truck is expected to remain at a station if the truck is not on delay
and queuing is allowed at the station. The Default Time at Station applies to:
The Default Time at Station is factored in future assignment calculations. For example, Truck A is manually
assigned to Station 1 with station capacity of 1 and allows queuing. Truck A arrives and remains in the state
route done. Truck B has an automatic scheduled assignment to Station 1. Truck A is only expected to remain
at Station 1 for Default Time at Station of 15 minutes. Truck B will be automatically assigned to Station 1 to
arrive 15 minutes after the Truck A arrives at Station 1.
The Scheduled Assignment Arrival Time Precision is how close the truck is expected to arrive to the
required time at the scheduled assignment destination. The Assignment Supervisor allows the Scheduled
Assignment Arrival Time Precision to be configured.
The precision helps determine how close the scheduled assignment is to the Required Arrival Time. For
example arriving at 00:17 is considered closer to the Required Arrival time than 00:27 using strict precision.
However, the cost is the same for 00:17 and 00:27 using Normal precision. The truck will arrive before the
Arrive Before time where possible.
Using smaller increments for the Scheduled Assignment Arrival Time Precision reduces production. For
example, a truck may have a scheduled assignment to tie-down empty or loaded between 17:40 and 18:00
with a required time of 17:50. The truck may be in a situation where it can complete a 10 minute cycle. With
strict precision the truck will be automatically assigned to tie down at 17:46, even though there was an opportunity
to complete another half cycle.
Selecting the Trucks can be automatically assigned to loading tools on delay or Trucks can be auto-
matically assigned to processors on delay check boxes allows a truck to arrive at a loading tool or pro-
cessor on an assignment delay. The loading tool or processor has a restricted period where the truck can
arrive at the loading tool or processor if there are no other machine destinations available, and an allowed
period where the loading tool or processor is still considered on an assignment delay but trucks will be auto-
matically assigned as if the loading tool or processor is available.
The Restricted Period Before Delay Starts is a period before the delay start time where it is not intended a
truck will load or dump as a result of an automatic assignment to a loading tool or processor. The Allowed
Period Before Delay Ends is a period before the delay end time where trucks can be automatically assigned to
arrive at the loading tool or processor. A truck will be automatically assigned to a loading tool or processor with
the earliest delay end time if the truck has a choice to load or dump at multiple loading tools or processors with
a restricted period. A truck will be automatically assigned to optimize production if it can arrive at a loading tool
or processor during an allowed period.
Clearing the Trucks can be automatically assigned to loading tools on delay or Trucks can be auto-
matically assigned to processors on delay check boxes ensures trucks do not load or dump at a loading tool or pro-
cessor on an assignment delay. The truck will not receive an automatic assignment if all loading tools or
processors are on an assignment delay. Trucks can still be manually assigned to loading tools and processors
on an assignment delay.
What is the behavior of trucks not allowed to a delayed loading tool or processor?
The chart below shows the difference in decisions for a truck if it were to arrive at one of two loading tools
when trucks are not allowed to delayed machines. For example, if a truck requested an assignment and could
arrive at Machine 1 at 1:30 and Machine 2 at 4:30 then the truck would be automatically assigned to either
Machine 1 or Machine 2 to optimize production. The decision to go to either Machine 1 or Machine 2 will
depend on the closest loading tool and compliance to the production plan.
The chart below shows the difference in decisions for a truck if it were to arrive at one of two loading tools
when trucks are allowed to delayed machines. For example, if a truck requested an assignment and could
arrive at Machine 1 at 1:30 and Machine 2 at 4:30 then the truck would be automatically assigned to Machine 2
because of the restricted period at Machine 1.
The scenario illustrated below shows the machine destinations a truck can be automatically assigned to for
various situations with delays. The behavior of the truck occurs when a truck can be automatically assigned
to a machine destination (loading tool or processor) on delay. For this scenario to occur the following con-
figuration must be set in the Assignment supervisor:
l Trucks can be automatically assigned to loading tools on delay or Trucks can be automatically
assigned to processors on delay check box is selected
l Restricted period before delay starts is set to 300 seconds
l Restricted period before delay ends is set to 300 seconds
The behavior of a truck can be interpreted from the time line under Possible assigned machine destinations if the
truck arrived at Destination 0, Destination 1 or Destination 2 during this period. If a truck were to arrive at
any of these destinations between 00:00 - 00:05, then all machine destinations would
be unavailable for assignment. The automatic assignment would fail as there are no machine destinations available
for assignment.
00:00 - 00:05
During 00:00 and 00:05, all machine destinations are unavailable for assignment. The truck would get a failed
automatic assignment.
00:05-00:10
During 00:05 and 00:10, Dest 2 is the only machine destination available for assignment. The truck would get
an automatic assignment to Dest 2.
00:10-00:15
During 00:10 and 00:15, Dest 2 and Dest 1 are the only machine destinations available for assignment. The
truck would receive an automatic assignment to Dest 2 or Dest 1 depending on factors other than delays,
including production.
00:15-00:20
During 00:15 and 00:20, all machine destinations are available for assignment. The truck would receive an
automatic assignment to either Dest 2, Dest 1 or Dest 0 depending on factors other than delays, including pro-
duction.
00:20-00:25
During 00:20 and 00:25, there is a five minute restriction period prior to the delay for Dest 0. Trucks will not be
assigned to a machine destination with a restriction period prior to the delay if there are other machine des-
tinations available. The truck would receive an automatic assignment to Dest 2 or Dest 1 depending on
factors other than delays, including production.
00:25-00:30
During 00:25 and 00:30, there is a five minute restriction period prior to the delay for Dest 1 and Dest 2. It is
not desirable to assign the trucks to machine destinations with a restricted period or delay. However, the truck
will still be automatically assigned to the machine destination which finishes delay first, which is Dest 2 in this
scenario.
00:30-00:35
During 00:30 and 00:35, all the machine destinations are on delay. It is not desirable to assign the trucks to
machine destinations with a delay. However, the truck will still be automatically assigned to the machine des-
tination which finishes delay first, which is Dest 2 in this scenario.
00:35-00:40
During 00:35 and 00:40, there is a five minute allowed period to assign trucks to Dest 2 while Dest 1 and Dest
0 are on delay. Dest 2 is considered in the production plan for this period and a truck will be automatically
assigned to Dest 2.
00:40-00:45
During 00:40 and 00:45, there is a five minute allowed period to assign trucks to Dest 0 and Dest 1 while Dest
2 has ended the delay. All the machine destinations are considered in the production plan at this point. The
truck would receive an automatic assignment to either Dest 2, Dest 1 or Dest 0 depending on factors other
than delays, including production.
00:45-00:50
During 00:45 and 00:50, all machine destinations are available for assignment. The truck would receive an
automatic assignment to either Dest 2, Dest 1 or Dest 0 depending on factors other than delays, including pro-
duction.
00:50-00:55
During 00:50 and 00:55, a non-assignment delay is added for Dest 2. Non-assignment delays still allow trucks
to be automatically assigned to a machine destination. The truck would receive an automatic assignment
to either Dest 2, Dest 1 or Dest 0 depending on factors other than delays, including production.
00:55-01:00
During 00:55 and 01:00, an assignment delay is added for Dest 1 which is five minutes in duration. A truck can
be automatically assigned to Dest 1 during the delay, because the duration of the delay is less than or equal to the
allowed period of five minutes. The truck would receive an automatic assignment to either Dest 2, Dest 1 or
Dest 0 depending on factors other than delays, including production.
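The timeline above can be approximated with a simple availability check over the restricted and allowed windows. This sketch ignores the planner's fallback of assigning to the machine whose delay ends first, and all production factors, so it is illustrative only; the names are assumptions.

```python
def destination_available(t_s, delay_start_s, delay_end_s,
                          restricted_s=300, allowed_s=300):
    """Sketch of the restricted/allowed delay windows from the scenario above.

    All times are in seconds. A destination is treated as unavailable from
    the start of the restricted period before the delay until the allowed
    period at the end of the delay begins.
    """
    return not (delay_start_s - restricted_s <= t_s < delay_end_s - allowed_s)
```

For a delay from 00:10 to 00:20 (600 s to 1200 s) with the 300-second windows used in the scenario, the destination is unavailable from 00:05 until 00:15, then available again during the allowed period.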
Solvers tab
The Assignment Planner selections are enabled by changing a config file. The config file should only
be changed by Caterpillar staff.
1. Stop all services.
2. Navigate to mstarHome\res\minestar\assignment\configuration\Config.properties and change
SHOW_EXTENDED_ASSIGNMENT_OPTIONS = false to SHOW_EXTENDED_ASSIGNMENT_OPTIONS = true.
3. Start all services.
Default = 5%.
Default = 24 hours.
Default = 40 minutes.
These options change the production planner in the Assignment module:
The predicted percentage of fleet that will be unavailable for assignment at some point during the lifecycle of the
production plan.
Default = 5%.
These options change the production plan in the Assignment module:
Sets the minimum duration that is considered a production period.
Default = 5 minutes.
Minimum Horizon
Maximum Horizon
Default = 10%.
Prioritization Tolerance
Default = 2%.
Default = 20 minutes.
1. Check the MineTracking logs to see if a system problem is causing the assignment issues.
2. Check the CommsServer and CommsController logs to see if a communication problem exists.
If either MineTracking or communications are causing the assignment issue, resolve these issues and
recheck the assignment behavior. If the office software is operating normally, it is recommended that you
do the following.
1. Take screenshots of the wrong behavior and note the expected behavior.
2. Restart Assignment.
3. After the restart take a system snapshot.
4. Submit a DSN with the screenshots, snapshot and the expected behavior.
Assignment has been constructed to be state aware and should rapidly resume assignments after a restart.
The restart will reset the assignment model and in most instances will resolve any observed problems.
The Assignment log should have no errors. If errors are found in the assignment log then a DSN with a snap-
shot should be submitted.
NOTE: If the MineStar Services are restarted, all trucks will have an unknown load state and will
possibly require manual assignments.
Introduction
The office software relies on an integral application – mstarrun – to perform many of the daily operational
tasks required to keep the office software running. Many of these tasks occur "behind the scenes"; they are
transparent to everyday operations. You can also use mstarrun on the command line to perform many other
tasks for administration, troubleshooting, and monitoring of the office software system.
l Keep all knowledge about how to start parts of Fleet in one application
l Make the office software deployment infrastructure internationalizable
l Have the office software deployment infrastructure written in an easy to use and understand language
(Python)
l Make non-Java office software programs easier to write, thus avoiding the extensive use of batch files
l Make the office software independent of assumptions about Windows drive letters
l Enable developers to run the office software from a repository
l Make the office software deployment infrastructure cross-platform.
This chapter provides information on the available mstarrun targets, options and arguments.
Chapter goals
By the end of this chapter, you should:
l Ensure that the directory containing mstarrun.bat or mstarrun is in your PATH. This is important,
because whichever copy of mstarrun you are running determines which office software installation
you are running. The mstarrun executable checks the directory where it is installed and uses that to
determine the value of {MSTAR_HOME}. Any environment variable called MSTAR_HOME is ignored.
l Ensure that Python is in your path. This is necessary for correct execution of mstarrun.
l If mstarrun is asked to run a Java program, it looks for an environment variable called JAVA_HOME.
This is a standard setting used by Java environments. It is often set to C:\jdk1.5.0 on Windows.
NOTE: Any setting of JAVA_HOME in the environment will override any setting in a properties file.
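The lookup order described here, with the environment variable taking precedence over any properties-file setting, can be sketched as follows (the function name and dict-based properties are illustrative, not part of mstarrun):

```python
import os

def resolve_java_home(properties, env=os.environ):
    """Environment JAVA_HOME always overrides a properties-file value."""
    return env.get("JAVA_HOME") or properties.get("JAVA_HOME")
```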
debug is off
For example:
mstarrun metadatabuilder -n
mstarrun debug on
mstarrun minetracking.xoc
The command can be a file name or a target. You can use mstarrun to execute the following types of files:
l xoc files
l py files (Python programs)
l eep files
l mscript files
l sh files
l bat files
-b
-B
do not pass a bus URL to the application as its first parameter (default)
-c
show the DOS console (default, but usually overridden by client GUIs)
-d
-D
-e path
use the ZIP files listed in path as extension overrides – no matter what extensions the system would normally
pick up, if the same extension appears in the extension override path, use that one instead. This is to allow
testing of new extensions against existing installations.
-j
do not copy java.exe to a different name, just use the real one
-J
-pprogressFile
-PprofilerFile.jpl
-s system
run this application against the specified Fleet system (default is the standard Production system)
-w
-W
argcheck
A pattern used to check that any application arguments are valid. The following keys are supported by
argcheck:
If you completely omit the argcheck key, any set of arguments will be passed through to the underlying applic-
ation.
args
argformats
Space-separated list of 0-based indices of arguments to which interpretFormat should be applied before
passing them to the application. InterpretFormat interprets variable interpolations such as {MSTAR_HOME}.
argpaths
Space-separated list of 0-based indices of arguments to which interpretPath should be applied before passing
them to the application. InterpretPath interprets variable interpolations (by calling interpretFormat), and also
fixes path separators (/ and \) and canonicalizes directory names where possible.
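A rough illustration of the interpolation and path fixing described for argformats and argpaths. The real implementations are internal to mstarrun; these functions are assumptions based on the descriptions above:

```python
import os, re

def interpret_format(s, variables):
    """Replace {NAME} interpolations; unknown names are left untouched."""
    return re.sub(r"\{(\w+)\}",
                  lambda m: str(variables.get(m.group(1), m.group(0))), s)

def interpret_path(s, variables):
    """interpretFormat plus separator fixing (/ and \\) and normalization."""
    expanded = interpret_format(s, variables)
    return os.path.normpath(expanded.replace("\\", "/"))
```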
background
The name of a color. The color of the background in the window in which the process runs. Valid colors are
black, blue, green, aqua, red, purple, yellow, white, gray, light blue, light green, light aqua, light red, light
purple, light yellow, bright white, azul, amarillo, negro, blanco, and verde.
Set to 1 (true) or 0 (false). If true, it will automatically close any new window that was opened to run the
command in. Otherwise the window will stay open. The default is 1.
_COUNTRY
foreground
The name of a color. The color of the text in the window in which the process runs. Valid colors are black,
blue, green, aqua, red, purple, yellow, white, gray, light blue, light green, light aqua, light red, light purple, light
yellow, bright white, azul, amarillo, negro, blanco, and verde.
_LANGUAGE
The language to pass through to Java. Note that this is different from the mstarrun locale, which affects only
mstarrun's behavior.
newWindow
Set to 1 (true) or 0 (false). If true, it will start the process in a new window. In Windows, the new window is an
instance of 'cmd.exe'. The default is 0.
output
A path to be interpreted by mstarpaths. May also include {YYYY}, {MM}, {DD}, {HH}, {NN} to interpolate the
current year, month, day, hours, and minutes into the name. If specified, the filename will be interpreted
and the standard output and standard error of the application will be saved to that file.
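The date-token interpolation for the output key could look like this. The token names follow the manual; the code is illustrative only, since the real interpreter is mstarpaths:

```python
from datetime import datetime

def interpolate_output(pattern, now=None):
    """Expand {YYYY}/{MM}/{DD}/{HH}/{NN} tokens with the current time."""
    now = now or datetime.now()
    tokens = {"{YYYY}": f"{now.year:04d}", "{MM}": f"{now.month:02d}",
              "{DD}": f"{now.day:02d}", "{HH}": f"{now.hour:02d}",
              "{NN}": f"{now.minute:02d}"}
    for token, value in tokens.items():
        pattern = pattern.replace(token, value)
    return pattern
```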
passBusUrl
Set to 1 (true) or 0 (false). For file formats such as XOC and EEP, mstarrun sets this value to what it knows is
right, but for Java classes and other file formats, it may need to be specified. The default is 0.
_TIMEZONE
The time zone to pass through to Java. This must be a Java-defined time zone ID.
usage
A string that will be printed if the argument checking fails (see the argcheck key).
app.name
MSTAR_LOGS
MSTAR_TRACE
MSTAR_TEMP
MSTAR_HOME
MSTAR_CONFIG
MSTAR_ADMIN
MSTAR_XML
The directory which contains subdirectories for parts of the system which use XML. These subdirectories
contain DTDs and configuration information.
jdk.home
user.timezone
user.region
openorb.home
mstarrun Targets
The following section is an abbreviated list of mstarrun targets, a short description and their arguments and
options. Arguments and options in square brackets [ ] are optional.
Target Documentation
Each mstarrun target is documented as follows:
targetName
Usage
Description
Arguments
Options
Example
Notes
List of targets
The following is a list of the most commonly used mstarrun targets, and is by no means comprehensive. For a
complete list of available mstarrun targets, open a command shell and execute the following command:
mstarrun targets
applySystemOptions
Usage
mstarrun applySystemOptions
Description
checkDataStores
Usage
mstarrun checkDataStores
Description
Check that the model, historical, template and reporting datastores are correctly created.
checkUpdates
Usage
mstarrun checkUpdates
Description
This is a diagnostic tool used by Fleet Customer Support which checks that extensions and patches have
been installed correctly.
checkScheduler
Usage
mstarrun checkScheduler
Description
Check that the System Scheduler (WScheduler.exe) process is running and that all scheduled tasks created
for it are enabled (refer to makeScheduledTasks). If the scheduler is not running it will be started. If any task is
disabled, an e-mail message is generated.
Notes
This is the only Windows Scheduled Task that is generated. See startScheduler.
cleanExpiredData
Usage
mstarrun cleanExpiredData
Description
Generates an input file for DBDataMan based on entries specified in Supervisor, and then runs the process.
Notes
cleanExpiredFiles
Usage
mstarrun cleanExpiredFiles
Description
Scans directories and marks files as _MARKED_FOR_DELETION_<fileName> when their retention
period is reached. These marked files are permanently deleted when their deletion period is reached.
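The two-phase housekeeping described above can be sketched as follows (the thresholds, helper name, and use of file modification time are illustrative assumptions, not the product's implementation):

```python
import os, time

MARK = "_MARKED_FOR_DELETION_"

def clean_expired(directory, retention_secs, deletion_secs, now=None):
    """Phase 1: rename files past retention. Phase 2: delete marked
    files past the deletion period."""
    now = now or time.time()
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        age = now - os.path.getmtime(path)
        if name.startswith(MARK):
            if age > deletion_secs:          # phase 2: permanent delete
                os.remove(path)
        elif age > retention_secs:           # phase 1: mark for deletion
            os.rename(path, os.path.join(directory, MARK + name))
```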
Notes
createDataStores
Usage
mstarrun createDataStores
Description
For example, on a server machine where the Oracle admin drive is D: and where two additional drives are
required:
createDataStores performs the following functions, using the information specified in Supervisor and the argu-
ments entered at the command line:
cycleRecalcUtility
Usage
mstarrun cycleRecalcUtility
Description
emptyDataStore
Usage
Description
Arguments
<dataStoreName>
exportBIAR
Usage
mstarrun exportBIAR
Description
exportDataStores
Usage
Description
Runs the exportData target for named data stores (multiple or all can be run).
Options
-Z
-d
Arguments
<datastoreName>
Example
exportDataToXml
Usage
Description
Create a zip file containing one XML file per specified data set. If -a is specified, one XML file is created
containing all data sets. The export process exports any data set that is specified as well as any other objects
that are referenced. This means that the export set will be fully self-contained.
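The "fully self-contained" behaviour described above amounts to a transitive closure over object references, which can be illustrated with a small sketch (the dict-based data model is an assumption, not the product's):

```python
def export_closure(requested, references):
    """references maps an object id to the ids it refers to; returns
    every object that must be included for a self-contained export."""
    included, stack = set(), list(requested)
    while stack:
        obj = stack.pop()
        if obj in included:
            continue
        included.add(obj)
        stack.extend(references.get(obj, []))
    return included
```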
Options
-d
AllModelEntities
Users
PersonnelInformation
MachineInformation
MessageInformation
Standard Messages
MineLayout
MaterialInformation
ProductionInformation
AssignmentInformation
-a
-o
Export to the specified directory instead of to the default directory specified in Supervisor, typically
{MSTAR_DATA}/export
FieldStats
Usage
mstarrun FieldStats [-l] [-d] [-r] [-n] [-s] [-u script] [-N] [-M] [-L cellSize] [-D] [-H] [-T] [-W] [-U script] <gwm files
OR directory OR zip files>
Description
Use in conjunction with Field Communications Monitor in the office software. Use where you wish to
accumulate the statistics over a large number of days and don’t want to use the GUI. You can run this tool
against individual gwm files, directories of gwm files, or zipped office software files.
Options
-l
Calculate latency.
-d
-r
-n
-s
Count satellites.
-u
-N
Group by Machine.
-M
Group by location (the default cell size is a 50 m square; an optional argument can specify a new size).
-D
Group by Date.
-H
Group by Date/Hour.
-T
-W
-U
-a
AND - combine two dimensions to create a pivot table, e.g. -N -a -T to slice by machine AND time of day.
-h
-b
-x
Only accumulate values >= x for mean and standard deviation. Used to reject outliers.
-y
Only accumulate values <= y for mean and standard deviation. Used to reject outliers.
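The -x / -y outlier rejection can be illustrated as follows: only values within the given bounds contribute to the mean and standard deviation. This is an illustrative helper, not the FieldStats implementation:

```python
import statistics

def bounded_stats(values, lo=None, hi=None):
    """Mean and population standard deviation of values within [lo, hi]."""
    kept = [v for v in values
            if (lo is None or v >= lo) and (hi is None or v <= hi)]
    return statistics.mean(kept), statistics.pstdev(kept)
```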
NOTE: See the Pages chapter in your Fleet User Manual for a description of the Field
Communications Monitor page.
FileConverter
Usage
mstarrun FileConverter -inputfile report.txt -inputtype utf8 [-OutputFile output FileName] -outputtype ascii
Description
Converts files to and from various supported formats, but is mainly intended for the conversion of files into
ASCII format.
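A minimal sketch of the UTF-8 to ASCII conversion that is FileConverter's main use case (illustrative only; the real tool supports more input and output formats):

```python
import unicodedata

def utf8_to_ascii(text):
    """Decompose accented characters, then drop anything with no
    ASCII equivalent."""
    decomposed = unicodedata.normalize("NFKD", text)
    return decomposed.encode("ascii", "ignore").decode("ascii")
```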
Options
-OutputFile
-outputFileName
grabOnboardDiagnostics
Usage
Description
Retrieve onboard diagnostic files for a specified truck name and IP address.
grabState
Usage
mstarrun grabState
Description
GwmExport
Usage
Description
Allows support staff and other authorized personnel to better interpret and analyze field communications
messages.
If no bus URL is specified, raw values for parameters are output instead of names, e.g. the truck ID is printed
instead of the truck name.
Options
-c
-b
-o
Specifies an output.
import
Usage
Description
Used by support staff to import a database export into a named Fleet datastore.
The dump file can also be a .zip file containing a .dmp file.
Arguments
<datastore>
<dumpfilename>
ImportDataFromXml
Usage
mstarrun importDataFromXml [-e | -i] -f <filename> [-d data set] [-p | -k]
Description
Import data from a zipped XML export or manually edited XML file. The export directory is examined and any
zip or XML files found in that directory are displayed as choices.
Data can be imported into any database, not just the one that was used to generate the data. This is
performed by using data merging based on business keys defined in metadata.
Options
-f
Import from the specified file. This can be relative to the UFS export path or an absolute path.
-e
-i
-d
The data set to import. If no data set is specified, all data in <filename> is imported.
-p
Attempt to purge existing data before importing. This is not guaranteed to work due to data dependency
issues.
-k
Keep existing data. New data is imported but existing data is unaffected.
importExportedData
Usage
where:
l the first argument is the database you want the data to go into.
l the second argument is the directory the data was exported to.
Description
Used by support staff to reimport data which has previously been exported using DBDataMan. Previously
DBDataMan generated a set of Windows .bat files which needed to be executed.
For example,
initialiseTpiMachines
Usage
Description
Options
<list of machines>
initProdSystemFromSnapshot
Usage
Description
For application server failback, it extracts configuration information from the snapshot and applies it to the
office software.
l replaces the current model database with the one from the snapshot.
l merges historical and summaries data from the standby system back into the production database
using the mstarrun migrateStandbyDataToProduction task.
Options
-a
-d
The default is neither option. The command will then run on both the application and database servers.
-e
initStandbyDbFromSnapshot
Usage
Description
Options
-a
Used when run by the scheduled task to enable more efficient updating of the standby system, and is ignored
if the standby database server is currently being used as the production database.
Arguments
<snapshot file>
initSystemFromSnapshot
Usage
Description
Arguments
<snapshot file>
inspectModel
Usage
mstarrun inspectModel
Description
listClients
Usage
mstarrun listClients
Description
This is a diagnostic tool used by Fleet Customer Support staff which looks through the log files and displays
which clients have connected to the office software.
logConfigurationsEditor
Usage
mstarrun logConfigurationsEditor
Description
logspeedo
Usage
Description
Options
-png
Create a PNG image rather than displaying the results on the screen.
-0
Notes
logspeedo2
Usage
logspeedo3
Usage
Description
makeCatalogs
Usage
mstarrun makeCatalogs
Description
Refresh the specified catalogs for all job runner pages. If no arguments are specified, displays usage
information and a list of available catalogs. Catalog names are listed in the Arguments section below.
Options
-b
Generate catalogs in the base area {MSTAR_HOME}. If omitted, catalogs are generated in the system area
{MSTAR_SYSTEM_HOME}.
Arguments
l Collections
l DataSets
l Displays
l Documents
l Forms
l OptionSets
l Pages
l Permissions
l Reports
l Tools
l All
Example
makeDataStores
NOTE: From release 3.1 onwards, maintainKpiSummaries is obsolete, and has been merged with
makeDataStores.
Usage
To show usage:
mstarrun makeDataStores
"All parts" are for example, Checks, Schema, ModelData, Views, ReferenceData, HealthViews, Con-
sistencyCheck.
mstarrun makeDataStores x y z
Run one or more steps specified in Arguments to upgrade the schema, data and views in the databases to
reflect the application model.
Options
-q
Used to create database-specific objects. For example, mstarrun makeDataStores -db =summary
all creates the database objects specific to the summary database alone.
-major or -m
-pdropConstraints will not work in release 3.1 onwards. Although it will give the appearance
of being successful, it will not perform a complete update to the database.
-dropConstraints or -dc
This flag is still supported, but is no longer used for major system upgrades. A major system upgrade auto-
matically drops constraints.
-ConsistencyChecker
Run automatically at the end of a makeDataStores, where the major flag has been specified; otherwise can
be invoked manually. Only operates on the model database; however, you can also run this target on the
historical database.
If you do not use the ConsistencyChecker option, the consistency check process will just report on any data
integrity problems in the database.
If you are setting this in Supervisor and do not select the ConsistencyCheck check box, the same thing will
occur. If you select the option, any issues found are fixed.
-verboseSchema
-riSchema
-verboseViews
-warnWhenShorteningViews
-skipViews
-skipHistoricalLookups
-skipUniverse
-skipReportingMetadata
-skipSummary
-numericsAsMeasures
-health
-legacyHealth
Arguments
Checks
Schema
ModelData
Normally runs after Schema and loads XML-based data into the model. Loads any XML file found in
{MSTAR_XML}/modeldata.
Views
ReferenceData
ConsistencyCheck
all
Health
LegacyHealth
Example
Create/Update Usage
Since maintainKpiSummaries is integrated into makeDataStores, there is no separate command retained for
creating tables, dimensions, etc. for summaries.
Drop Usage
Currently, the only option for cleaning up the entire database is by using emptyDataStore.
The following command can be used to exclude the summaries while running makeDataStores.
Please note that the entire system need not be brought down to upgrade the summaries. The summary
database alone can be called using
Please note that the above command should be able to update the KpiSummaries schema objects without
the need to run mstarrun emptyDataStore SUMMARYDB first.
ModelData
This argument can be used to load standard truck and loading tool activities into the model database.
When you run makeDataStores, the standard activities are loaded into folders called "Trucks
(Required)" and "Loading Tools (Required)" as defined in the XML data. These folders are marked as
read only (a new attribute on activity groups). Note that this functionality is not yet fully supported.
When new activities are loaded, any existing activities with the same name are renamed to "XXX
(OLD)" - this is because some sites have already defined a Hang Time activity for example. Note that
this must be defined as Hang.Time, not Hang Time. The old activities can be deleted. The standard
internal activity name is always Hang.Time – internationalization determines what is displayed in the
user interface, for example, "Wait For Truck".
You can also use this argument to import Alarm Types. The office software requires information about
priority, resolution, etc. to be stored about each alarm it can raise, i.e., Alarm Type information. The office
software ships with pre-defined Alarm Type information which is imported into the model database as
part of the makeDataStores process. makeDataStores has a step called ModelData which runs an
mstarrun target called importDataFromXml. This run target takes XML files containing mine model
information and imports that into the model database. The office software uses this mechanism to
ensure that the necessary Alarm Type and Activity information is always available in the mine model
when a system is set up. The files which ship with the office software containing this information are
called AlarmTypes.xml and Activities.xml and live in the xml/modeldata directory.
When importing information, the office software will by default overwrite existing information already in
the database. There is a parameter called keepExisting which can be used to ensure that if a record is
already present in the database, the information will not be overwritten from the XML file. This is false
by default but is set to true when run from makeDataStores to ensure that any user modifications to
alarm type information etc are preserved and only new alarm type records are imported.
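The keepExisting merge behaviour described above can be sketched as follows (the record shape and business-key name are assumptions for illustration; the real merge is driven by metadata):

```python
def merge_records(existing, incoming, key="name", keep_existing=False):
    """Match records on a business key; keep_existing=True preserves
    site modifications, otherwise incoming records overwrite matches."""
    by_key = {r[key]: dict(r) for r in existing}
    for rec in incoming:
        if rec[key] in by_key and keep_existing:
            continue                     # preserve the existing record
        by_key[rec[key]] = dict(rec)     # insert or overwrite
    return list(by_key.values())
```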
makeDataStores isRequired
Description
After you apply a patch, you must run the command mstarrun makeDataStores isRequired. This command
informs you whether or not there are changes in the schema.
If the schemas are up to date, the following message displays: “Schemas are up to date.” If the schemas are
not up to date, the following message displays: “Schema check - out of date database schemas: Run
makeDataStores. Do you want to get the details? Press Y or N and enter.”
After pressing 'Y', the changes required in the new schema are listed.
makeScheduledTasks
Usage
where:
Description
Generates a Windows scheduled task and SystemScheduler scheduled tasks required for on-going
administration of various roles.
Arguments
AppServer
DbServer
Client
all
Options
-e <directory>
The directory in which to generate the task files for the SystemScheduler utility.
Example
makeShortcuts
Usage
mstarrun makeShortcuts
mstarrun makeShortcuts x
Description
makeSystem
Usage
Description
Creates systemName if it doesn’t exist, or updates it to include any new patches or other changes. If it
doesn’t yet exist, systemName is created in \mstarFiles\systems\
Arguments
<systemName>
Options
<centralDir>
The network path to the shared central directory of a system that is already installed.
-u
Upgrade the build to use mstarHome install and delete all patches.
-k
-f
Force generating Jetty web application directories, which is useful for running Jetty as a service (Force Jetty).
-v
Verbose mode.
--version
-h
migrateStandbyDataToProduction
Usage
mstarrun migrateStandbyDataToProduction
Description
Option
-e
-n
printClassPath
Usage
mstarrun printClassPath
Description
printPatches
Usage
mstarrun printPatches
Description
printSystemProperties
Usage
mstarrun printSystemProperties
Description
profileTraceFiles
Usage
Description
Options
<dataStore>
The Oracle Database Instance to run SQL trace for. The default is _HISTORICALDB.
<OracleAdminDir>
<TraceFilesDir>
recentTravelTimes
Usage
Description
Extracts stored travel time data by machine class and road segment for the specified number of days, and
saves the information in {MSTAR_DATA}/TravelTimeData.txt
Options
<Days>
The number of days to look back to gather data. The default is 14.
refreshBuild
Usage
mstarrun refreshBuild
Description
Check that the nominated build has been unpacked and unpack it if necessary. Create the necessary
directory structure for the build if it does not exist, and update MineStar.ini to point to the new build.
replaceDataStoresWithModel
Usage
Description
Import a model and pitmodel database and make a matching model/historical pair which is ready to run.
<modelDumpFile>
<pitmodelDumpFile>
replaceDataStoresWithXml
Usage
Description
Import an XML model database and make a matching model/historical pair which is ready to run.
Options
<filename>
runSQLTrace
Usage
Description
Options
dataStore
The Oracle Database Instance to run SQL trace for. Can be either _MODELDB or _HISTORICALDB
(default).
sendAllToSupport
Usage
Description
Sends all files in {MSTAR_BASE_CENTRAL}/outgoing to Fleet Customer Support and then moves them to
{MSTAR_BASE_CENTRAL}/sent
Options
FTP
MSG
ATT
Example
sendCommand
Usage
Description
Options
beep
echo <stuff>
The gadget is started inside the object server. Usually used to turn on extra diagnostic information.
Tells the object server about a new command that should be added to this list. This is used to allow the object
server to execute commands which did not exist when the object server was started, as long as the Java
code can be inserted into the process's classpath.
sendLogging
Usage
Description
Send a new logging configuration file to an object server, i.e., to change its logging or tracing without restarting
it.
Options
<ObjectServer>
<filename>
The name of the logging configuration file to send. This is typically TopologyConfig.properties.
showDbConnections
Usage
Description
Arguments
<dataStore>
Example
snapshotDb
Usage
Description
Perform a snapshot of DbName. If RunMode is not specified, User is assumed. If OutputFilename is not
specified, the snapshot is created using the default file naming format, and in the directory specified in
Supervisor.
Options
<DbName>
<RunMode>
The mode in which to perform the snapshot, either H (System) or U (User). Default is U.
<OutputFilename>
Optional filename for snapshot. If blank, the default naming scheme and directory is used.
Example
snapshotOs
Usage
Description
Perform a snapshot of the active Operating System. If RunMode is not specified, User is assumed. If
OutputFilename is not specified, the snapshot is created using the default file naming format, and in the
directory specified in Supervisor.
Options
<RunMode>
The mode in which to perform the snapshot, either H (System) or U (User). Default is U.
<OutputFilename>
Optional filename for snapshot. If blank, the default naming scheme and directory is used.
Example
mstarrun snapshotOs
snapshotSystem
Usage
Description
Perform a System Snapshot. This is typically a scheduled task (AUTO and STANDBY) but can also be
performed interactively using Supervisor or a desktop shortcut (USER mode). Snapshot files are saved in the
directory specified in Supervisor.
Options
-d
-o
<runMode>
Example
startScheduler
Usage
mstarrun startScheduler
Description
Notes
switchActiveDatabase
Usage
Description
One or both of the -r and -p options needs to be specified. The role will be one of PRODUCTION, STANDBY,
or TEST.
syncCycles
Usage
Description
Performs some automatic operations on production data. The output of the command is a list of cycles
followed by details, for example:
The keyword CYCLE indicates that the line represents a cycle – automated processes may use this keyword
to select only those lines. The keyword is followed by the name of the primary machine of the cycle, then the
start time, a dash, then the end time. The detail lines vary from target to target, but are always indented to
show that they refer to the cycle line above.
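A consumer of this output might parse it as follows. The exact field layout (machine name, start time, dash, end time) is assumed from the description above:

```python
def parse_cycles(text):
    """Collect CYCLE lines and attach their indented detail lines."""
    cycles = []
    for line in text.splitlines():
        if line.startswith("CYCLE"):
            _, machine, start, _, end = line.split()
            cycles.append({"machine": machine, "start": start,
                           "end": end, "details": []})
        elif line.startswith(" ") and cycles:
            cycles[-1]["details"].append(line.strip())
    return cycles
```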
Targets
check
Checks that all cycles in the given range have all mandatory fields. This is the same check that causes red
cycles in Cycle Assistant. The detail lines specify which attributes are missing.
misdump
Displays all cycles for which the assigned sink is different from the actual sink. The detail is the keyword
“Assigned” followed by the assigned location name, then the keyword “Actual” followed by the actual sink
name.
report
Displays all cycles which have data consistency problems. The details describe the problems found.
sync
This target modifies production data. For each cycle found, the target synchronizes the delays with cycles
then ensures consistency of the cycle activities. This is the same as the “synchronise delays” action in Cycle
Editor. Note that this action does not consider whether the delays being synchronised make sense in the
context of the cycle. This target will resolve some of the problems reported by the “report” target, but cannot
resolve, for example, the presence of undetermined activities.
syncStandbyInformation
Usage
mstarrun syncStandbyInformation
Description
updateMaterialGroup
Usage
Description
MaterialGroup
validateKpiSummaries
From release 3.1 onwards, maintainKpiSummaries is obsolete, and has been merged with
makeDataStores. The validation features that were present in maintainKpiSummaries are currently
retained under validateKpiSummaries.
Usage
validate | list
Examples
Was
Now
Was
Now
Was
Now
uploadHistory
Usage
Description
Show last upload records in console or generate an upload history report in HTML format for the last 30 days.
Options
-quick
Show last upload records in console. The following parameters are ignored: days, address, file.
-days
-address
-file
Only include upload records for the specified file (the original name of the VIMS file).
validateWaypoints
Usage
Description
Validate and correct all waypoints in the office software according to options.
Options
-updateLoaders
-updateLoadoutUnits
-updateProcessors
-updateShovels
-rename
-removeUnused
-listUnused
-processorOnly
Remove processor waypoints not used by servers, destinations, or roads, and list other waypoints not used
by servers, destinations, or roads.