Disclaimer
Information of a technical nature, and particulars of the product and its use, is given by AVEVA
Solutions Ltd and its subsidiaries without warranty. AVEVA Solutions Ltd and its subsidiaries disclaim
any and all warranties and conditions, expressed or implied, to the fullest extent permitted by law.
Neither the author nor AVEVA Solutions Ltd, or any of its subsidiaries, shall be liable to any person or
entity for any actions, claims, loss or damage arising from the use or possession of any information,
particulars, or errors in this publication, or any incorrect use of the product, whatsoever.
Copyright
Copyright and all other intellectual property rights in this manual and the associated software, and every
part of it (including source code, object code, any data contained in it, the manual and any other
documentation supplied with it) belongs to AVEVA Solutions Ltd or its subsidiaries.
All other rights are reserved to AVEVA Solutions Ltd and its subsidiaries. The information contained in
this document is commercially sensitive, and shall not be copied, reproduced, stored in a retrieval
system, or transmitted without the prior written permission of AVEVA Solutions Ltd. Where such
permission is granted, it expressly requires that this Disclaimer and Copyright notice are prominently
displayed at the beginning of every copy that is made.
The manual and associated documentation may not be adapted, reproduced, or copied, in any material
or electronic form, without the prior written permission of AVEVA Solutions Ltd. The user may also not
reverse engineer, decompile, copy, or adapt the associated software. Neither the whole, nor part of the
product described in this publication may be incorporated into any third-party software, product,
machine, or system without the prior written permission of AVEVA Solutions Ltd, save as permitted by
law. Any such unauthorised action is strictly prohibited, and may give rise to civil liabilities and criminal
prosecution.
The AVEVA products described in this guide are to be installed and operated strictly in accordance with
the terms and conditions of the respective license agreements, and in accordance with the relevant
User Documentation. Unauthorised or unlicensed use of the product is strictly prohibited.
First published September 2007
AVEVA Solutions Ltd, and its subsidiaries
AVEVA Solutions Ltd, High Cross, Madingley Road, Cambridge, CB3 0HB, United Kingdom
Trademarks
AVEVA and Tribon are registered trademarks of AVEVA Solutions Ltd or its subsidiaries. Unauthorised
use of the AVEVA or Tribon trademarks is strictly forbidden.
AVEVA product names are trademarks or registered trademarks of AVEVA Solutions Ltd or its
subsidiaries, registered in the UK, Europe and other countries (worldwide).
The copyright, trade mark rights, or other intellectual property rights in any other product, its name or
logo belongs to its respective owner.
Contents

Logging
Global
Update Frequency
Timing of Updates
Checking Locations are Aligned
Change Primary - Repair Process
Risks of Aligning Databases Across Locations by File Copying
Flushing/Issuing
Transaction Database
Daemon Log File
admnew Files
Introduction
This document proposes a set of guidelines for the effective use of the AVEVA Global
product. The guidelines result from current working experience and may be amended in the
light of future experience. Global manages a project distributed over several different
geographical locations connected by a Wide Area Network (for example the Internet) and so
presents special situations for the administrator and engineering user, which the guidelines
address.
Note: References to 'Windows' in this document mean MS Windows 2000 or MS Windows
XP.
AVEVA Global can be used to enhance projects created in either the AVEVA Plant or
AVEVA Marine group of products - henceforth known as the base product in this
document.
Global Daemon
The Global daemon (sometimes referred to as the ADMIN daemon) is supplied with the
Global product, in the default install folder. It uses RPC, which is part of the standard
Windows software, and so no additional software has to be installed.
There must be one Global daemon running for each Project at a Location.
Installing the Global daemon is described in the Global Installation Guide, configuring and
starting the daemon is described in the Global User Guide.
3.1
3.2
3.3
The Dabacon buffer size can be changed by using the MODULE command. See the
Administrator Command Reference Manual for details.
Daemon Diagnostics
The Global daemon has the following types of diagnostic output:
Tracing.
Logging.
4.1
Tracing
Tracing can be switched on when you start the daemon. If you are running the Global
daemon as a service, add a line to the startup batch file singleds.bat to set the environment
variable DEBUG_ADMIND as follows:
DEBUG_ADMIND=1023
If you are not using the Global daemon as a service, you can set DEBUG_ADMIND from the
command line.
The value of the DEBUG_ADMIND variable determines the type of activities that are traced:

Value   Activity traced
1       Not used
2       Not used
4       Trace
8       Thread Library
16      Systems DB Access
32      Dabacon Thread
64      Event Loop
128     Operation Thread
256     Trans DB I/O
512     Not used
1024    Not used
2048    Dabacon Detail
These values are bit settings, so if you want to trace a combination of activities, you add the
above values together. For example, to trace Systems DB access and the Event Loop
thread only, you would set DEBUG_ADMIND as follows:
DEBUG_ADMIND=80
To enable tracing for all activities, you would set the DEBUG_ADMIND value to 3071. A
useful level of tracing for tracking commands is 896.
Full tracing can be verbose and fill disk space rapidly; the recommended value of 896 allows
the administrator to gain an idea of the current number of commands running through the
system.
This may help when bringing down a daemon at a particular location. Further tracing may be
required when investigating a particular problem.
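Because DEBUG_ADMIND is a bit mask, a given value can be decoded back into the activities it traces. A minimal sketch follows; the bit assignments are taken from the values documented in this section (64 is labelled Event Loop, as implied by the DEBUG_ADMIND=80 example), and this is an illustrative helper, not part of the product.

```python
# Documented DEBUG_ADMIND bit flags (unused bits omitted); illustrative only.
ADMIND_FLAGS = {
    16: "Systems DB Access",
    32: "Dabacon Thread",
    64: "Event Loop",
    128: "Operation Thread",
    256: "Trans DB I/O",
    2048: "Dabacon Detail",
}

def traced_activities(value: int) -> list:
    """Return the documented activities enabled by a DEBUG_ADMIND value."""
    return [name for bit, name in sorted(ADMIND_FLAGS.items()) if value & bit]
```

For example, `traced_activities(80)` yields Systems DB Access and Event Loop, matching the example above, and `traced_activities(896)` shows why 896 is a useful command-tracking level.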
4.2
Logging
It is beneficial to have the daemon log setting activated, both for troubleshooting and to help
the System Administrator see how the Global daemons are functioning. The diagnostics are
activated by configuring the Global ADMIN comms log.
The Global ADMIN comms log is activated from Daemon>Daemon Settings. This will
display the Local Daemon Settings form. In the appropriate text boxes, enter the
Diagnostic Logfile name, the Diagnostic Level (see below), and finally Enable the
Diagnostic Logging using the drop-down list. (Note: If you use an environment variable in
the log file path, it must be defined in the daemon script or in the window from which you
started the daemon.)
4.2.1
Diagnostic Level
The number to be entered is the sum of the code numbers for the individual requirements
shown below:
Value   Logged
0       None
16      Received summary
32      Received detail
64      Send summary
128     Send detail
255     Full logging
The log files can be sent to the administering location at regular intervals.
The log file will get bigger over time. If you want to keep the log record, but start a new file,
move the log file to another directory.
The daemon checks the log file location every 15 minutes. It keeps writing to the moved log
file until this check finds that the file has moved, at which point a new log file is
generated.
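This rotation pattern (move the old file aside, let the daemon notice on its periodic check and start a fresh file) can be sketched as follows. The function name and the check itself are illustrative; they model the behaviour described above, not the daemon's actual code.

```python
import os

def ensure_log_file(log_path: str) -> bool:
    """Start a fresh log file if the old one has been moved away.

    Mimics the daemon's periodic check of the log-file location: returns
    True when a new (empty) log file was created. Illustrative sketch only.
    """
    if os.path.exists(log_path):
        return False  # still writing to the original file
    # The administrator moved the old file aside; begin a new log file.
    with open(log_path, "w"):
        pass
    return True
```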
Note: Logging does not capture the same data as tracing; for full debugging purposes, the
trace facility provides much more comprehensive internal diagnostics.
Database Allocation
5.1
ALLOCATE
5.2
A Get Work must be done before listing the DBALL; that is, keep issuing a Get Work to see
when the databases have been allocated. Allocation is complete when the DBALL list
contains all of the allocated databases.
getwork
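The "repeat Get Work until the databases appear in DBALL" check amounts to a polling loop. In the sketch below, `get_work` and `list_dball` are stand-in callables for the real ADMIN operations, and the database names in the usage are hypothetical; this is an illustrative model, not an AVEVA API.

```python
def allocation_complete(expected_dbs, list_dball):
    """True once every expected database is visible in the DBALL list."""
    return set(expected_dbs) <= set(list_dball())

def wait_for_allocation(expected_dbs, get_work, list_dball, max_polls=10):
    """Poll with Get Work until allocation is visible, or give up."""
    for _ in range(max_polls):
        get_work()  # refresh this session's view of the databases
        if allocation_complete(expected_dbs, list_dball):
            return True
    return False
```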
5.3
/Satellite
q mem
De-allocating Databases
The same principles for allocating databases, as described above, apply to de-allocating
databases.
If users are reading a database at any satellite location and it is de-allocated at that location
by the hub while it is being read, then the database(s) de-allocated will not immediately be
deleted from the satellite locations.
The command will be stalled in the transaction database and, once all users at the location
exit their session, the database(s) will be de-allocated and the database files deleted.
Note: Only secondary databases can be de-allocated. If a database is primary at a satellite,
first make it secondary, then de-allocate it. If you change a database from primary to
secondary while a user is reading/writing to it, the user will be able to write to the
database until that user changes modules. A database does not need to be primary at
the Hub, as long as it is not primary at the location where it is being de-allocated.
5.4
5.4.1
admnew Files
.admnew files are created when the whole database needs to be propagated. This may
occur:
Whenever a database is allocated to a location. The database is copied from the hub to
the new location by the Global daemon. As the file is copied over the network
connection, a file named prjnnnn.admnew is created. Once copying is complete, this file
is renamed automatically to prjnnnn.
Whenever a primary database is merged. The next update will force the entire
database to be propagated.
If there are READERS of the database - users are accessing an MDB which contains
the database (even if they are only in MONITOR) - Global will not attempt to rename
the .admnew file until all such users have exited or switched to an MDB which does
not include the database. Once all such users have exited, the copy will normally
succeed.
If the database is locked by a dead user - a session for a user which has been
expunged - Global attempts to rename the .admnew file, but it fails.
To resolve the second situation, you must do one of the following: either
Ensure that the sessions for all dead users have been killed. Also ensure that no
foreign projects are reading this database; or
Use the NET FILE command in a cmd window (or a suitable third party tool) to identify
network access to the file, and close it.
If the project is not used as a foreign project, there is an additional alternative. The
Overwrite DB Users flag - the attribute LCPOVW of the LOC element - for a location
controls whether a locked file may be overwritten. If this attribute is set TRUE and
there are no database READERS in the project, then Global will overwrite the locked
file with the .admnew file.
Note: This should not be done if other projects include this database as a foreign project,
since these are valid READERS that are not recorded in the session data for the
Global project.
Overwriting of locked databases may be enabled by using the MODIFY dialogue for the
location on the Admin Elements form to enable Overwriting, or by setting the Overwrite DB
Users flag (LCPOVW) to TRUE for the appropriate LOC element on the command-line.
See also Database File Locks.
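The commit rules above (wait for readers, fail on a dead-user lock, overwrite only when LCPOVW is TRUE and there are no readers) can be collected into one decision function. This is an illustrative model of the description in this section, not AVEVA's implementation; the return strings are invented labels.

```python
def admnew_commit_action(readers: int, file_locked: bool,
                         overwrite_db_users: bool) -> str:
    """Decide the fate of a prjnnnn.admnew file, per the rules above."""
    if readers > 0:
        # Users still have an MDB containing the database open.
        return "wait"          # keep .admnew; retry after readers exit
    if file_locked and overwrite_db_users:
        return "overwrite"     # LCPOVW is TRUE and there are no readers
    if file_locked:
        return "rename-fails"  # dead-user lock: kill sessions or close the file
    return "rename"            # normal case: .admnew replaces the database file
```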
5.4.2
Merging Databases
When setting up a project in a Global environment, you are likely to create many sessions in
the Global database. This is because when ADMIN issues a Daemon command, it first does
a SAVEWORK to give the Daemon an up-to-date view of the Global database. The Daemon
also may add sessions to the Global database.
We recommend that you merge changes for the Global database, and possibly the
system database, after setting up a Global project. This should also be done after making
significant changes to the project setup.
6.1
6.1.1
6.2
Select a suitable time to execute the merge, and ensure that all users have left the
project.
Use CHANGE PRIMARY to bring all its immediate child extracts to the primary location
of the extract being merged.
Use CHANGE PRIMARY to return the child extracts back to their original primary
location.
Optionally, the databases could be copied (by ftp or similar) to all secondary locations
manually after the MERGE (and before the second set of CHANGE PRIMARY
commands). This avoids the need for the next Update to copy the entire file.
Normally merging would be carried out on the entire extract hierarchy at the project
Hub. However if an extract database owns working extracts, it must be merged at its
original primary location, since the working extract files only exist at that location.
7.1
7.2
7.2.1
Program Initialisation
The program reads out of the database all input commands not in a final state (processed,
timed out, cancelled), together with all the owned operations and output commands, and
starts progressing these commands. Only unfinished commands will be read. All others will
be ignored and not validated for errors.
If there are any errors found in reading the database, the daemon will not start. It will then be
necessary to provide a (probably) empty database so that the daemon will start from fresh
and not progress any previously running commands.
7.2.2
7.3
It provides the user with information on the progress of his commands, allowing him to
browse the messages that have been received and persisted in this database.
It provides the administrator with an audit trail to determine if and where a problem has
occurred.
It provides other modules of the base product with a store to deposit local
administration information such as element claims. However, these commands are
totally ignored by the daemon.
7.3.1
failure can terminate the TRINCO. Its TRPASS will be set to FALSE, its state will be
Complete or a later state, and it may own a TRMLST/TRMESS and perhaps a TRFAIL but
no TROPERs or TROUCOs. Input commands can be given a delayed start time (EXTIME)
after which operations will be generated. It will wait in the Waiting state until this time has
passed. This stay of execution will persist until EXTIME has expired, even if this is a longer
period than the Time out.
The TRINCO stays in Ready state for as long as all its operation and output commands
take to complete. Once the TRINCO has been set to ready the command cannot time out
until all operations have also timed out.
When all member operations and output commands have completed INCSTA is set to
Complete. All failures and successes generated by them are collected together and
handed on to the sending TROUCO (which stores them). The success state of the
command (TRPASS) is set to true if all operations have succeeded. INCSTA is now at
Replied.
Once a reply acknowledgement has been received back from the previous location, INCSTA
is set to Processed and no more actions will take place.
There are other terminating conditions of a TRINCO; Timed Out means that the command
did not manage to start before either its end time was reached, or the number of retries
allowed was exceeded. It will not own any TROUCOs or TROPERs.
The state is set to Cancelled if the command is cancelled before any significant action took
place. Owned TROUCOs and TROPERs may be set to cancelled if they have not yet
started work: subsequent operations that depend on them will be set to Redundant.
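The TRINCO lifecycle described above can be modelled as a small state machine: Waiting to Ready to Complete to Replied to Processed, with Timed Out and Cancelled as terminal exits. The transition table below is an illustrative reading of this section, not an official specification.

```python
# Allowed forward transitions for a TRINCO, as read from this section.
TRINCO_TRANSITIONS = {
    "Waiting":  {"Ready", "Timed Out", "Cancelled"},
    "Ready":    {"Complete", "Timed Out"},
    "Complete": {"Replied", "Cancelled"},
    "Replied":  {"Processed"},
    # Terminal states: no more actions take place.
    "Processed": set(), "Timed Out": set(), "Cancelled": set(),
}

def can_transition(current: str, new: str) -> bool:
    """True if the modelled lifecycle allows moving from current to new."""
    return new in TRINCO_TRANSITIONS.get(current, set())
```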
7.3.2
has been created to store the command. This is stored in the TROUCO's CMREF attribute.
For remote locations this will usually be an unknown reference, since the specific transaction
database is not visible. It can be used to track the command down the chain of locations if
the administrator can see all the databases.
When a reply is received, OUTSTA becomes Replied. Any reply data is stored under
TRFLST and TRSLST elements and the TRPASS attribute, and OUTSTA goes to
Processed.
TROUCOs can terminate by timing out if they fail to send in the lifetime prescribed (Timed
Out). They may never be sent if dependencies are not met, in which case they terminate as
Redundant.
7.3.3
7.4
TRINCO:

State          Date attribute
RECEIVED       DATECR
ACKNOWLEDGED   DATEAK (NACKN)
READY          DATERD
COMPLETE       DATECM
REPLIED        DATERP (NREPLY)
TIMEDOUT       DATEND
CANCELLED      DATEND
PROCESSED      DATEND (NREPAK)

TROUCO:

State          Date attribute
WAIT           DATECR
READY          DATERD
SENT           DATESN (NRETRY)
ACKNOWLEDGED   DATEAK (NACKN)
REPLIED        DATERP (NREPLY)
COMPLETE       DATERK (NREPAK)
TIMEDOUT       DATEND
CANCELLED      DATEND
REDUNDANT      DATEND
PROCESSED      DATEND

TROPER:

State          Date attribute
WAIT           DATECR
READY          DATERD
RUNNING        DATERN (NRETRY)
STALLED        DATESL
COMPLETE       DATECM
TIMEDOUT       DATEND
CANCELLED      DATEND
REDUNDANT      DATEND
PROCESSED      DATEND

7.5
Cancelled Commands
Commands can be cancelled at the location where they were first input. There are rules as
to what a particular user may cancel, but this section describes what happens in the
daemon once a cancel command has been passed to it.
Cancellation only applies to TRINCOs and not to any particular operation it has. The
cancellation is immediately effected if the TRINCO has INCSTA of state Waiting or
Stalled.
If the TRINCO is Ready then all of its operations and output commands are inspected. If
these TROPERs and TROUCOs are all Waiting, Ready or Stalled, then those in Ready
or Stalled state are set to Cancelled, the waiting ones become Redundant, and the
TRINCO becomes Complete and then Cancelled.
TRINCOs in other, later states are not cancellable and the cancellation is rejected.
A Message is stored with the command as to whether the cancellation was effected, or
rejected.
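The daemon's cancellation decision can be sketched as a function over the TRINCO's state and the states of its owned TROPERs and TROUCOs. The state strings mirror the names used above; the function itself and its return labels are an illustrative model of this section.

```python
def cancel_trinco(incsta: str, member_states: list) -> str:
    """Apply the cancellation rules described above to a TRINCO.

    incsta: the TRINCO's current state.
    member_states: states of its owned TROPERs and TROUCOs.
    Returns 'effected' or 'rejected'. Illustrative model only.
    """
    if incsta in ("Waiting", "Stalled"):
        return "effected"  # cancellation takes effect immediately
    if incsta == "Ready":
        if all(s in ("Waiting", "Ready", "Stalled") for s in member_states):
            # Ready/Stalled members -> Cancelled; Waiting members -> Redundant;
            # the TRINCO itself becomes Complete and then Cancelled.
            return "effected"
    return "rejected"  # later states are not cancellable
```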
7.6
Messages are not stored in the database except under the TRINCO that was originally
received from a user (not another location's TROUCO) and under the TROPERs and
TROUCOs that the TRINCO owns. This is because messages are collected together
regularly by each TRINCO as its operations progress, and these are passed back to the
TRINCO's originating TROUCO. If the sender was the user, then the messages are stored in
the database for review under the relevant element. In particular, when these messages are
passed between sites, the TROUCO receives a set of messages, each of which may have
been generated by a different operation, and yet they will now all belong to the single
TROUCO. The messages contain sufficient attribute information to indicate the location that
the message originated from, the operation type, etc.
When the messages are finally stored below the originating command, successes and
failures are persisted as TRSUCC and TRFAIL elements under a TRMLST element. This
distinguishes them from the result successes and failures that are persisted when the
operation or output command finally completes.
The diagram on page … describes the elements created for a simple claim command
between two locations. It provides an idea of the elements created in both transaction DBs.
7.7
7.7.1
In this case, all successful database updates report no data to send, since the database
was up to date. This is reflected in the summary, which reports the number of successful
Copies and Updates. Note that the success for the Global db is also reported as
database=0/0.
A scheduled update normally only sends the latest sessions for a database - this is an
Update. However, if the database has been merged or had another non-additive change
(reconfigure, backtrack), then the entire database file must be copied. Database copies are
always executed at the destination (the location to which the file must be copied).
The file is copied from the remote location to a temporary file with the suffix .admnew and
then committed. The database copy cannot be committed in the following circumstances:
There are dead users (file is locked) and Overwriting is disabled (see below)
If the commit fails, the .admnew file will be retained. The next copy attempt will test this
file against the remote original to see whether the remote copy stage must be repeated.
In the case of updates, the number of sessions and pages sent is also reported in the
success for each database, as well as accumulated in the update summary. In the case of
copies, the number of pages sent will only be reported if the copy is executed locally. For
DRAFT databases, the number of picture-files sent is also reported.
The update summary also reports on the number of other data files transferred (see also
success for Exchange of other data). Note that this will always report a success even if
there is nothing to transfer or Other data transfer is not set up.
7.7.2
In this case, the databases could not be propagated, since the secondary database had a
higher compaction number than the primary database. This may happen when a remote
merge is executed without stopping scheduled updates. Normally it will be necessary to
recover the database to resolve this error.
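Reverse propagation is detected by comparing compaction numbers: a secondary copy that is ahead of the primary must not be updated over. A minimal sketch of that check follows; the function and its parameters are illustrative, not an AVEVA API.

```python
def update_allowed(primary_compaction: int, secondary_compaction: int) -> bool:
    """An update may propagate sessions only if the secondary is not ahead.

    A secondary with a higher compaction number than the primary (e.g. after
    a remote merge executed without stopping scheduled updates) indicates
    reverse propagation; the database must normally be recovered instead.
    Illustrative model of the check described above.
    """
    return secondary_compaction <= primary_compaction
```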
Prevention of Reverse propagation may also be reported in the following situation - a
satellite has executed a direct update (UPDATE DIRECT from the command-line) with a
non-neighbour satellite. The next scheduled update with the intermediate location will report
Prevented reverse propagation. In this case, scheduled updates will eventually resolve the
situation.
The following table summarises Failure messages that can be generated for Scheduled
updates. This does not include all possible failures that may be generated from failed file
copies.
Symptom: Update will not report results to CAM.
Reason: This failure cannot be reported at CAM - usually due to the location being
unavailable.

Symptom: Update failure - database pages are not contiguous.
Reason: The database file is corrupt at the destination. This database must be
recovered from its primary location.

(Error numbers 610 to 615, 619, 628 and 630 appear in this table; their remaining
symptom and reason entries are not reproduced here.)

7.7.3
In this example, the database still had readers, so the copy could not be completed. An
additional failure reports that 18 pages have been copied from the remote location. The next
retry validates the .admnew file, but still cannot commit it due to readers. A further retry
validates the .admnew file again and attempts to commit it. In this case there are no
readers, but the file is locked.
In this case, the SYNCHRONISE command eventually succeeded, since Overwriting was
enabled. Note that the Successful file copy success reports that nothing has been copied,
since the remote copy stage was executed successfully on an earlier try, when the copy
failed.
Detailed failures for file copies can only be reported at the destination. During a scheduled
update, the success of a copy is verified by checking that the compaction number has
changed. If the copy was executed at the location which executes the scheduled update,
then additional failures may show more detail. (Note this is the partner location for a
scheduled update, not the originator!)
7.8
Action                  Cause of Failure
Commit Allocate DB
Initialise
Set Systemloc
Set Primary
Change Hub
Recover Hub
Unlock DB allocation

(The corresponding Cause of Failure entries are not reproduced here.)
Refer to Extract Flush Commands Failing and Reasons Claims and Flushes can Fail for
non-Admin command failures.
7.9
Pending File
On a Global network, most remote commands that are stalled for any reason at a location
are placed in the transaction database at that location for later processing (see next
chapter).
A small number of commands that cannot be carried out at once, known as kernel
commands, are instead stored in a location's pending file for later processing. There are
various situations where kernel commands may be added to a pending file. For example:
ISOLATION TRUE/FALSE
LOCK/UNLOCK
PREVOWNER HUB
ALLOCATE (PRIMARY)
CHANGE PRIMARY
All other commands use the transaction database to achieve a similar effect (see next
chapter).
Once a pending file has been created at a location, it will continue to exist. When the kernel
commands stored in it have been executed, they will be deleted from the file. You can tell if
there are any outstanding commands by the size of the file: if it is empty, it will be zero size.
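Since executed kernel commands are deleted from the pending file and an empty pending file has zero size, a quick check for outstanding kernel commands is simply a file-size test. The helper below is an illustrative sketch of that check; its name is invented.

```python
import os

def has_pending_kernel_commands(pending_path: str) -> bool:
    """True if the location's pending file exists and is non-empty.

    An empty (zero-size) pending file means every kernel command stored in
    it has been executed and deleted. Illustrative sketch only.
    """
    return os.path.exists(pending_path) and os.path.getsize(pending_path) > 0
```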
You can read the contents of the pending file using the glbpend.exe utility provided in the
Global install folder. The pending file is named pending, and is saved in the project
directory (for example, abc000) - for instance, C:\AVEVA\projects\abc000\pending.
9.1
Ensure that the daemon is running at both the current Hub and the Satellite which will
become the new hub. This can be done by selecting Query>Global
States>Communications from the ADMIN, DRAFT, DESIGN, DIAGRAMS or
SPOOLER module or by issuing a Ping command.
Ensure that you have the project at the Hub backed up and that at least the Global
database (i.e. prjglb) at the satellites is backed up.
Once these steps have been taken, you can change the Hub through the command
line. The GUI will automatically change at the old Hub. At the new Hub, re-enter
ADMIN.
Output to the Global daemon window will indicate that the location is now the Hub or
Satellite.
Note that all databases, including non-propagating databases, must be allocated to the
proposed new Hub, for example /Tokyo, before changing the Hub. This may be done
using the ALLOCATE command:
You should make sure that the change of Hub location is complete before working with
either the new or old Hub. Check the following attribute to confirm that the hub change has
been successful. For example, if you are changing the Hub from London to Tokyo, then
navigate to the location world /*GL and query the Hubrf attribute:
/*GL
q hubrf
The Hubrf should be set to the name of the new Hub location; in this example, /Tokyo.
You will also see that the location parent attribute of each location (locrf) has changed. This
is a secondary effect, because the Hub location can have no parent. In the above example,
navigate to the location of the old Hub and query the Locrf attribute:
/London
q locrf
The Locrf should be set to the name of the new Hub location; in this example, /Tokyo.
(Previously, London, as the old Hub, had no parent location.)
Now, navigate to the location of the new Hub and query its Locrf; for example:
/Tokyo
q locrf
If the Locrf of Tokyo is set to Nulref, then the hub change has been successful. The new
hub, Tokyo, has no parent location.
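The three checks above (Hubrf on the location world /*GL, Locrf of the old hub, and Locrf of the new hub) can be collected into one verification step. The attribute names follow the text; representing them as plain function parameters, with Nulref modelled as None, is an assumption for illustration.

```python
def hub_change_complete(hubrf, old_hub_locrf, new_hub_locrf, new_hub):
    """Verify a hub change (e.g. London -> Tokyo), per the checks above.

    hubrf:         the Hubrf attribute of the location world /*GL
    old_hub_locrf: the Locrf of the old hub (should point at the new hub)
    new_hub_locrf: the Locrf of the new hub (should be Nulref, modelled as None)
    """
    return (hubrf == new_hub
            and old_hub_locrf == new_hub
            and new_hub_locrf is None)
```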
9.2
/*GL
q hubrf
q prvrf
q nxthb
When there is no Hub, then Prvrf records the name of the previous hub and Nxthb records
the name of the next hub. The Hubrf attribute is set to Nulref. During a Hub change from
London to Tokyo, Prvrf would be /London and Nxthb would be /Tokyo.
Normally, when a Hub change fails, the previous Hub will be restored automatically as part
of the failure operations of the Hub change. You should check progress of the command in
the transaction database. If this recovery fails, then the System Administrator must recover
the previous Hub as described below. This will be necessary in the following circumstances:
PREVOWNER HUB
Re-enter the ADMIN module. This will restore the Hub location and the Hub GUI. (Note: if
daemons are running, then the original Hub location command may still be in progress and
will attempt to commit the hub change or recover the original hub as appropriate.)
Make sure that the PREVOWNER command is complete before working with either the new
or old Hub, as otherwise it is possible to end up with two Hubs. If this happens, the Global
database must be propagated (or physically copied) from the new Hub to the old before
further administration is carried out. If the new Hub were to merge changes while the old
Hub was still active, the system would not be able to recover. It would be necessary to
reinstate the Global database from the backup taken before the change of Hub location was
undertaken.
10
10.1
Synchronisation
Synchronisation can be carried out at both Hub and Satellite locations. This process can be
used to synchronise databases at one location with the corresponding databases at a
different location. This is a one-way process: project data is only received.
10.2
Manual Updates
Manual updates can also be carried out at both Hub and Satellite locations. This is a
two-way process that can take place between neighbouring Locations. Data will be both
sent and received by the location initiating the update, according to which Location has the
most up-to-date version of the database.
If update is used between two locations which are not neighbours, then Global will attempt
to synchronise the database at the two locations as follows:
If the sending location is the primary location, it will update the database at each
location along the network path to the destination;
If the primary location lies between the two locations, then it will synchronise the
database at the sending location with the primary location and update the database
from the primary location to the destination location.
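The two routing cases above can be sketched as a function that expands an update between non-neighbours into per-hop steps along the network path. The path-as-list representation and step labels are assumptions for illustration; this models the description, not the daemon's implementation.

```python
def _hops(path):
    """Consecutive (source, destination) pairs along a path of locations."""
    return list(zip(path, path[1:]))

def update_plan(path, primary):
    """Plan the steps for an update along a network path, per the rules above.

    path: locations from the sending location to the destination, in order.
    If the primary is the sender, the database is updated at each location
    along the path; if the primary lies between the two, the sender is first
    synchronised with the primary, then the update runs onwards from there.
    """
    if primary == path[0]:
        return [("update", src, dst) for src, dst in _hops(path)]
    if primary in path:
        i = path.index(primary)
        back = [("synchronise", primary, path[0])]
        fwd = [("update", src, dst) for src, dst in _hops(path[i:])]
        return back + fwd
    raise ValueError("primary not on the path between the two locations")
```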
It is also possible to do a direct update between two non-neighbour locations using the
UPDATE DIRECT command. However this is not recommended, since it can result in
Reverse propagation errors from scheduled updates. This happens because UPDATE
DIRECT results in the database being more recent at the secondary destination of the
update than at the intermediate satellite through which scheduled updates are routed.
To learn more about Reverse Propagation errors, see Recovery from Reverse Propagation
Errors.
10.3
10.4
If this is done it is possible to regenerate all Picture and Neutral Format Files at the satellite,
even though the Database is secondary.
For Picture and Neutral Format Files to be successfully propagated the environment
variables %ABCPIC% and %ABCDIA% must be set in the Daemon kick-off script.
Final Designer, Schematics and Marine Drawings files are always propagated, even if
Picture/Neutral Format File Propagation is disabled.
10.5
For these files to be successfully propagated the following environment variable must be set
in the Daemon kick-off script:
File type                 Folder variable
Final Designer Drawing    {ABC}DWG, {ABC}DRG
Schematic Diagrams        {ABC}DIA
Stencil                   {ABC}STE
Template                  {ABC}TPL
The {ABC}DIA folder can contain Neutral Format (SVG) files as well as, or instead of,
Visio Schematic Diagram files. For a detailed description of the file formats that are
monitored within the above folders, refer to the Administrator Command Reference Manual.
Only PDMSDWG files that are associated with a DRAFT Database are propagated.
Associated PDMSDWG files are Sheet and Overlay drawings. Other DWG files, such as
Backing Sheets and Symbols, need to be propagated through Transfer of Other Data. See
Transfer of Other Data.
This is also the case for AVEVA Marine where there are drawings located in the ASSI,
ASSP, BACK, BTEM, CPAR, MARK, NPLD, NSKE, PDB, PICT, PINJ, PLIS, PLJI, PPAR,
PRSK, RECE, SETT, STD and WCOG directories.
10.6
10.7
The macro directory (for example, abcmac) must exist at all locations where macros
will be created.
The project variables for the macro directory (for example, ABCMAC) are set for the
daemons. (All project environment variables must also be set for users, of course.)
Update Timings
It is extremely difficult to predict the length of time that an update will take to complete. It
depends upon the bandwidth that is dedicated to the update process at the time it is run.
Therefore, if the line is shared with other communications programs (mail, internet, etc.), update
performance will be affected. The timings described below were taken on a line that
had no other competing process and that was extremely clean - that is, its failure rate was
near zero. A normal WAN line would not achieve anywhere near such a low collision and
failure rate.
These test timings were taken when propagating 11080 pages (22695936 bytes) of data
between two machines.
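For a rough lower bound (an illustration, not a figure from the tests), the transfer time for that volume of data can be estimated from the line bandwidth alone; the 128 kbit/s dedicated line speed below is an assumed value:

```shell
#!/bin/sh
# Back-of-envelope estimate: time = bytes * 8 / line speed in bit/s.
# 22695936 bytes is the test volume quoted above; 128 kbit/s is an
# assumed dedicated line speed for illustration only.
BYTES=22695936
BPS=128000
EST=$((BYTES * 8 / BPS))
echo "Estimated minimum transfer time: ${EST} seconds"   # about 24 minutes
```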
10.8
the UPDATE ALL command is used). Files can only be transferred between neighbouring
locations, and this method cannot be used to send files to/from off-line locations.
For example, myfile has been produced at Satellite AAA and is needed at neighbouring
location BBB. The user at AAA must ensure that myfile has been placed in directory
%EXP_BBB%. During the next scheduled update with BBB, this file will be sent to BBB, and
received in directory %IMPORT% at location BBB. A user at BBB can then use myfile. If
myfile is to be sent on to other locations, it will need to be copied into the export directories
at BBB for those locations.
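The staging step at AAA amounts to a simple file copy into the export directory. A sketch follows; the directory path is invented for the example (the real value comes from the project environment variable), and the file content is a stand-in:

```shell
#!/bin/sh
# Demonstration set-up: a stand-in for myfile and an invented export
# directory standing in for %EXP_BBB%.
EXP_BBB=/tmp/demo_exp_bbb
mkdir -p "$EXP_BBB"
echo "drawing data" > myfile      # stand-in for the file produced at AAA

# Stage the file: the next scheduled update with BBB will pick up
# everything in the export directory and deliver it to %IMPORT% at BBB.
cp myfile "$EXP_BBB"/
```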
Offline locations: The TRANSFER command only copies databases and picture files to or
from the transfer directory, ready for onward manual transfer to the specified location.
Transfer of other data files must be done manually.
It is possible to assign a batch script to run both before and after the Update Event occurs.
This can be used to copy data into the EXPORT directories before the Update is executed,
and then copy it out of the IMPORT directory once the Update Event has completed. This
process will include the transfer of Other Data.
The batch scripts are assigned to an Update Event through the Create/Modify Update form,
see below.
Batch Scripts
The script itself can be of any type of batch script, for instance perl, and can be as complex
as required.
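For instance, a minimal pair of hooks might stage outbound files into the export directory before the Update Event and drain the import directory afterwards. This is a sketch only: all directory names are invented, and a real script could equally be perl, as noted above.

```shell
#!/bin/sh
# Sketch of pre- and post-update hooks. All paths are invented.
OUTBOX=/tmp/demo_outbox          # files users want sent
EXPORT_DIR=/tmp/demo_export      # export directory for the neighbour
IMPORT_DIR=/tmp/demo_import      # import directory at this location
INBOX=/tmp/demo_inbox            # where users pick up received files

mkdir -p "$OUTBOX" "$EXPORT_DIR" "$IMPORT_DIR" "$INBOX"
echo outbound > "$OUTBOX/report.txt"     # demo outbound file
echo inbound  > "$IMPORT_DIR/drawing.txt"  # demo file already received

# --- run before the Update Event: stage outbound files ---
cp "$OUTBOX"/* "$EXPORT_DIR"/ 2>/dev/null || true

# --- run after the Update Event has completed: collect inbound files ---
mv "$IMPORT_DIR"/* "$INBOX"/ 2>/dev/null || true
```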
Note: Transferring other data uses the same communication line as updates and all
other Global functionality. Transferring too many other files may have an
impact on the Window of Opportunity for updates.
10.9
11
Deleting Databases
The procedure for deleting a database is summarised below. If the database owns extracts,
see Deleting a Database that owns Extracts.
Note: A DB does not need to be primary at the Hub, as long as it is not primary at the
location where it is being deallocated.
12
Database Recovery
If for any reason a database at a location is corrupt, it can be recovered by transferring the
database from a neighbouring location. It is important to remember that this could result in
loss of work. The main objective when a recovery is carried out is obviously to restore the
database(s) and minimise the work loss.
Global does not verify that the file from which the database is being recovered is a valid
database. It is the user's responsibility to ensure that this is the case. Remote DICE
checking may be used to verify the state of the database at the remote location from
which the database is to be recovered.
12.1
Location of Corrupt DB    Corrupt DB
Hub                       3, 4, 5
Sat 1                     1+2, 4, 5
12.2
Location of Corrupt DB    Corrupt DB
Sat 2                     1+2, 3, 5
Sat 3                     1+2, 3, 4
Restore from backup if all secondary databases are older than backups or if they had
not been synchronised with the primary database before it was corrupted.
Note: When a DICE report indicates that Refblocks have been lost, this would normally
require the master database to be patched. However, in a Global project this error is
non-fatal if there are working extracts. These databases are non-propagating and
only exist at the primary location of their extract owner. This results in the error
report, since the Refblocks for working extracts are not accessible at the primary
location of the master DB (Refblocks are blocks of reference numbers available for
use in the extract).
12.3
12.4
12.5
12.5.1
The RENEW DELETE <DB> command may be used. This is the preferred method of
re-creating the transaction database file, as it works even when the database is too
corrupt for the daemon to run. Note that <DB> must be the transaction database for the
current location, as the command cannot be executed remotely.
Following the command, ADMIN checks that all users are logged out and that the
daemon has been shut down. Note that the check on the daemon takes up to 3
minutes. ADMIN then deletes the file for the transaction database (not its DB entry)
and prompts the user to quit and to stop and re-start the daemon. When the daemon is
restarted, it will automatically recreate the transaction database file.
The RENEW <DB> AT <loc> command may be used to renew a transaction database
remotely. Note that this command may fail, if the database corruption is severe and the
daemon at <loc> cannot be started.
If the transaction database file for a location is missing, the daemon will create a fresh
version automatically when it is started up. This means that if for some reason the
database is corrupt, the transaction database can still be renewed by deleting the
corrupt version of the file manually. (You should not delete its definition in the Global
database.)
After renewing, you should copy the location's transaction database to all secondary
locations (such as the Hub), using the RECOVER command. This will prevent reverse
propagation when the database is synchronised at a secondary location.
Note: The RENEW command may remove running commands because it deletes the
transaction database.
12.5.2
database. The daemon will close the transaction database before the merge, and re-open it
afterwards.
However, the REMOTE MERGE command cannot be used when the transaction database
is full, since this command cannot be recorded properly. In this case, it may be necessary to
merge it by reconfiguring. To manage the transaction DB efficiently, TRINCOs (and their child
elements) need to be deleted at regular intervals. Only completed transactions should be
deleted. It only makes sense to merge the transaction DB after TRINCOs have been
deleted, otherwise the DB will not be compacted.
12.5.3
13
14
replicate the complete Global project, including all data (except the ISO
subdirectories), to a new project
For full details of the REPLICATE command, see the Administrator Command
Reference Manual.
Note: It is very important to ensure that the replicated project has a different project UUID
to the original project, otherwise the Daemon will not run correctly.
The UUID for the project is stored in the ADUUID attribute of /*GL.
If this is unset or has not been changed use the NEWUID attribute:
/*GL
!NEW=NEWUID
ADUUID $!NEW
SAVEWORK
15
When restoring a project, be aware that you may be able to restore project databases
by using Global's Recover functionality. This may give you the opportunity to minimise
work loss.
Use the backups for a location only for that location. (In some cases your only option
may be to use backups from other locations. In this case, be aware of the implication it
could have on the amount of work lost.)
Remember, your Global database (for example abcglb) at the Hub is your master
Global database. Back this up before you carry out any major Global administration
work.
When you use databases from backups, it is feasible for a secondary database to have
newer sessions than a primary database. If so, at the next update, changes may be posted
back from the secondary database to the primary database. If new sessions have been
written at the primary location, this could cause corruption. You should therefore ensure that
your secondary database backups do not have newer sessions than the primary database.
To resolve this, it may be necessary to RECOVER some databases from the primary
location after the restore.
16
You can work on an extract at the same time as another user is working on the master or
another extract. When a user works on the extract, elements are claimed to the extract in a
similar way to simple multiwrite databases, so no other User can work on them. When an
extract User does a SAVEWORK, the changed data will be saved to the Extract. The
unchanged data will still be read via pointers back to the master DB. When appropriate, the
changes made to the extract are written back to the master. Also, the extract can be
updated when required with changes made to the master.
16.1
Using Extracts
You can use extract databases both with standard (non-Global) projects and with Global
projects. This chapter gives information about the use of extracts with Global projects. Refer
to the Administrator User Guide for information about the use of extracts with standard
projects.
16.1.1
Extract Families
A Master DB may have many extract DBs. You can create an extract from another extract,
forming a hierarchy of extracts. The hierarchy can be up to 10 levels deep. The extracts
derived from the same master are defined as an Extract Family. The maximum number of
extracts at all levels in an extract family is 8191.
The original database is known as the Master database. The Master database is the parent
of the first level of extracts. If a more complex hierarchy of extracts is created, the lower
level extracts will have parent extracts which are not the master.
The extracts immediately below an extract are known as extract children. The maximum
number of extract children is 408.
If a hierarchy of extracts is created, the parent of an extract, and its parents up to and
including the Master DB, are known collectively as the Extract Ancestors.
The following diagram illustrates an example of an extract family hierarchy:
In this example:
Label        Description
PIPES        is the master database.
PIPES_X1     is a child of PIPES.
PIPES_X10    is a child of PIPES_X1.
Note: The children of PIPES are PIPES_X1 and PIPES_X2. PIPES and PIPES_X1 are the
ancestors of PIPES_X10.
Write access to extracts is controlled in the same way as any other database:
The user must be a member of the Team owning the Extract. Extracts in the same
family can be owned by the same team or by different teams.
The user must select an MDB containing the extract (or containing its parent, if the
extract is a working extract).
Note: At this release, you can only create an extract at the bottom of an extract tree: you
cannot insert a new extract between existing generations. At the Hub, you can also
create a new master database above the original master.
16.1.2
EXTNO     Extract Number
EXTOWN    Extract Owner
EXTMAS    Extract Master
EXTALS    Extract Ancestors
EXTCLS    Extract Children
EXTDES    Extract Descendants
EXTFAM    Extract Family
ISEXOP
ISEXMP
ISEXAP
LVAR      Variant
LCTROL    Controlled
16.2
16.2.1
16.2.2
Creating Extracts
Extracts can be created at any authorised Location: the parent extract must be allocated to
the Location first.
Like other databases in a Global project, extracts have a primary Location, and this need not
be the same as the Primary location of the parent database. By default, the primary location
of the new extract will be the current location.
If you are at the Hub and creating an extract for a satellite, use the AT option in the
CREATE command. The extract will be created with its primary location at the Satellite
specified.
If you are at an administering location, you must also use the AT option if you want to
specify that the extract will be created at the administered location, otherwise the
extract will be created at the administering location (that is, the true current location,
queried using Q CURLOC). The parent extract must be allocated to the administered
location.
When you are creating an Extract at a satellite, make sure you give the CREATE
EXTRACT command only once and check that the command has completed by
issuing a Q DB dbname command. You may issue further CREATE EXTRACT
commands provided that you do not use the same db name or db number (if specified).
The daemon will assign a db number (dbno) if none is specified.
The CREATE EXTRACT command will be executed by the Daemon (which will imply a
delay in executing the command) if any of the following is true:
If the new child extract is specified to be primary at another location (AT loc option).
Note: An in-built recovery operation exists for CREATE EXTRACT and, therefore, the
PREVOWNER command is not usually needed after a failure of the CREATE
EXTRACT command. However, the automatic recovery operation does not cover
the CREATE command Allocate operation and PREVOWNER may be needed in the
unlikely event of this failing.
Note that the ALLOCATE command allows child extracts to be allocated to a satellite
without their parent being allocated, but you will not be able to open the extract until all its
ancestors have been allocated to the location. Also note that the ancestor extracts may
need to be synchronised if timed updates of extracts have not been implemented.
Extract creation is controlled by the NOEXTC attribute of a location. If this is TRUE, then
extract creation is disabled and extracts cannot be created by that location. However the
Hub or its administering location (if authorised) may create extracts.
The purpose of the NOEXTC attribute is to prevent a satellite from creating databases on
the fly without authorisation, and it applies to the administering location, not the
administered location. However, if the HUB is doing it, it is by definition authorised. Thus the
HUB is always able to create extracts.
Similarly, we could have a situation where one satellite AAA is administering another BBB.
Satellite AAA might have NOEXTC false, and BBB might have NOEXTC true. In this case,
AAA would be allowed to create extracts for itself and for satellite BBB.
But BBB would not be allowed to create any extracts itself. The screenshots below show
how you set the NOEXTC attribute in the Modify Location form.
16.2.3
A working extract inherits the write access of its parent. That is, if the parent is
primary at the location of the working extract, then it can be written to; otherwise the user will
only have read access.
16.2.4
Extract Numbers
Before you start creating extracts, you should work out an extract numbering system, and
set the extract numbers explicitly when the extracts are created.
Extract numbers must be between 1 and 8191 inclusive, for each database. You must set
the range of extract numbers available for normal extracts, and for working extracts at each
location (see the diagram below). You can do this by setting the EXTLO and EXTHI
attributes for LOCLI and LOC elements as follows:
The available numbers for extract databases at a location are defined by the EXTLO
and EXTHI attributes for the LOCLI element under the /*GL element. You must define
the range of extract numbers so that there are enough left for working extracts: see
next point.
The available numbers for working extracts at a location are defined by the EXTLO and
EXTHI attributes for the LOC elements under the LOCLI element: for each Location
you must select a range of numbers which lies within the range you have left for
working extracts, and which does not overlap with the range for working extracts at any
other Location.
Note: You can query extract number ranges by navigating to the appropriate element and
giving the commands:
Q EXTLO
Q EXTHI
When you are using the ADMIN menu bar, you can use the Location version of the Admin
Elements form to create or modify a Location. On the form, you specify the range of
numbers available for working extracts at the location. See the Global Management User
Guide for details.
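The non-overlap rule for working-extract ranges can be checked mechanically. The following sketch uses invented range values for two hypothetical locations; the real EXTLO/EXTHI values would be queried from the project:

```shell
#!/bin/sh
# Two hypothetical locations' working-extract ranges (EXTLO..EXTHI).
A_LO=4001; A_HI=5000   # assumed range for location AAA
B_LO=5001; B_HI=6000   # assumed range for location BBB

# Each range must lie within 1..8191.
for HI in $A_HI $B_HI; do
  [ "$HI" -le 8191 ] || echo "range exceeds 8191"
done

# The two ranges must not overlap.
if [ "$A_HI" -lt "$B_LO" ] || [ "$B_HI" -lt "$A_LO" ]; then
  echo "ranges do not overlap"
else
  echo "ranges overlap"
fi
```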
16.2.5
Reference Blocks
The allocation of reference numbers is controlled by the master database. Each extract may
be allocated reference blocks from the master. Elements created in the extract will be
allocated reference numbers from the local reference block(s). If no reference block is
allocated manually, the system will allocate reference blocks as required. For a Global
project, this may require daemon activity.
To avoid this, we recommend that you should assign a block of reference numbers to the
extract when you create it, using the REFBLOCK n option. The block of reference numbers
will then be available locally. n should reflect the number of users writing to the extract, for
example, if you expect to have five users writing to the extract, set n to 5.
Note: There are 8191 reference blocks available for each extract hierarchy, so there is no
need to be conservative when allocating them.
16.3
16.4
Modify the ADMIN Module definition to give access to DICT databases using the EDIT
MODULE command in ADMIN as follows:
16.5
Set up an MDB containing the DICT database in which the UDAs are stored, and make
sure you select it as you enter ADMIN. Users will also need to have read access to the
DICT database via their MDBs.
The following simple scenario illustrates how you could use UDAs in Data Access
Control combined with extracts, to control workflow.
The Designer Role would give access to all Piping elements except those with the UDA
:ISSUED set to TRUE.
Q DBNAME
This command will return the name of the database that you are actually writing to. If the
extract is a working extract, then the name of the parent extract is returned.
Another useful querying command is:
Q WDBNAME
This command will return the name of the working extract that you are actually writing to, if
there is a working extract. If there is no working extract, then the result is the same as for Q
DBNAME.
16.5.1
Managing Extracts
If the extract hierarchy has different primary locations for different extracts, then both the
parent and child databases must be both propagating and allocated at each other's
locations. If this isn't done, Claims and Flushes will fail.
Because of this, Claiming, Flushing, and Issuing should be managed by a Supervisor to
ensure Claims are handled in batches in a planned and controlled manner.
16.5.2
User Claims
Normal multiwrite databases require the user to claim an element before changing it. This is
known as a user claim. Depending on how the database is set up when it is created, user
claims can be implicit or explicit, and in either case, when a new element is created, it will be
claimed to the user who created it.
Note: In a Global project, we recommend that multiwrite databases should be created with
EXPLICIT claim mode, unless all the children are primary at the same location.
User claims can be explicitly released (unclaimed) by the user during a session, and
elements are always unclaimed when the user changes or exits from a module.
The commands for user claims are:
CLAIM . . .
UNCLAIM . . .
Extract Users can check daemon availability before claiming or flushing using the following
command line syntax:
16.5.3
Extract Claims
When you are using extracts, another type of claim, known as an extract claim, is made as
well as user claims.
If an element is claimed to an extract, only users with write access to the extract will be
able to make a user claim and start work on the element.
Once a user has made a user claim, no other users will be able to work on the
elements claimed, as in a normal multiwrite database.
If a user unclaims an element, it will remain claimed to the extract until the extract claim
is released.
Extract claims allow persistent claims across sessions.
16.5.4
Command Syntax
The command syntax for handling extract claims in DESIGN is as follows:
>- EXTRACT -+- CLAIM --------.
            |- FLUSH --------|
            |- FLUSHW -------|
            |- RELEASE ------|
            |- ISSUE --------|     .-----<----.
            |- DROP ---------+---*-- element --+- HIERARCHY -.
            |                |                 '-------------|
            |- FULLREFRESH --|                               |
            '- REFRESH ------+------ DB dbname --------------+-->

>- FLUSH RESET ------ DB dbname -->
CLAIM
FLUSH
Writes the changes back to the parent extract. The Extract claim
is maintained. The extract is refreshed with changes that have
been made to its owning database.
FLUSHW
Writes the changes back to the parent extract. The Extract claim
is maintained. The extract is not refreshed.
FLUSH RESET
REFRESH
FULLREFRESH
ISSUE
Writes the changes back to the owning extract, and releases the
extract claim.
RELEASE
DROP
Drops changes that have not been flushed or issued. The user
claim must have been unclaimed before this command can be
given.
The HIERARCHY keyword must be the last on the command line. It will attempt to claim to
the extract all members of the elements listed in the command which are not already
claimed to the extract.
The elements required can be specified by selection criteria, using a PML expression. For
example:
EXTRACT CLAIM ALL PIPE WHERE (:OWNER EQ USERA) HIERARCHY
16.5.5
16.5.6
16.5.7
USERA creates a Pipe and flushes the database back to the parent database, PIPE/PIPE.
The results of various Q CLAIMLIST commands by the three Users, together with the
extract control commands which they have to give to make the new data available, are
shown in the following diagram.
Note that:
Q CLAIMLIST EXTRACT
tells you what you can flush; and:
Q CLAIMLIST OTHERS
tells you what you can't claim.
You can query the extract claimlist for a named database. The database can be the current
one or its parent:
Databases that are going to own extracts which are primary at other locations, should
be created with explicit claim mode.
Before you make an extract claim, you should do an EXTRACT REFRESH (or an
EXTRACT FULLREFRESH, if necessary) and GETWORK.
16.5.8
Flushing Changes
When an extract user makes changes and saves them, they are stored in the extract. These
changes can be made available to users in other extracts using the EXTRACT FLUSH
command.
The FLUSH command operates on a single element or a database or a collection of
elements. The changes to these elements will be made available in the parent extract.
If changes need to be made available in the master database, it will be necessary to flush
the changes up through each level of extracts. Users accessing extracts in other branches
of the extract tree will need to use EXTRACT REFRESH to see the changes (or EXTRACT
FULLREFRESH, if the users extract is part of a multi-level extract hierarchy and is itself
owned by another extract).
The following diagram illustrates the sequence of commands that need to be given so that a
user working on extract B2 will be able to see the changes made by a user working on
extract A2.
The Global daemon will only be involved in the flush process if the user is flushing changes
to a secondary database / extract from their current primary extract.
Note: If a flush fails, the database needs to be reset, because the failed flush causes
subsequent flushes and refreshes to fail. The FLUSH RESET command is used to
undo the failed flush.
This situation can arise when more than one user is issuing the same database extract.
Flush and release commands might then be processed in the wrong order, causing a flush
to fail and preventing subsequent refreshes of the extract.
16.5.9
Releasing Claims
Elements that have been claimed to an extract will remain claimed to that extract until they
are released. Any changes must have been flushed to the parent extract before the extract
claim is released.
The EXTRACT RELEASE command operates on a single element or a database or a
collection of elements. The elements claimed will be released from (that is, no longer
claimed in) the current extract, at which point they will be claimed by the owning extract.
If elements need to be made available in the master database, it will be necessary to
release the elements up through each level of extracts.
The Global daemon will only be involved in the release process if the user is releasing
elements to a secondary database / extract from their current primary extract.
When you are flushing / releasing data from a satellite to another location, you should
check that the flush has been successful before releasing the changes.
You cannot drop an element if it owns new significant elements. You have to list all the
elements in the same EXTRACT DROP command, or drop the lower-level elements
first.
You must UNCLAIM any user claim on an element before you can drop it.
The DROP command should be used with care. Once the changes have been dropped
they can only be retrieved using session data or from backup.
REFRESH will get changes made to the parent extract only of an extract in the MDB.
FULLREFRESH will get changes made to all the extract ancestors of an extract in the
MDB.
The REFRESH command will only refresh from databases local to the satellite.
Therefore, if a secondary database has not yet been automatically updated with
changes made to the database at the primary location, then these changes will not yet
be visible at the local satellite. Extracts below the database will only see the latest
version of the secondary database when they are refreshed. To see the changes made
to the primary database, you must wait for the next scheduled automatic update before
refreshing.
16.6
Partial Operations
When named elements are specified in an ISSUE, DROP or FLUSH command, it is known
as a partial issue, drop or flush. There are some restrictions on what you can do, as follows:
Where a non-primary element has changed owner, then the old primary owner and the
new primary owner must both be issued back at the same time. Otherwise there is
potential for inconsistencies to occur.
If an element has been unnamed, and the name reused, then both elements must be
flushed back together.
If the element is included in a partial flush, then its owner must also be included.
If the owner is included in a partial drop, then the element itself must be included.
If the element is included in a partial drop, then its owner must also be included.
If the owner is included in a partial flush, then the element itself must be included.
The HIERARCHY option will scan elements in both the extract and owned extract. Thus
deleted/moved elements will be included as part of the issue/drop/flush.
You can use selection criteria to specify partial issues and flushes.
Deleted elements will be issued/dropped/flushed when the owning element is issued/
dropped/flushed. Alternatively the reference number of the deleted element may be given in
the ISSUE/DROP/FLUSH command.
16.7
Extract Sessions
When an extract is created, it is created at a particular session number in the parent extract.
This is called the linked session. As the owner extract is modified, and new sessions
added, the linked session on the child extract will not change until a refresh or flush is made.
Note that ISSUE, DROP and FLUSH cause an automatic refresh.
The following example illustrates how extract session numbers and linked session numbers
change as an extract is created and modified:
Extract session no.   Linked session no. in owner   Comment
10                    10                            Extract created
15                    10                            Further modification
18                    10                            Further modification
25                    15                            Further modification
While a user is making changes only to the extract, the linked session number in the owner
stays the same. On refreshing, the local extract is linked to the most recent version of the
parent extract.
The new session number linked to in the owner depends on the number of flushes done by
other users. In the example the linked session number goes from 10 to 15, indicating that
five flushes have been made by other users in the meantime (assuming that no work is
being done directly on the owner).
16.7.1
Merging Changes
When a MERGE CHANGES command is given on a DB with extracts, all the lower extracts
have to be changed to take account of this. Thus doing a MERGE CHANGES on a DB with
extracts should not be undertaken lightly.
The following restrictions apply:
In a Global project, MERGE CHANGES can only be carried out at the location at which the
database and all its descendant extracts are primary. The REMOTE MERGE command
currently only handles leaf extracts and databases which do not own extracts.
See Merging Extract Databases for more information on merging extract databases.
Note: BACKTRACK is not allowed for extract databases. You must use REVERT instead.
16.8
the parent database of the working extract must be primary at the satellite
To delete a database:
The database must not be allocated to any locations other than the Hub
The database must not own any extracts, either working or standard ones
Thus, deleting a database that owns extracts (and may own working extracts) may
involve a number of CHANGE PRIMARY commands to get rid of any working
extracts at satellites where the database is secondary.
The procedure for deleting a database that owns extracts is summarised in the diagram
below.
(Flowchart: the procedure checks in turn whether the DB is allocated to a location,
whether it is primary at a satellite, whether it owns Extracts, and whether any of those
are Working Extracts; once none of these conditions remain, the database is removed
with DELETE DB dbname.)
Note: A DB does not need to be primary at the HUB, just as long as it is not primary at the
location where it is being de-allocated
16.9
Variant Extracts
Variants are a special type of extract, with less rigorous control of claiming elements and
writing data back to the owning extract. They are designed to allow users to try out different
designs, which then may or may not be written back to the master.
Variants are different from normal extracts (including Working extracts) in the following
ways:
Any element can be modified without being claimed, and so different users can modify
the same element in different variants.
When data is written back to the owning database, it will overwrite any conflicting data
in the owner.
A variant can have normal extracts created from it. Note that in this case, the variant forms a
new root for claiming elements: claims in extracts below the variant will not be visible from
other parts of the extract family, and claims in other parts of the family will not be visible in
extracts owned by the variant.
It is possible to have working variants.
Symptom: Unable to savework. Perhaps you have been Expunged
Cause: The Daemon has been expunged. Modifications to the database (other than
updates) will fail.

Symptom: Previous flush could not be found / Previous flush failed
Cause: A flush may have overtaken another flush. In this case, the flush will stall for a
retry.

Symptom: Unable to claim <item> because element is already claimed by <extract or
user> from Extract <no>
Cause: Valid failure - another extract or user has it claimed.

Symptom: Unable to claim <item> from parent extract <no> because element is modified
in a later session
Cause: EXTRACT REFRESH is required, to bring the child extract's view of the parent up
to date.

Symptom: Nothing to claim locally - all claims failed in owning extract
Cause: Cannot claim to the child extract, because nothing could be claimed from its
parent.

Symptom: You cannot claim <item> without doing an extract claim from the parent extract
Cause: The item has not been claimed into the extract before the user claimed it. This is
only applicable to Explicit DBs.

Symptom: Unable to claim <item> from parent extract <no> as element has been deleted
in a later session
Cause: The item has been deleted in the parent, and the child extract has not been
brought up to date yet.

Symptom: Element reference <item> is invalid or has been deleted
Cause: The reference number of <item> cannot be found in the database; it is an invalid
reference number.

Symptom: Element <item> has been modified, so cannot be released. Savework must be
done first
Cause: The item must be saved to the database before an extract operation can be
undertaken on it.

Symptom: Element <item> has been deleted by another User
Cause: The item you are trying to claim has been deleted by another user.

Symptom: Name clash on <item>. Please rename

Symptom: Cannot flush/abandon <item> as old and new owners must both be in the list,
or neither in the list
Cause: The parent of the owner has been changed. Both the old and the new owners
need to be flushed/issued/abandoned at the same time, and the list currently contains
only one or the other.

Symptom: Cannot flush/abandon <item> without its owner
Cause: The item is either new or has been moved to another item. Both need to be
flushed/issued/abandoned at the same time.

Symptom: Cannot flush/abandon <item> without its members
Cause: The member list of the item has changed in some way. The item needs to be
flushed/issued/abandoned with its members.

Symptom: Cannot abandon/release <item>. Element is claimed out by a user (maybe
yourself) or to an extract
Cause: The item is claimed by a user (possibly the user doing the EXTRACT ABANDON/
RELEASE) or to a child extract.

Symptom: Element <item> kerror <no>
16:19
12.0
16:20
12.0
17
Off-line Locations
Normally there is a communications link between pairs of locations, and these locations are
referred to as on-line. (Their ICONN attribute is 1, and RHOST points to a valid computer
name.) However, Global can operate if there is no direct communications link between the
Hub and certain locations. These locations are referred to as off-line. (Their ICONN is 0,
and RHOST may be unset.)
A tape, CD or other medium is used to copy the databases from one location to the other.
It should be noted that:
- Off-line locations can only be children of the Hub. An on-line satellite cannot have off-line children.
- Database transfer to and from the media used for communication with an off-line location can only be made at the Hub and the off-line location.
The transfer folder is a holding area for data going to and from the satellite:
- TRANSFER TO offline satellite from HUB copies the satellite's secondary dbs to the transfer folder for the satellite (at the Hub).
- The contents of this folder are transferred to the satellite's transfer folder.
- TRANSFER FROM HUB at the offline satellite copies the satellite's secondary dbs from the transfer folder at the satellite to the satellite project.
- TRANSFER TO HUB at the offline satellite copies the satellite's primary dbs to the transfer folder for the Hub.
- The contents of this folder are transferred to the Hub's transfer folder for the satellite.
- TRANSFER FROM offline satellite at the Hub copies the satellite's primary dbs from the transfer folder at the Hub.
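The full exchange cycle for one off-line satellite can be sketched as a command sequence. The location name PFB is illustrative, and the command forms are paraphrased from the descriptions above; see the Global User Guide for the exact syntax:

$* At the Hub
TRANSFER TO PFB        $* satellite secondary dbs -> Hub transfer folder for PFB
$* copy the transfer folder contents to tape/CD and carry them to the satellite

$* At the off-line satellite
TRANSFER FROM HUB      $* incoming dbs -> satellite project
TRANSFER TO HUB        $* satellite primary dbs -> transfer folder for the Hub
$* copy the transfer folder contents back to the Hub

$* At the Hub
TRANSFER FROM PFB      $* incoming satellite primary dbs -> Hub project

Omitting either TRANSFER FROM step risks the system database problems described below.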
It is potentially unsafe to assume that samsys in a transfer folder is the satellite system
database. If the TRANSFER FROM step is omitted, then the local system database could
be corrupted. This is because the meaning of the file samsys is ambiguous in TRANSFER
functionality.
For this reason, the functionality of TRANSFER has been changed since previous versions of
Global to enforce the use of a location suffix in the transfer folder. All system databases in
the transfer folder always have a location qualifier, even the system database for the off-line
satellite.
17.1
Potentially, inter-db macro changes could be lost. TRANSFER FROM merges the
macros from the transfer folder into the satellite's MISC database, which might already
contain local inter-db macros.
If the satellite system database is secondary, then the incoming system db transferred
from the Hub will be named with a location suffix. This would need renaming to become
the local system db.
17.2
- All users must exit before the TRANSFER TO command is initiated. Otherwise, the command may fail.
- To ensure that the database file is copied to the satellite when a new database is allocated to an off-line location, the TRANSFER TO command must be used at the Hub after the ALLOCATE command. The TRANSFER FROM command is then used at the satellite. The TRANSFER should precede a CHANGE PRIMARY command.
- Extract hierarchies must never be partially off-line. An entire hierarchy must be on-line or off-line.
- Working extracts cannot be created for off-line locations, unless they are self-administered. (The file for the working extract cannot be created if the Hub administers the off-line location.)
- Transfer of other data, such as ISODRAFT files, external PLOT files and DESIGN manager files, must be done manually to and from an off-line location.
- To change a satellite from on-line to off-line, shut down its daemon and change ICONN to 0. You should then manually copy the Global database to the off-line location. The TRANSFER command will then work.
- Picture files and Final Designer DWG files are transferred to off-line locations and should be copied to CD or sent through ftp when they reside in the location's transfer area.
17.3
17.4
18
Firewall Configuration
The primary objective of a firewall implementation is to provide security to an organization's
network. In simple terms, a firewall solution enables only certain applications to
communicate from the outside world (for example, the Internet) to the organization's
network, and vice versa. To enable these applications to function, specific communication
ports need to be open. The fewer ports open within a firewall, the less chance there is of
security breaches.
Where Global is implemented within an environment that has no firewall set up, Global will
function without any specific network configuration (other than the requirements outlined
under Global > IT Configuration on the AVEVA Support website and in the Global User
Guide). However, when a Global project is to be deployed between two or more locations
that have firewall implementations, certain ports need to be open in order for Global to
function.
RPC communications are an integral part of Global. Global uses TCP port 135 and a
dynamic range of ports above 1024 to communicate from one location to another (i.e.
through the Global daemons running at each location).
The dynamic range of ports required to be open (i.e. 1024 and above) poses a security risk.
To reduce this, the operating system's RPC communications can be forced to use
only a specified range of ports. This drastically reduces the risk of intrusion from third
parties.
Firewall rules can also be specified to limit access to these ports to a specific program.
Global has a unique identifier (UUID) which can be used when defining firewall rules.
For further details, contact AVEVA Support.
18.1
The following solution can be applied to any modern firewall with the functionality of packet
filtering.
The procedure for restricting the use of dynamic ports for RPC is through additions in the
Microsoft Windows registry.
Changing the registry should not be undertaken lightly. Please note that incorrect
modification of the registry could lead to serious problems with your system. It is therefore
recommended that you back up your registry before making changes.
To change the registry, you must use REGEDT32 and not REGEDIT, as the latter does not
allow you to modify the string data type. If you do not use REGEDT32, an error
message will appear on daemon startup.
Navigate to the following key:
HKEY_LOCAL_MACHINE\Software\Microsoft\Rpc
Under this subkey create three values with the corresponding string data:
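The table of values does not survive in this copy of the document. For reference, Microsoft's documented mechanism for restricting dynamic RPC ports places the values under an Internet subkey of the Rpc key; the port range shown below is an example only, and the exact values to use should be confirmed against the Global User Guide:

Key:  HKEY_LOCAL_MACHINE\Software\Microsoft\Rpc\Internet

Ports                    REG_MULTI_SZ   5000-5100    (range of ports RPC may use)
PortsInternetAvailable   REG_SZ         Y
UseInternetPorts         REG_SZ         Y

The chosen range must then be opened in the firewall, together with TCP port 135, for the Global daemons at each location.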
19
19.1
Introduction
This Section gives general advice on the Housekeeping activity for Administrators running
projects. We use the term Housekeeping as a metaphor: just as you would create and
maintain a house and its contents in a state of good repair, well organised, tidy and clean,
so you should maintain the data of an engineering project.
As with a house, the larger the scale, the more substantial a task this can be. However,
if you establish the basis and practices early, you can keep the task to a routine activity that
increases in efficiency with practice over the duration of the project.
This Section should be seen as supplementary material to the standard Administration
documentation and not a replacement.
As with all advice, it is not mandatory and should be taken as points for consideration in
creating a stable Administration environment. Also, although efforts have been made to be
as comprehensive as possible, it is not exhaustive and will be subject to modification and
addition as the base product and its use across wide industry sectors increases and
experience in good practice improves to match.
It is written for an audience who are assumed to have undertaken training in Administration
and have a thorough background in maintaining projects. Moreover, not everything
described here necessarily applies to all project set-ups under all circumstances.
However, IT managers may find it useful as background information in deciding how to
organise base product Administration.
19.2
Dice
This is the Data Integrity Checking tool supplied as part of the ADMIN module.
Its purpose is to provide a report on the base product Dabacon databases that informs the
administrator of any issues with the database that require extra attention. In
addition, it can also be run in a patch mode that will repair the
database.
It is recommended that a full Dice report is run daily, as a matter of routine, on all databases
in the project. This includes the full extract family and secondary databases if Global is in
use.
Foreign projects, such as a centralised Catalogue, should also be Dice checked, although
the checks need not be so frequent if the projects are not being updated on a daily
basis. Often this is done as a scheduled batch routine during non-working periods.
However, if the project is in a period of intense activity and the window for running bulk
processes for reports, drawings, material take-off is small, it can be run with users and batch
processes continuing to run on the model.
Having produced the report, it is imperative that it is closely scanned for issues of concern
and then action taken to address them. Ideally, the Administrator should take action to
remove all errors and warnings; however, some warnings can be deemed acceptable
and of no risk to the healthy running of the project, e.g. Element =18585/38329 Warning -
Attribute TREF contains invalid ref =18585/74770.
This error will also be highlighted to the normal users as they check their designs so it will be
picked up there. However, if the identical reference numbers in these messages recur the
Administrator should follow up with the last user to access the element (info in session data)
to ensure it is cleared.
The Fatal Errors listed in a Dice report are usually ones that need immediate attention and
action to repair the database will be needed. Nevertheless, on occasion the error can
either be tolerated for a period as it is not truly critical, or may have been wrongly
categorised as Fatal and constitutes only a warning e.g. Error in level 2 NAME table,
session no. 10469, page no. 42385 - incorrect value of first key on lower level page no.
42386 (extract 1).
While AVEVA provides analysis of each error message outlining how it should be addressed,
the nature of an individual project set-up can make the appropriate method variable.
Therefore it is recommended that, as the Administrator becomes
familiar with the action needed to address each warning or error, it is documented and
recorded in project work instructions.
Certain database errors can be fixed by running Dice again against the problem database,
this time in patch mode to apply the repair. Two typical examples are:

Element =35021/13323 has an inconsistent entry in the name table. Name
exists on the element but is not in the name table
itself. Thus the element can not be navigated to by name

Please reconfigure this DB to resolve the problem
This work should be done when there is no Read or Write access to the database, but to
avoid a complete project shutdown it is possible to remove the problem db from all MDBs,
do the repair, and then replace it. Because of the additional complexity this may involve,
looking for a window in the project workload is normally the preferred choice.
Two or three days before a phase of major deliverable production it is recommended to be
especially diligent in Dice checking to ensure that all databases are in good shape and
reduce the risk of an interruption in the bulk process.
If a user reports an unusual problem with part of the project data, such as a Dabacon crash,
the first step should always be to perform a Dice check on the database(s) involved. If the
report shows issues that cannot be repaired by patching or reconfiguration then the Dice
report should be sent immediately to AVEVA support.
If, after repairing the database, the database is OK for a few days and then Dice reports
errors again, this may indicate a deeper issue. The Dice report, together with any
background information on circumstances that are common to the error occurring (e.g. same
users, same UI menu), should be reported to AVEVA Support, who may then request that
the databases be sent in for fuller investigation.
19.3
Global
This section provides information to advise Administrators on good practices. We
recommend you read it fully.
19.3.1
Update Frequency
The idea of Global is that it provides the ability for a project split across several locations to
behave just as if it was located in one location. Therefore it is assumed that most
deployments of Global will have this objective in mind and will ensure that each location is
updated with changes from the other locations on a frequent basis, especially when they are
in similar time zones. This is particularly important when the locations are operating in the
same physical space, e.g. in a compressor house where one location covers the steam lines and
the other the utility lines. The aim here is, of course, to try and avoid routing pipe in the same
space as the other location. This also ties in with the idea of keeping an Extract database
local to only one location: if the project is process split, this will incur a higher risk of clash
issues when the data eventually migrates to a higher level database shared between the
locations. If the project is split geographically, e.g. each location covering complete units,
then this particular risk is reduced.
As a baseline, updates between locations around four times per working day are
reasonable, with a possible escalation if significant change is occurring at critical times and
data is needed by one location faster than normal, e.g. a fabrication yard when the project
data is reaching design completion. Updates every 15 minutes have been seen
in this particular scenario.
Where the project is split across time zones, it is recommended to time updates so that data
is exchanged to suit the start and close of work at each location, with attention to any time
overlap.
However, when selecting update frequencies the issue of data quantity moving across the
network should be considered too. The other idea of Global is to allow smaller chunks of
data to be transferred rather than whole databases. Therefore, if only one transfer is done
per day, the quantity of data will be large and if there has been an intense period of
modelling in one location then the update may take longer, possibly not completing in time
for drawing or review file production as expected. Therefore, doing several updates will
reduce the risk of update overlap or incompletion before deliverable production.
When different time zones are involved, it may be useful to use an intermediate satellite.
This will make it easier to transfer large amounts of data outside working hours.
19.3.2
Timing of Updates
The batches of updates that are run in one update session to keep all locations
synchronised do not have to be run sequentially. However, updates should not be started at
exactly the same time, to avoid file contention on the Global database.
If it is felt desirable to run the updates sequentially, then a script will be required that uses
the EXECAfter and EXECBefore script attributes on the Update event (LCOMD) to run pre- and post-execution scripts on a scheduled update. This could also:
19.3.3
DBnumber  DBname            SAT1    SAT2    SAT3    HUB
2001      SAT1PIPING/UNITA  P 1134  S 1128  S 1128  S 1128
2000      SAT1PIPING/UNITB  P 684   S 679   S 679   S 679
2002      SAT1PIPING/UNITC  P 106   S 106   S 107   S 106
2003      SAT1PIPING/UNITD  P 758   S 742   S 692   S 742
2004      SAT1PIPING/UNITE  P 533   S 517   S 467   S 517
2457      SAT3PIPING/UNITA  S 1164  S 1164  P 1169  S 1164
2451      SAT3PIPING/UNITB  S 814   S 793   P 849   S 814
2432      SAT3PIPING/UNITC  S 131   S 131   P 133   S 131
2431      SAT3PIPING/UNITD  S 451   S 451   P 453   S 451
2100      HUBPIPING/UNITA   S 1148  S 1148  S 1148  P 1175
2102      HUBPIPING/UNITB   S 212   S 212   S 212   P 213
2104      HUBPIPING/UNITC   S 231   S 231   S 231   P 234
2355      SAT2PIPING/UNITA  S 560   P 562   S 560   S 560
2353      SAT2PIPING/UNITB  S 288   P 328   S 288   S 324
2351      SAT2PIPING/UNITC  S 513   P 578   S 484   S 541
2351      SAT2PIPING/UNITD  S 79    P 101   S 79    S 79
2343      SAT2PIPING/UNITE  S 174   P 176   S 174   S 174

Legend
P   Primary location
S   Secondary location
Equal session numbers: locations aligned
Secondary locations not aligned: update manually to synchronise
Secondary location ahead of Primary: investigate and repair
This macro is not a standard delivery as it needs tailoring for each project set-up. If required
you can request services from AVEVA to deliver this.
19.3.4
19.3.5
each update process completes successfully and that the realignment has been successful
before kicking off the next update.
19.3.6
Flushing/Issuing
It is common practice for all users on a project that uses Extract databases, whether Global
or not, to follow common practices of Flushing. Generally, each user will be expected to
Claim, Flush and/or Issue on an object-by-object basis (or for small groups of objects).
However, some customers may decide to manage the Flush and Issue on a
collective basis at managed intervals, say once a day. If this is done, then the Flush or Issue
should be done at as high a level in the database as possible, e.g. SITE. This reduces both
the number of sessions created and the database file size.
Note that if the Model Object Manager software is in use, the program does background
flushing and issuing to keep the Primary data as synchronised with the Oracle data as
possible. If Model Object Manager is in use, regular Global Updates will also reduce the risk
of the user viewing Oracle data that is not aligned with the Secondary view of the PDMS
data.
19.3.7
Transaction Database
This database holds all the information about the success or failure of the updates and
remote claiming and is the first place to go to check that Global is operating successfully.
Ideally it should be regularly monitored by the Administrator responsible for each location.
Note that if an automated update fails for any reason, then there is always the option to
perform the update manually rather than waiting for the automated update to try and align
things again. By doing the update manually, the duration of locations being out of synch is
reduced and also the automated update process does not get loaded with 2 or more lots of
update data to deal with.
On a large and busy project the transaction database can become very large, so it should
be compacted on a regular basis. The recommended method is to use the
Merge-and-Purge function from the Daemon, or to select Utilities > Transactions in the
Admin module and then select the Purge/Merge transactions DB tab.
For a detailed description see Merging and Purging from ADMIN in the Global User Guide.
Daemon merge-and-purge can be done when DESIGN users are in the project (but not
ADMIN users) provided that they do not have the Transaction db in their MDB. If a Module
(e.g. ADMIN) is accessing the transaction db when the merge-and-purge is attempted, then
nothing will be purged.
If the merge-and-purge is interrupted, e.g. by a crash of the Daemon, then one of the two
following methods could be used at each satellite after all users are out of the project and the
Daemon has been stopped: either
or
restart the Global daemon (a new clean database will be created automatically)
The problem with this method is that incomplete transactions are lost and therefore
updates are missed, and this may contribute to misaligned Primary and satellite
locations.
The ADMIN UI provides a view of the updates from the Transaction db, and it is important
that the administrator checks the actual messages from these Updates, because the update
may not have successfully updated ALL databases even though the overall command has
been successful.
If the MESSAGE reads 'Update All succeeded (NNNN DBs) with MMMM failures', then the
administrator MUST investigate the failures; the FAILURES pane of the Transaction
messages form indicates them. If this check is considered worth separating into a distinct
procedure, a macro may be written to collect TRFAIL elements below the TRINCO for the
TIMEDUPDATES user.
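Such a macro can be sketched in PML. The details are illustrative only: it assumes the Transaction db is in the current MDB and that the current element is the TRINCO for the TIMEDUPDATES user.

$* Sketch: report any TRFAIL elements below the current TRINCO
var !fails collect all TRFAIL for CE
q var !fails.size()
do !fail values !fails
   $* navigate to each failure and report its attributes
   $!fail
   q att
enddo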
19.3.8
19.3.9
admnew Files
When the Daemon copies a database, usually after a session merge (as opposed to a
session-based update), it copies to a temporary file with the suffix .admnew. When the copy
is complete, this file is renamed to replace the old database file.
These .admnew files are normally tidied up automatically. However, if the daemon has
crashed, it may leave unwanted .admnew files behind, which can prevent a subsequent
Daemon attempt to copy the database from running.
It should be ensured that the satellites and hub remove such files after a crash.
See admnew Files for a full description of .admnew files.
19.4
19.5
19.5.1
Background
On occasion, after a piece of Administration work such as session merging, the database
file may be found to be locked by the Windows Operating System. To resolve file locks,
the Administrator has two options:
- Reboot the computer where the databases reside (this assumes the Administrator has the privilege to do this; in many cases this is not a practical solution).
- Resolve file locks using a specific tool, as described in admnew Files.
Also note that if the project is not used as a foreign project, you have a third choice in the
Overwrite DB Users flag, which is the LCPOVW attribute of the LOC element.
This attribute controls whether a locked file at a location may be overwritten. If this attribute
is set TRUE and there are no database READERS in the project, then Global will overwrite
the locked file with the .admnew file.
Important: Do not do this if other projects include this database as a foreign project, since
these are valid READERS that are not recorded in the session data for the
Global project.
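Setting the flag can be sketched as commands at the location element. The location name /PFB is illustrative (following the appendix examples); the attribute name LCPOVW is as given above, and the attribute-setting form follows the usual PDMS convention:

$* Navigate to the location element and set the Overwrite DB Users flag
/PFB
LCPOVW TRUE
savework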
19.5.2
19.5.3
- Do not kill (close via the top-right X button) the background DOS window.
- Do not kill the main Module Windows and then directly re-enter that Module. If for some reason this has to be done, then use the Windows Task Manager to check that the Module process has closed, and if it has not, End the process.
Removing Users
After a session has been illegally exited, either deliberately or due to an unexpected system
fault, the Users who were accessing the databases may be left as phantom users (also
known as dead users) in the system. To clear these users from the databases and release
their claims, the Administrator can use the Expunge syntax for all users or for specific dbs (see
the ADMIN Command Reference Manual for details of all Expunge options, including how to set
the Overwrite DB Users option to allow non-foreign projects to copy over locked files
provided there are no users recorded in the COMMS db; overwriting is disabled by default
because it may cause sessions of dead users to crash).
You can use the ADMIN Module for this also.
To force live rogue users out of the system who have not followed the request to leave the
system before Admin work is carried out, the Expunge User Process can be used. This will
not stop the process on the Workstation but it will sever the link with the database file and
the next time the user tries to access the process (Module Window) it will crash. After the
Expunge User Process has been done it is common practice to then use Expunge All Users
to remove any lingering phantom users and release all claims.
However, after the Expunge processes (or other illegal exits), it is necessary to ensure that
the database files have not been locked by Windows or left open; they should be closed
so that further work in the databases can be done. As the files normally reside on a
separate File Server, administration access to that server will be required.
1. Broadcast a message to all users on the project telling them that they should cleanly
exit by a required time. If the ADMIN MESSAGE command is used, note that it will only
be visible to those logged in at the time, and only when they change module.
2. At the advised time Lock the project via ADMIN to prevent any users accessing the
databases further.
3. If a Global project, stop the Daemon to stop updates and/or remote claiming.
4. Check the project for any users still logged in and try to get in contact with them and
ask them to leave the project cleanly.
5. Any users who cannot be contacted should be severed from the project by Expunge
User Process.
6. Expunge All Users to remove any phantom users and release any claims.
7. Using pstools PsFile check for any open or locked db files on the db File Server.
8. Using pstools PsFile close any open or locked db files on the db File Server.
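Steps 7 and 8 can be sketched with the Sysinternals PsFile tool; the server name and project path below are illustrative:

rem List db files held open on the File Server under the project directory
psfile \\dbserver C:\projects\abc000

rem Close any files still open under that path (use with care)
psfile \\dbserver C:\projects\abc000 -c

The path argument is a partial match, so all open files below the project directory are listed or closed in one command.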
Note: In truth, the only databases that should not be being accessed in the project in Read
or Write mode are those on which an ADMIN task such as reconfiguration or session
merging is being undertaken. However, one way to secure this without getting all users
out of the project is to isolate the databases (inclusive of the whole extract family) from use
by removing them from all MDBs and then performing steps 1-8 with the exception of 6.
Deferring them is not recommended, as the user can overwrite the deferral. After
the Admin task has been performed on the specific databases, they can then be re-added
to the MDBs.
As this adds an extra level of complexity to the Admin task, it is suggested that a
window of time is sought in which the whole project can be shut down.
19.6
In this scenario the SAT2 users working on the EX2_SAT1 db are claiming objects from EX1
Primary at SAT1.
This can be done dynamically in Explicit Claim mode over the Daemon. However, the
response can be variable, causing the SAT2 users to be unsure as to the status of their
claim. Therefore it is recommended that the project is organised in such a way that the EX1
Primary objects to be worked on at SAT2 are identified and marked by the SAT1 users, and
an Admin process is then run to Extract Claim the collection to the EX2_SAT1 Primary db at
SAT2. When the work on the objects is complete, the SAT2 users mark the objects as ready
to be Issued, and an Admin process is run to Extract Issue the collection back to EX1
Primary.
19.7
19.7.1
ADMIN Lead
A single technical expert, with in-depth knowledge of the application from a User and
Administration background, is placed in charge of the whole project, has decision-making
authority, and is the contact for communication with the engineering and IT management for
the project. This person should have a full-time Deputy who can stand in for them during
planned and unexpected absence. In a Global project, the Hub should be sited at the
location of this person.
This role, including the Deputy, requires a high level of IT knowledge and a trusted
partnership with the IT group, with permissions to access the application server(s) to perform
specific tasks.
It will be this role that has the main contact with AVEVA Support unless it pertains to a
specific discipline need when the Discipline SME role comes into play.
This is a full-time role on a major project.
19.7.2
opening weeks, moving to an irregular pattern of high and low workloads to be balanced
with the primary role of engineering and/or design on the project.
19.7.3
Lock
make Global
Note: The user will be prompted to close and re-open the Admin module.
unlock
savework
Select Display > Command to open the command window and then type the following
commands:
q linit
Example PML to wait for the command to complete:

!c = curloc
do
   pause 1
   session comment 'Interim savework at HUB after INITIALISE'
   savework
   getwork
   break if (!c.linit)
   !f = object FILE(!!itaSkipPath + '/skip')
   break if (!f.exists())
   skip
enddo
savework
***** Generate the locations at the HUB ******
/*GL
LOCLI 1
NEW LOC /PFB
LOCID PFB
DESC Piping Fabrication
RHOST sg132
CR DB TRANSACTION/PFB
GENERATE LOCATION PFB NOALLOCATE
Note: ALLOCATE will copy all the project files to the location defined by variable
{proj}_PFB.
NOALLOCATE will only copy the system DB files.
At the Satellite, use Windows Explorer to copy the files in {proj}_PFB to the location
directory where the project will reside as {proj}000 (i.e. the Satellite).
Set up the base product environment at the satellite location (executables, Project
directories etc).
set PDMSEXE=C:\ita_test_env\pdms
set {proj}000=C:\net\project\{proj}000 ..etc.
PING PFB
Example PML to wait for the command to complete:

do
   pause 1
   ping PFB
   handle ANY
      !f = object FILE(!!itaSkipPath + '/skip')
      break if (!f.exists())
      skip
   elsehandle NONE
      break
   endhandle
enddo
INITIALISE
Having set up the environment at the location
savework
getwork
/PFB
Q LINIT
** If LINIT TRUE then PFB has Initialised
Example PML to check initialisation is complete:

!loc = /PFB
do
   pause 1
   session comment 'Interim savework at HUB after initialisation of PFB'
   savework
   getwork
   break if (!loc.linit)
   !f = object FILE(!!itaSkipPath + '/skip')
   break if (!f.exists())
enddo
session comment 'Savework at HUB after confirming initialisation of PFB'
savework
getwork
Now allocate the required DBs to the location PFB
ALLOCATE pipeapproved/master SECONDARY AT PFB
ALLOCATE pipereview/siteufa/A SECONDARY AT PFB
ALLOCATE pipeworkarea/fabwork/A PRIMARY AT PFB etc.
do
   pause 2
   session comment 'Interim savework at HUB - waiting for allocations to PFB'
   savework
   getwork
   !location = /PFB
   q var !location.members[1].members
   break if (!location.members[1].members.size() ge 28) $* no. of allocates
   !f = object FILE(!!itaSkipPath + '/skip')
   break if (!f.exists())
enddo
session comment 'Savework at HUB after confirming allocations to PFB'
savework
Create Teams and Databases at the Hub, and Users and MDBs locally.
REPEAT FROM ****GENERATE LOCATION****, for all locations required.
B.1
The required propagation direction - always away from the Primary location of the
database
B.2
- Query the primary location of the database (Q PRMLOC at the DB element for the database; Q DB <name> also contains this information).
- Query the filename - this is useful in identifying the database in the daemon trace.
- Query the NACCNT, latest session number, HCCNT and CLCCNT for the database.
- Decide from this in which direction to RECOVER the database, and recover the database.
Note that the new command Q REMOTE <locname> <dbname> FILEDETAILS may be
used to gather this information for both locations.
Note: The RECOVER command is the only command which is allowed to copy
the file without a check on the propagation direction.
In general, if the Prevented Reverse Propagation message contains Copy, it is the
NACCNT attribute that is the problem. This counter is incremented by a database MERGE,
BACKTRACK (but not REVERT - the Appware uses REVERT) or Reconfiguration. In this
case, the propagation needs to copy the entire database file, but the copy fails
because the NACCNT is higher at the secondary location than at the primary location.
The other properties are used to control normal database propagation, where only the
required sessions and the database header are sent. If the Latest session number is higher
at the secondary location than at the primary location, then database recovery is required. If
the session numbers are equal, but the HCCNT and CLCCNT attributes are higher at the
secondary location than at the primary location, then a database recovery is also required.
Usually, recovery should be made from the Primary location, unless there are good reasons
why a secondary location has the correct version of the database.
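The decision rules above can be sketched in PML. This is a sketch only: the !prm* and !sec* variables are hypothetical names, assumed to have been populated with the counts queried at the primary and secondary locations (for example via Q REMOTE <locname> <dbname> FILEDETAILS):

```
$* Hypothetical variables: !prm* hold counts at the primary location,
$* !sec* the corresponding counts at the secondary location
if (!secNaccnt gt !prmNaccnt) then
   $* A full file copy is needed but is blocked: recover the database,
   $* normally from the primary location
   $P Recovery required - NACCNT higher at secondary
elseif (!secSession gt !prmSession) then
   $P Recovery required - later session at secondary
elseif (!secSession eq !prmSession and (!secHccnt gt !prmHccnt or !secClccnt gt !prmClccnt)) then
   $P Recovery required - HCCNT or CLCCNT higher at secondary
endif
```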
B.3
Q SESSIONS MYTEAM/DESI
The database properties NACCNT, HCCNT and CLCCNT may be queried in the normal way
by navigating to the DB element for the database (for example, /*MYTEAM/DESI) and
querying its attributes. It should be emphasised that these attributes are properties of the
database file, and may differ at each location.
Alternatively, a PML object <DB> may be constructed for the database (for example, !DD = object DB('MYTEAM/DESI')) and its properties queried:
!DD.NACCNT
!DD.HCCNT
!DD.CLCCNT
!DD.LatestSession()
Note that the last property is a method, not a member. Primary location and filename may
also be queried:
!DD.FileName
!DD.Prmloc
The same properties may be queried for a database at a remote location ABC by using Q REMOTE ABC <dbname> FILEDETAILS. The implied propagation direction may also be seen in the daemon trace, for example:
local 0 remote 0
local 3 remote 2
(6) At Tue Oct 04 01:03:24 2005 Claim Changes counts: local 17 remote 1
(6) At Tue Oct 04 01:03:24 2005 Extract List counts: local 3 remote 10
In this case, the trace indicates that the current location has a more recent session than the
remote location. The Claim count only applies to a session, so its value will be ignored
unless the session numbers are the same. In this example, the implied propagation
direction is from the current location to the remote location.
However, before making the update, the Daemon checks the update direction, to ensure
that the propagation direction is consistent with the direction away from the primary location
of the database. If this check fails, then the Prevented reverse propagation error causes
the update to fail.
Occasionally, it is not possible for the daemon to check the Update direction (Global db may
be in use). In this case, the failure will read Update skipped. This is normally a temporary
problem, and the database will be propagated as normal on the next scheduled update.
B.3.1
/2005/OCT/5/TIMEDUPDATES/ABC
where ABC is the LOCID of the location owning the Update event (LCOMD). PML Collection
syntax can be used to extract the Failures:
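A sketch of such a collection, following the COLLECTION pattern used in Appendix D (the use of a TSTATE test to identify failures is an assumption, and should be checked against the attributes actually held on the update event members):

```
$* Navigate to the update event for location ABC (example date)
/2005/OCT/5/TIMEDUPDATES/ABC
$* Collect member elements whose state is not COMPLETE
!collection = object COLLECTION()
!collection.scope(!!ce)
!filter = object EXPRESSION('upc(TSTATE) ne |COMPLETE|')
!collection.filter(!filter)
!failures = !collection.results()
!n = !failures.size().string()
$P $!n failures found
```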
endif
endif
!date = object DATETIME(!year,!month,!day,!hour,!minute,!second)
!collection = object COLLECTION()
GOTO FRSTW TRAN
!collection.scope(!!ce)
!filter = object EXPRESSION('upc(TSTATE) eq |COMPLETE|')
!collection.filter(!filter)
!collection.type('TRINCO')
!trincos = !collection.results()
!promptstr = 'Found ' & !trincos.size().string() & ' complete transactions...'
$P $!promptstr
!promptstr = 'Deleting obsolete transactions more than ' & !days.string() & ' days old...'
$P $!promptstr
!numdel = 0
!numh = 0
do !trinco values !trincos
   !datecm = object DATETIME(!trinco.datecm)
   !datend = object DATETIME(!trinco.datend)
   if (!trinco.incsta.upcase() eq 'PROCESSED' and !datecm.lt(!date) or !trinco.incsta.upcase().inset('TIMED OUT','CANCELLED','REDUNDANT') and !datend.lt(!date)) then
      !numdel = !numdel + 1
      !!CE = !trinco
      DELETE TRINCO
      if (!!CE.members.size() eq 0) then
         DELETE TRLOC
         !numh = !numh + 1
         if (!!CE.members.size() eq 0) then
            DELETE TRUSER
            !numh = !numh + 1
            if (!!CE.members.size() eq 0) then
               DELETE TRDAY
               !numh = !numh + 1
               if (!!CE.members.size() eq 0) then
                  DELETE TRMONT
                  !numh = !numh + 1
                  if (!!CE.members.size() eq 0) then
                     DELETE TRYEAR
                     !numh = !numh + 1
                  endif
               endif
            endif
         endif
      endif
   endif
enddo
$P $!numdel obsolete transactions deleted
$P $!numh associated hierarchy elements deleted
if (!numdel eq 0) then
$P No merge necessary
!!Alert.Message('No obsolete transactions found')
else
Index
C
Command Processing . . . . . . . . . . . . . . . 2:1
D
Database
allocation check . . . . . . . . . . . . . . . . 5:1
allocation to location . . . . . . . . . . . . . 5:1
creating extract . . . . . . . . . . . . . . . . 16:3
creating master . . . . . . . . . . . . . . . . 16:3
de-allocation . . . . . . . . . . . . . . . 5:2, 5:3
deleting . . . . . . . . . . . . . . . . . . . . . . 11:1
macros . . . . . . . . . . . . . . . . . . . . . . 10:4
manual update . . . . . . . . . . . . . . . . 10:1
master of extract . . . . . . . . . . . . . . . 16:1
merging . . . . . . . . . . . . . . . . . . . . . . 6:1
reconfiguring . . . . . . . . . . . . . . . . . . 13:1
recovery . . . . . . . . . . . . . . . . . . . . . 12:1
recovery of global . . . . . . . . . . . . . . 12:2
recovery of primary . . . . . . . . . . . . . 12:2
recovery of primary location . . . . . . 12:2
recovery of secondary . . . . . . . . . . 12:1
synchronisation . . . . . . . . . . . . . . . 10:1
update delay . . . . . . . . . . . . . . . . . . 10:2
update protection . . . . . . . . . . . . . . 10:7
update timing . . . . . . . . . . . . . . . . . 10:4
updating . . . . . . . . . . . . . . . . . . . . . 10:1
DESIGN Manager files . . . . . . . . . . . . . 10:5
E
Extracts . . . . . . . . . . . . . . . . . . . . . . . . 16:1
access . . . . . . . . . . . . . . . . . . . . . . 16:8
children . . . . . . . . . . . . . . . . . . . . . 16:1
claim restrictions . . . . . . . . . . . . . 16:11
creating . . . . . . . . . . . . . . . . . . . . . 16:3
creating working . . . . . . . . . . . . . . . 16:5
dropping changes . . . . . . . . . . . . 16:14
explicit claim . . . . . . . . . . . . . . . . 16:11
extract claim . . . . . . . . . . . . . . . . . . 16:9
flushing . . . . . . . . . . . . . . . . . . . . 16:13
flushing command failure . . . . . . . 16:11
hierarchy . . . . . . . . . . . . . . . . . . . . 16:7
implicit claim . . . . . . . . . . . . . . . . 16:11
issuing changes . . . . . . . . . . . . . . 16:14
master . . . . . . . . . . . . . . . . . . . . . . 16:1
merging changes . . . . . . . . . . . . . 16:16
numbers . . . . . . . . . . . . . . . . . . . . . 16:6
parent database . . . . . . . . . . . . . . . 16:1
partial operations . . . . . . . . . . . . . 16:15
querying family . . . . . . . . . . . . . . . . 16:2
reference blocks . . . . . . . . . . . . . . 16:7
refreshing . . . . . . . . . . . . . . . . . . . 16:14
releasing claims . . . . . . . . . . . . . . 16:14
sessions . . . . . . . . . . . . . . . . . . . . 16:15
user claim . . . . . . . . . . . . . . . . . . . 16:9
using in . . . . . . . . . . . . . . . . . . . . . 16:8
variant . . . . . . . . . . . . . . . . . . . . . 16:18
F
Firewall . . . . . . . . . . . . . . . . . . . . . . . . . 18:1
writing to . . . . . . . . . . . . . . . . . . . . . 7:1
Global Daemon
access rights . . . . . . . . . . . . . . . . . . 3:1
diagnostics . . . . . . . . . . . . . . . . . . . . 4:1
location . . . . . . . . . . . . . . . . . . . . . . . 3:1
H
Hub
changing . . . . . . . . . . . . . . . . . . . . . . 9:1
recovering . . . . . . . . . . . . . . . . . . . . . 9:2
I
ISODRAFT files . . . . . . . . . . . . . 10:5, 17:2
K
Kernel Command . . . . . . . . . . . . . . 2:1, 7:1
L
Locations
off-line . . . . . . . . . . . . . . . . . . . . . . . 17:1
M
Macros . . . . . . . . . . . . . . . . . . . . . . . . . 10:4
P
Pending file . . . . . . . . . . . . . . . . . . . 2:1, 8:1
PLOT files . . . . . . . . . . . . . . . . . . 10:5, 17:2
Projects
backing up . . . . . . . . . . . . . . . . . . . 15:1
T
Transaction Audit . . . . . . . . . . . . . . . . . . 7:1
Transaction database
audit trail cancelled commands . . . . 7:7
audit trail dates and counts . . . . . . . 7:5
audit trail from TRINCO . . . . . . . . . . 7:2
audit trail from TROPER . . . . . . . . . . 7:4
audit trail from TROUCO . . . . . . . . . 7:3
audit trail results and messages . . . . 7:7
commands . . . . . . . . . . . . . . . . 2:1, 7:1
management . . . . . . . . . . . . . . . . . 12:3
merging . . . . . . . . . . . . . . . . . . . . . 12:3
merging and purging . . . . . . . . . . . 7:13
reading from . . . . . . . . . . . . . . . . . . . 7:1
reconfiguring . . . . . . . . . . . . . . . . . . 12:4
renewing . . . . . . . . . . . . . . . . . . . . . 12:3