
ECLRUN

Version 2010.2

User Guide


Copyright (c) 2010 Schlumberger. All rights reserved. Reproduction or alteration without prior written
permission is prohibited, except as allowed under applicable law.
Trademarks & service marks
"Schlumberger," the Schlumberger logotype, and other words or symbols used to identify the products and
services described herein are either trademarks, trade names, or service marks of Schlumberger and its
licensors, or are the property of their respective owners. These marks may not be copied, imitated, or used,
in whole or in part, without the express prior written permission of their owners. In addition, covers, page
headers, custom graphics, icons, and other design elements may be service marks, trademarks, and/or
trade dress of Schlumberger and may not be copied, imitated, or used, in whole or in part, without the
express prior written permission of Schlumberger.


Table of Contents
Introduction ........................................................................................................................................................ 1
ECLRUN Developments ................................................................................................................................. 2
New Features ................................................................................................................................................... 2
Previous Developments .................................................................................................................................. 2

Usage ..................................................................................................................................................................... 4
Command syntax ............................................................................................................................................ 4
Options ............................................................................................................................................................. 4
General .............................................................................................................................................................. 4
Remote Submission ........................................................................................................................................... 7
Example ......................................................................................................................................................... 8
Local submission to a queue ............................................................................................................................. 8
For LSF .......................................................................................................................................................... 9
For MS HPC ................................................................................................................................................... 9
Remote Linux/UNIX with no queuing system .................................................................................................... 9
Example ......................................................................................................................................................... 9
Local queuing system ........................................................................................................................................ 9
Example ....................................................................................................................................................... 11
License resource management ........................................................................................... 13

User Configuration File ................................................................................................................................. 14


Network configuration file ............................................................................................................................ 14
Shared drives ................................................................................................................................................. 15
Example ........................................................................................................................................................... 16
Setting up Intel MPI ....................................................................................................................................... 17
EnginFrame protocol .................................................................................................................................... 18
Support for LSF 7x HPC ............................................................................................................................... 19
Interactive run ................................................................................................................................................ 20
Multiple Realization ....................................................................................................................................... 23
Parallel run ..................................................................................................................................................... 24
Simulation output cleanup ........................................................................................................................... 24
INTERSECT .................................................................................................................................................... 25
General ............................................................................................................................................................ 25
INTERSECT simulation ................................................................................................................................... 25
Migrator ............................................................................................................................................................ 26
INTERSECT Command Editor ........................................................................................................................ 27

Installation Instructions ............................................................................................................................... 28


Windows PC installation - client ................................................................................................. 28
Windows PC installation - server ................................................................................................. 29


UNIX or Linux installation ............................................................................................................................. 29
Test deployment ............................................................................................................................................ 30
Known Limitations ........................................................................................................................................ 32

Configuration ................................................................................................................................................... 34
Configuration variables ................................................................................................................................ 34
Configuration Files ........................................................................................................................................ 36
Example ........................................................................................................................................................... 37
eclrun.config ................................................................................................................................................. 38
SimLaunchConfig.xml .................................................................................................................................. 39

Troubleshooting .............................................................................................................................................. 41
Command not found ..................................................................................................................................... 41
Communication or connection errors ............................................................................................................... 42
Local queue errors ........................................................................................................................................... 45
INTERSECT and Migrator ............................................................................................................................... 46
Submission to MS HPC ................................................................................................................................... 46
Remote Submission ......................................................................................................................................... 46

Index .................................................................................................................................................................... 50



Introduction
ECLRUN is a utility for running the Schlumberger simulators and the pre- and post-processors. Simulations
can be run on local and remote servers and queuing systems (for example Platform LSF, Microsoft HPC).
The tool is used either directly at the command prompt or indirectly by GUI-based applications such as
Petrel, ECLIPSE Office and Simulation Launcher.
ECLRUN is an interface between an end user and different types of remote and local operating and queuing
systems. When running a remote simulation, ECLRUN can either copy the dataset to the remote server, or
detect the dataset on a shared network drive. If the dataset is transferred to the remote machine, ECLRUN
also fetches results back as the simulation progresses and also after it has finished.
The following workflows are supported:

Remote submission from Windows to a Linux/UNIX system with an LSF (Load Sharing Facility from
Platform Computing) queue system,

Local submission to LSF on Linux/UNIX or to MS HPC on Windows,

Submission to local Windows or Linux/UNIX machines,

Submission to remote machines running Linux/UNIX.

In the 2010.1 release, ECLRUN introduces some enhancements compared to previous releases. The
enhancements are mainly related to license checking and the fact that license aware scheduling is now
supported when submitting to MS HPC (see Configuration variables (p.34) for more details).
The 2010.1 release comes with a new version of Intel MPI, 3.2.2. ECLRUN has the ability to auto-detect
the latest installed version of Intel MPI so that no additional configuration is required. The auto-detection works
on all systems that support Intel MPI (see Setting up Intel MPI (p.17)).
Another enhancement that is new in 2010.1 is the ability to archive input files using the --zip-input
command line option (see Options (p.4)).
The 2009.2 ECLRUN executable was released with INTERSECT 2010.1. This executable introduced
support for the INTERSECT simulator, Migrator and INTERSECT Command Editor. Migrator and
INTERSECT Command Editor can only be run locally, while the INTERSECT simulator can also be run
remotely in the same way as the other supported simulators.
In ECLIPSE 2008.1 a new configuration file was introduced to control ECLRUN's behavior. When ECLIPSE
was installed with the macros option checked, an eclrun.config file was created in the macros directory
in the installation folder. The 2009.1 version not only supports its own configuration file but also those
introduced by Simulation Launcher. In the case where one or more configuration files are missing, ECLRUN
will work with default values and will not terminate execution, which was not the case in previous releases.
Configuration files are now formatted in XML. To learn more about configuration files or the installation
process see Configuration (p.34) and Installation (p.28).



ECLRUN Developments
New Features
The 2010.1 release of ECLRUN has new enhancements:

License aware scheduling is now supported with MS HPC (see License resource management
(p.13) for more details).

The latest version of Intel MPI is automatically detected on all systems that support Intel MPI
communication (see Setting up Intel MPI (p.17) for more details).

Include files with absolute paths are now allowed when running local simulations.

Archiving input files without running a simulation is possible by using the --zip-input option (see
Options (p.4) for more details).

Support for the EFTOKEN authorization mechanism when using the EnginFrame protocol (see
EnginFrame protocol (p.18) for more details).

A major change in the 2010.1 release is in the precedence of license environmental variables.
SLBSLS_LICENSE_FILE takes precedence over LM_LICENSE_FILE. This change affects submission to
MS HPC and local simulations on Windows (see License resource management (p.13) for more details).

Previous Developments
The 2009.2 ECLRUN executable brought some new enhancements to the way the Local Queue works on
Windows:

The ability to manage more local simulations; the length of the Local Queue is limited only by system
resources (see Local Queue (p.9) for more details).

No authentication is required when running parallel simulations with Intel MPI on Windows (see Setting
up Intel MPI (p.17)).

Another important improvement is a clean up mechanism that operates when a new simulation is started
(see Output cleanup (p.24)).
The 2009.2 executable also supports INTERSECT which comprises the INTERSECT simulator, Migrator
and INTERSECT Command Editor. (See INTERSECT (p.25) for more details).
The 2009.1 version extends the existing functionality, overcomes most of the known limitations and
introduces enhancements which are briefly listed below:

launch applications in an interactive mode (see Interactive run (p.20) for details),

a more flexible local and remote version matching mechanism (see Installation instructions (p.28) for
details),

support for large input files (> 4 GB),

new command line parameters for reports (see Options (p.4) for details),

support for the PATH keyword in DATA files,

a reduced number of unsupported characters in passwords (see Known limitations (p.32) for details),



support for both MS CCS and MS HPC,

a hierarchical structure for the configuration files (See Configuration (p.34) for details).



Usage
Command syntax
The command syntax is:
eclrun [options] [PROGRAM] [DIRECTORY|FILE]

where

PROGRAM is one of:

ecl2ix, eclipse, e300, frontsim, flogrid, floviz, graf, grid, ix, ixce, office, pvti,
scal, schedule, simopt, vfpi to run the specified application,

check to check on the status of the previously submitted job,

kill to abort a queued or running job.

DIRECTORY is the working directory for interactive applications. The default is the current working
directory.

FILE is the name of the input data file. This is required for ecl2ix, eclipse, e300, frontsim, ix,
ixce, check, kill. It may include a leading path and a trailing extension.
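For example, a plain local run, a status check on it, and a kill of it look like this (for remote jobs the check and kill commands may also need the -p option, as shown in Remote Submission (p.7)):

eclrun eclipse MYDATA.DATA
eclrun check MYDATA.DATA
eclrun kill MYDATA.DATA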

Options
General
The following general options are available:
-h, --help

Show a help message and exit,


-v VERSION, --version=VERSION

Version of the application to run; default=latest,


-k, --keepconfig

Keep an existing configuration ECL.CFG file if found. This is not recommended. It is better to use
ECL.CFU in your home directory, or ECL.CFA in the same directory as the data file. Note that
ECL.CFU is not transferred to remote servers, but ECL.CFA is transferable.
-m HOSTFILE, --machinefile=HOSTFILE, --hostfile=HOSTFILE

Name of the file containing the host names of the machines to be used for parallel processing. If
HOSTFILE begins with $ it is taken as the name of the environment variable from which the host
file is to be constructed.
Default=$LSB_HOSTS. This means that this option is not required if you are using the LSF
queuing system or running a local or remote parallel job. In the first case, the host file will be created
automatically from the list of processors supplied by LSF. When running a parallel simulation on a
local or remote machine, the host file will automatically be created and filled in with the list of names
of the local or remote machine, depending on the run. When using MS HPC, the host file is not
required.
Note that you may have to prefix the $ with \ to avoid shell substitution.
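The fragment below is a hedged sketch of a host file, assuming the common MPI machinefile convention of one host name per line (repeat a name to place more than one process on it); node01 and node02 are hypothetical host names:

node01
node01
node02
node02

It could then be supplied to a run as:

eclrun -m c:\hostfile.txt eclipse MYDATA.DATA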
--comm=COMM

Method of inter-process communication for parallel runs. Normally this is not necessary as the
system will determine the best available. Valid values are:

mpi (MPI with ethernet/gigabit, all systems, the default on SUN, SGI and Linux, except for
Altix when SCALI is installed)

sp2 (MPI with SP2 switch, IBM only, default on IBM)

myr (MPI with myrinet, Linux only)

sca (SCALI version of MPI with best available switch, Linux only)

alt (MPI with Numalink switch, Linux Altix 64 bit only, default on Altix Linux)

msmpi (Microsoft HPC MPI, MS Windows 64-bit only)

ilmpi (MPI with Intel, Windows and Linux 64-bit).
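For example, to force Intel MPI for a case already set up as parallel, rather than letting the system choose (normally unnecessary, as noted above):

eclrun --comm=ilmpi eclipse MYDATA.DATA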

-e EXENAME, --exename=EXENAME

Run the executable EXENAME instead of PROGRAM.EXE. If no path is given, the command looks in
the standard locations for executables. Note that the path given must be valid on the execution
machine.
--debug=DEBUG

Debug messages. The choices are:

none

file - sends debug messages to a file with suffix eclrun_dbg

both - sends debug messages to screen and the debug file

-b BUFFER, --buffer=BUFFER

Sets the buffer size in MB for network drives.


--preproc=PREPROC

Run the executable PREPROC on the dataset prior to job submission. Note that PREPROC is executed
on the client. If it does not exist, a warning is generated.
--exe-args=EXE-ARGS

Pass a string of arguments (EXE-ARGS) directly, without any parsing, to the executable as
command-line parameters. Applies to both local and remote submissions. Use quotation marks to
indicate the beginning and the end of the string. Implemented for all supported programs.
--report-versions

Print out the list of all installed versions of the specified program (that is eclipse, frontsim,
office, floviz, and so on). Applies to the local machine only. The list of versions contains entries
separated by single spaces, and sorted in reverse alphanumeric order. To query for a specific
version use the '-v' option. If the program is not installed, the report prints out 'none'.
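For example (the version numbers shown in the output line are illustrative only):

eclrun --report-versions eclipse
2010.2 2010.1 2009.2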


--install-path

Print out the full installation path to the specified program (that is eclipse, frontsim, flogrid,
and so on). The path does not contain the name of executable itself. To query for a specific version
use the '-v' option. If the program is not found, the report prints out 'not installed'.
--np=NUM_PROCESSORS

Specify the number of processors. Defaults to zero. If specified (greater than or equal to 1) this
setting overrides the default mechanism of determining the number of processors needed (see
Parallel run (p.24) for more details).
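For example, to request a 4-processor parallel run explicitly:

eclrun --np=4 eclipse MYDATA.DATA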
--remove-history=CASENAM

Remove the history entry for a specific data file. The DATA file has to exist.
-a ARCHITECTURE, --arch=ARCHITECTURE
Force the use of an executable for a specific architecture:

auto: on a 64 bit machine, a 32 bit executable is used only if a 64 bit one does not exist. The
default method.

64: do not use a 32 bit executable on a 64 bit machine.

32: do not search for a 64 bit executable on a 64 bit machine.

-i, --interact

Use interactive mode. You are prompted for options that were not specified on the command line. The
interactive mode applies only when running simulators or pre- and post-processors.
--mpi-args=MPIARGS

Pass additional parameters directly to the MPI command. The arguments are not parsed in any
way.
--report-queues

Displays a space-separated list of LSF queues in alphabetical order (in addition use the -s and -u remote submission options).
--zip-input

Creates a zip file containing the input file as well as all include files. The resulting file's basename
will be the same as the input file's, i.e. CASE.DATA -> CASE.zip. This works with all supported
simulators on all supported operating systems.
--preserve-existing-zip

Preserves the existing zip file. Defaults to false which means that the existing zip file is deleted
before the new one is created. Takes effect only when used with --zip-input, otherwise it is
ignored.
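For example, the following archives MYDATA.DATA together with all of its include files into MYDATA.zip without running a simulation:

eclrun --zip-input eclipse MYDATA.DATA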
--license-check=OVERRIDE_LICENSE_CONFIG_SETTINGS

This option overrides the configuration file variable ChkLicOnWin which enables or disables the
license check for local Windows simulations. If the license check is disabled then the LICENSES
keyword in the DATA file is also ignored. The available options are:
a. config: the configuration file controls the license check.


b. allow: enables a local license check (overrides the config file).


c. disable: disables the local license check (overrides the config file).
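For example, to disable the local license check regardless of the ChkLicOnWin configuration setting:

eclrun --license-check=disable eclipse MYDATA.DATA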

Remote Submission
The following options apply to LSF or a Windows server with MS HPC:
Linux/Unix options
-s SUBSERVER, --subserver=SUBSERVER
SUBSERVER is the IP address or name of the submission server from which you want to submit the
job. Specify localhost for local submission to LSF, and always specify localhost when submitting to MS HPC.
-q QUEUE, --queue=QUEUE

Submit the simulation job to QUEUE. If the remote submission option --queuesystem equals
local, then specify the waiting time as: high (60s, the default), medium (300s) or low (900s).
-u USERID, --username=USERID
USERID on SUBSERVER. Defaults to local userid.
-p PASSWD, --passwd=PASSWD

Password for the submission server. Use EFTOKEN as the password to enable EnginFrame
EFTOKEN authorization (see EnginFrame protocol (p.18) for more details). Only this subset of
the ASCII character set is supported (note that it also includes the white space character):
!#$%&()*+,-./:;<=>?@[]^_`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789

--protocol=PROTOCOL

Protocol and method of authentication for inter-host communication. Choices are:

unsecure: uses RSH and RCP; it requires .rhosts to be set up;

password: uses SSH and SCP with password; you must use the --passwd option unless
running from a console (default).

ef: uses the EnginFrame Web Service installed on a remote machine for copying and executing
remote commands.

--localcleanup

Remove the old, local simulation output files when beginning a new simulation. The default value
is false, which means that no local cleanup is performed.
--noremotecleanup, --nocleanup

Disable the automatic removal of files on the remote machine after the run has finished. Intended
primarily for debugging purposes. The default value is false.
--queuesystem=QUEUESYSTEM

Sets up the queue system type. It defaults to local when running local simulations on Windows;
to CCS when submitting to the queue on localhost on Windows; and otherwise to LSF. Choices
are:


LSF: submitting to LSF. This is set up automatically when the remote machine is Linux.

CCS: Submitting to MS HPC. This is set up automatically when the remote machine is Windows.

local: Use local queue. This is set up automatically when running local simulations on Windows
(using the local queue system).

The parameter overrides the QueueSystem configuration file variable (see Configuration variables
(p.34) for more details).
--queueparameters=QUEUEPARAMETERS

Queue parameters typed by the user and specific to the queuing system in use. This allows
additional arguments to be passed to the 'bsub' (LSF) or 'job submit' (MS HPC)
command. Use double quotation marks to delimit the string when submitting from a PC and
single quotation marks when submitting from Linux/UNIX. This option is not supported for the
local queue.
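A hedged example: passing one extra bsub argument through to LSF from a PC (the host name hostA is hypothetical; bsub's -m option requests a specific execution host):

eclrun -s myserver -q myqueue -u myuserid --queueparameters="-m hostA" eclipse MYDATA.DATA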

Example
There is no difference in the way jobs are submitted to LSF or MS HPC in terms of command line parameters.
When submitting a job to MS HPC, the -s parameter must always be specified as localhost and the
user needs to use a shared drive with write access as the working directory.
The command shown below:
eclrun -s myserver -q myqueue -u myuserid -p mypasswd eclipse MYDATA.DATA

submits your simulation run to the queue called myqueue (when submitting to MS HPC it is the name of the
scheduler) on the remote machine called myserver. It also stores the details of your job in a file called
MYDATA.ECLRUN, and reports them to you in the message log file MYDATA.MSG. This can be displayed in
Petrel by right clicking on the simulation in the Cases pane and selecting Show simulation log from the
context menu. It is automatically displayed if it contains any error messages.
To get the results back, use the command:
eclrun -p mypasswd check MYDATA.DATA

This reads the details of your job from the MYDATA.ECLRUN file, and connects to the remote server to ask
for any results. The server zips up any results files available, and the local machine downloads them and
unzips them, unless you are running in shared network drive mode (see WindowsDirs and FileMode
variables, in the Configuration variables (p.34)). It also updates the log file with the present status of the
job.
To kill the job while it is running, use the command:
eclrun -p mypasswd kill MYDATA.DATA

You can run all these commands without specifying a password but you are prompted for it when executing.

Local submission to a queue


The following options apply on Linux/UNIX or MS HPC on Windows.

To submit a job locally to an LSF or MS HPC queue, you need to specify the -s parameter as localhost as
shown below. The rest of the parameters are exactly the same for LSF and MS HPC submission.

For LSF
If you are logged on to an LSF submission server and want to submit a job, ECLRUN does not have to use
either the SSH or the RSH protocol. This means that for local submission on Linux/UNIX you do not need to
specify any password to connect:
eclrun -s localhost -q myqueue -u myuserid eclipse MYDATA.DATA

For MS HPC
If you are logged on to an HPC submission server and want to submit a job then ECLRUN does not have
to use either the SSH or the RSH protocol. Note that a password is always required for authentication purposes:
eclrun -s localhost -q myqueue -u myuserid -p password eclipse MYDATA.DATA

Note: The submission command for MS HPC remains the same when running on the cluster's head node
or any other machine with MS HPC Pack installed on it.

Remote Linux/UNIX with no queuing system


This use is similar to a remote server with LSF queuing, except that the --queue parameter is omitted. The
simulation is run in the foreground on the remote machine. This feature is primarily of use in testing your
connection to the remote server.

Example
Once the simulation is finished, all result files are automatically copied to the user's local working directory.
You do not need to use the check command.
eclrun -s submissionserver -u username -p password eclipse MYDATA.DATA

Local queuing system


Specify the job priority using -q as high (the default), medium or low, without the -s parameter.
This "queuing" system is designed primarily for use on multi-core Windows workstations. It simply sets a
limit on the number of simulations that can run simultaneously. When another simulation is submitted, it
checks how many are currently running and compares this to the limit. If the limit has already been reached,
the new job sleeps for 20 (high), 300 (medium) or 900 (low) seconds, then tries again. The order in which
jobs are executed is random. The limit on how many jobs can be run is set in one of the configuration files.
See Configuration (p.34).
ECLRUN uses the eclrun.mutex file in the Windows TEMP directory as a semaphore for synchronization
purposes (the MUTEX file contains the PID of the instance of ECLRUN which is using it at present). If this file
exists, it means that an instance of ECLRUN is checking the availability of resources and the other instances
of ECLRUN must wait until it is finished. When the check on the availability of resources is complete,
ECLRUN removes this file and another one may start the same process. Only one instance of ECLRUN
may be checking the availability of resources at the same time.
Here is the list of possible user console messages or errors when using the local queue system:
1. "The eclrun.mutex is being processed by another instance of ECLRUN. Next check
in 1 second(s)"
Another instance of ECLRUN is currently checking the status of the local queue and the availability of
resources. The one second period is the hard-coded default.
2. "Number of job(s) running equals 5 and is limited to 5. Cannot run additional 4
job(s) at the moment. Next check in 20 second(s)..."
ECLRUN cannot run more jobs than specified in EclNJobs (this defaults to the real number of CPU
cores available on the client's machine - see Known limitations (p.32)). By specifying -q as 'high',
'medium' or 'low', different waiting times may be forced: 20, 300 or 900 seconds.
3. "EclNJobs variable is set to 5 meaning that only 5 job(s) can be run (6
requested). Check DATA file and ECLRUN configuration files."
The number of jobs exceeds the limitation specified in EclNJobs in eclrun.config. You cannot run a
6-way parallel case when only 5 jobs can be run at the same time. ECLRUN terminates.
4. "Required resources are available. Launching in progress..."
The number of jobs requested to run does not exceed the limitation. Simulation is being launched.
5. "Unable to access the eclrun.mutex for 240 second(s). Another instance of ECLRUN
has hung or too many instances of eclrun are running now."
This may appear when one of the instances of ECLRUN in the local queue has hung when checking
resources availability. For more details go to Troubleshooting (p.41). ECLRUN terminates.
6. "EclNJobs variable contains a non-integer value. Check your ECLRUN configuration
file(s)."
This message appears if you assign a non-integer value to the EclNJobs configuration file variable.
ECLRUN terminates.
7. "Neither TEMP nor TMP environmental variable detected on this machine."
The Windows temporary folder cannot be found on your local machine. This means that ECLRUN cannot
manage the eclrun.mutex synchronization file required when using the local queue system. ECLRUN
terminates.
8. "Unable to check license availability. Check if lmutil.exe is present in macros
folder."
ECLRUN failed to execute 'lmutil.exe', which should be in the macros directory. ECLRUN terminates.
9. "Number of jobs that can be run on this machine is less than or equal to 0. Check
your ECLRUN configuration file(s)."
A negative value was assigned to the EclNJobs configuration file variable. ECLRUN terminates.
10. "License check (it may take a while)..."


Once ECLRUN passes the test for the number of jobs required, it then checks to find out if all of the
licenses requested by the LICENSES keyword in the DATA file are available. If the LM_LICENSE_FILE
environment variable points to a network license then it may take a while to get a response from the
server.
11. "Licenses unavailable (requested: eclipse = 1 : networks = 1 : lgr = 1 / missing:
['networks', 'lgr']). Next check in 20 second(s)..."
ECLRUN will check again after a period of time has elapsed. The wait period depends on the job priority
(60 seconds for high priority, 300 seconds for medium priority and 900 seconds for low priority).

Example
This example presents the way two different instances of ECLRUN try to submit serial cases to the local
queue when only one CPU is available and no other simulation is running.
1. Make sure that the EclNJobs variable is set to 1 (see Configuration (p.34) for more details).
2. Prepare two serial DATA files called MYDATA1.DATA for ECLIPSE and MYDATA2.DATA for FrontSim.
3. Open two separate command prompt windows (click the Windows Start button and choose Programs
| Accessories | Command Prompt).
4. In one of them type the ECLRUN command as below and finish by pressing Enter:
C:>eclrun --queue=medium eclipse MYDATA1.DATA

This produces the following:


License check (it may take a while)...
Required resources are available. Launching in progress...
Message Executing eclipse on pc system with default architecture named mymachine
Message Process ID is 5020
1 READING RUNSPEC
2 READING TITLE
3 READING DIMENS
4 READING OIL
...
Warnings 1
Problems 0
Errors   0
Bugs     0
PRODUCING RUN SUMMARIES
MYDATA1.SMSPEC exists
SPEC file successfully read
Opened file MYDATA1.UNSMRY
1 files exist containing 9 timesteps
Summary file MYDATA1.UNSMRY opened
17 vectors loaded
RUN SUMMARIES COMPLETE
C:>


In the first case (MYDATA1.DATA) the simulation is launched immediately because the requested number
of CPUs (jobs) equals the number of CPUs available (EclNJobs) and so ECLRUN reports: "Required
resources are available. Launching in progress...".
5. In the other one type the ECLRUN command as below:
C:>eclrun frontsim MYDATA2.DATA

6. Press Enter. This produces the following:


The local queue is being processed by another instance of ECLRUN.
Next check in 1 second(s).
The local queue is being processed by another instance of ECLRUN.
Next check in 1 second(s).
Number of job(s) running equals 1 and is limited to 1.
Cannot run additional 1 job(s) at the moment. Next check in 20 second(s)...
License check (it may take a while)...
Required resources are available. Launching in progress...
Message Executing frontsim on pc system with default architecture named mymachine
Message Process ID is 6116
FrontSim Version: 2008.1
User Name    : username
OS Name      : Microsoft Windows 2000 (Build 5.1.2600 Service Pack 2)
Host Name    : MYMACHINE
Exe Path     : C:\ecl\2008.1\bin\pc\frontsim.exe
Current Date : Nov 14 2007
Build Date   : Oct 6 2007
Reading Input file: MYDATA2.DATA
...
Saving : MYDATA2.UNRST
@--Message at time 10 January 2004 step 50:
@ RSM file generated.
------------------------------------------------------------
Production 1.15985e+005 m3   CPU Time : 0 H 0 M 1.00 S
FrontSim 2008.1 Oct 6 2007
C:>

This second instance of ECLRUN (MYDATA2.DATA) needs to wait until the first one finishes checking the
number of CPUs available (the number of jobs that can be run) and reports: "The local queue is being
processed by another instance of ECLRUN. Next check in 1 second(s)".
Then it finally gets access to the queue but finds that no more jobs can be run at the moment: "Number
of job(s) running equals 1 and is limited to 1. Cannot run additional 1 job(s) at
the moment. Next check in 20 second(s)...". After 20 seconds it checks again and this time
runs: "Required resources are available. Launching in progress...". Just before the
simulation is launched, ECLRUN checks if the requested licenses (specified by the LICENSES keyword in
the DATA file) are available by parsing the output of the 'lmutil.exe' command.


Note: The first few lines in each of your outputs may be different from the results shown here. Because
access to the queue is randomized, you may not see exactly what is shown above. However, you should
see something similar.

License resource management


The license reservation mechanism is currently supported when submitting to LSF and MS HPC
(LsfLicenses=True in the configuration file) or to a local Windows machine. If the LICENSES keyword is
present in the dataset, ECLRUN will use it to build a list of required licenses (unknown license features will
be skipped). This is passed to the LSF or MS HPC queue system (depending on submission) to ensure
that the job will not start until all required licenses are available. When running a job on Windows, ECLRUN
also analyzes the availability of licenses by parsing the output of the command
<installation_path>\macros\slblmstat.exe -a -v version

where <installation_path> usually points to the C:\ecl directory. See Local queuing system (p.8)
to learn more about the local queue.
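As a purely illustrative, hedged sketch (the authoritative keyword syntax is defined in the simulator reference documentation, not in this guide), a LICENSES keyword requesting the two features from the example message in the previous section might look like:

LICENSES
 'networks' /
 'lgr' /
/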
From 2010.1 onwards the SLBSLS_LICENSE_FILE environment variable takes precedence over
LM_LICENSE_FILE. If both variables exist then LM_LICENSE_FILE is ignored. If SLBSLS_LICENSE_FILE
does not exist then LM_LICENSE_FILE is expected to point to a valid license.
Because of changes to the main license environment variable, slblmstat.exe was introduced. It has the
ability to distinguish between the two license variables (SLBSLS_LICENSE_FILE and LM_LICENSE_FILE).
If the LICENSES keyword is not specified then the requested resources depend on the name of the simulator
and the number of processors required as shown:
Name of Simulator   Number of processors   Licenses requested                          Comments
ECLIPSE             1                      1 x datacheck OR 1 x eclipse                Data check only if the NOSIM keyword is present
ECLIPSE             N>1                    1 x eclipse AND N x parallel
E300                1                      1 x datacheck OR 1 x eclipse OR 1 x e300    Data check only if the NOSIM keyword is present
E300                N>1                    1 x eclipse OR 1 x e300 AND N x parallel
FrontSim            1                      1 x frontsim                                FrontSim only supports serial runs

Table 3.1: Resources required for simulators and processors


From 2010.1 onwards ECLRUN supports license aware scheduling when submitting to MS HPC. A license
string interpretation tool has to be installed separately on the MS HPC head node.
SLBSLS_LICENSE_FILE has been added to the list of cluster-wide variables (see Installation Instructions (p.28)
for more details).


From 2009.2 onwards, ECLRUN does not, by default, check license availability when running locally on
Windows. The license check can, however, be enabled by setting the ChkLicOnWin configuration file
variable to True (it defaults to False).
Note: There is no support for license aware scheduling for INTERSECT.
Note: slblmstat.exe depends on lmutil.exe. Both executables are automatically deployed under
the macros directory at ECLIPSE installation time.
Note: In previous versions of ECLRUN the final license reservation string was influenced by the presence
of the THERMAL and COMPS keywords. The keywords are no longer searched for and therefore relevant
license feature names have to be manually appended to the list of license features in the LICENSES keyword
instead.
Note: OR operators are not supported in the license request string when submitting to MS HPC. This is
due to limitations in the license aware scheduling on this platform. Complex license request strings
(containing at least one OR operator) will be automatically simplified to return the first alternative, i.e. lic1=3
AND lic2=5 OR lic3=1 AND lic4=2 will be converted to lic1=3 AND lic2=5.

User Configuration File


The configuration file is meant to be used for debugging and testing purposes only. The procedure described
below applies to Linux and requires both eclrun and eclrun.config to be copied into a customized location
(other than the macros installation folder).
The directory which contains the eclrun and eclrun.config files must be in the user's system path.
When the eclrun command is typed on the command line, the user's customized eclrun will be found rather
than the global one.
The eclrun file in the specific user location will not be modified automatically when ECLIPSE is reinstalled.
You have to upgrade this file manually.
If a user configuration file is detected, the global configuration is completely ignored.
A user configuration file is allowed but not supported. The only supported configuration file on Linux is the
global eclrun.config file.
Note: The hierarchical model of configuration does not apply to Linux. When a user configuration file is
present, this is the only configuration file processed on Linux.

Network configuration file


This file is optional and is not created during installation; it always needs to be created explicitly.
It was designed for administrators to enable them to enforce some specific cross-site
configuration. The file will be referred to as the network file in this section.


The network file follows the same XML format as the SimLaunchConfig.xml file and has the same
hierarchical dependencies (see Configuration (p.34) for more details).
The location of the network file is not fixed (unlike the SimLaunchConfig.xml file) and therefore in order
to be found it needs to be linked from one of the fixed configuration files. The only configuration file which
can point to the network file is SimLaunchConfig.xml. This file is located in one of the following directories:
$ALLUSERSPROFILE\Application Data\Schlumberger\Simulation Launcher

or
$USERPROFILE\Application Data\Schlumberger\Simulation Launcher

The path to 'Application Data' on MS Vista is 'AppData\Roaming'.


The file may define additional queues as well as redefine selected configuration variables (see
Configuration Variables (p.34)).
If the network file is linked from the current user configuration file, it does not override the "all users"
configuration. If the file is linked from the "all users" configuration file, it overrides all of the configuration
files.
In order to link to a network file, use the <NetworkFile/> XML tag. The example below shows a fragment
of the SimLaunchConfig.xml file that points to a file called network.xml in the network path
\\server\teams\configuration:
<Configuration>
  ...
  <NetworkFile>\\server\teams\configuration\network.xml</NetworkFile>
  ...
</Configuration>

Note: The network file name is not restricted but it needs to be a valid file name on Windows.
Note: Any correct Windows path may be used as a network path (including UNC paths).
Refer to the "Simulation Launcher User Guide" for additional information on the network configuration file.

Shared drives
A shared drive is a network location that can be accessed from a local Windows machine as well as from
a Linux remote server. It is realized by NetApp or Samba or any other technology which enables network
data sharing.
This means that if a file is created on a shared drive on Windows it is immediately seen on the corresponding
shared drive on Linux. If shared drives do not exist, files are transferred over the network between the local
machine and the remote cluster.


Support for the shared drive mechanism was first introduced in ECLRUN in 2006 and was initially controlled
by the -d option to specify a Linux shared path. In this approach each user was able to control the shared
drive mechanism at submission time on the command line.
In order to make the control of the location of the shared drives more centralized and to make the mechanism
more robust, the -d option was replaced by configuration variables in the remote eclrun.config file. The
shared drives mechanism cannot be controlled by any of the local configuration files (see Configuration
(p.34) for more details).
The remote global configuration file can be modified by a power user. However, by creating a remote user
configuration file, a user can set where to mount shared drives (see Configuration files (p.36) for more
details).
If a shared drive is not detected, the remote cluster can request that the local machine sends the required
input files to it. In this case, all the input files will be transferred over the network to a uniquely named
temporary directory created by default under the folder pointed to by the TempRunDir configuration file variable
in eclrun.config (see Configuration variables (p.34)) on the remote submission machine.
There are three configuration variables involved in making the decision about whether to use the shared drive
mechanism or to transfer files: FileMode, WindowsDirs and TempRunDir.

FileMode decides whether shared drives are to be used as the only mechanism of handling files
(SHARE) or whether the only mode allowed is file transfer (TRANSFER) or if both are allowed (BOTH). If
FileMode=BOTH then ECLRUN will try to detect shared drives first and, if that fails, it will transfer files.
If FileMode=SHARE and no file share is detected, then ECLRUN raises an error message and
terminates.

The default value is BOTH.

WindowsDirs defines a comma-separated list of possible Linux mount points (paths) to be checked
when searching for shared drives.

Defaults to the user's home directory (%HOME%).

TempRunDir represents a location to transfer files to when either a shared drive is not detected
(FileMode=BOTH) or when only file transfer is allowed (FileMode=TRANSFER).

See Configuration variables (p.34)

Example
This example demonstrates how to set up a shared drive and shows the contents of the remote
eclrun.config file
FileMode=BOTH
WindowsDirs=/linux/shared
TempRunDir=%HOME%

as well as the Windows shared drive itself.


The Linux path /linux/shared should then be mounted on Windows, that is:
X:\=\\netapp\linux\shared


When you try to submit a DATA file which is physically located on the X:\ drive (or in any subdirectory)
then it will be automatically detected at submission time and the DATA file will not be transferred. You do
not need to specify any additional ECLRUN command line options; the shared drive detection mechanism
will work transparently. If the DATA file is not on the X:\ drive and therefore the shared drive is not detected,
then the file will be transferred to the user's home directory on the remote machine.

Setting up Intel MPI


Intel MPI is the default communication type for parallel jobs on Windows and Linux (on Power PC, the
communication type defaults to Scali MPI).
ECLIPSE 2009.1 came with two types of MPI executables for parallel runs on Windows: Microsoft MPI and
Intel MPI. This was a change to the 2008.x versions which provided Microsoft MPI and Verari Systems MPI/
Pro (not covered in this manual).
In order to run ECLIPSE parallel jobs on Windows machines, the proper MPI libraries need to be installed
first. MS MPI executables should be used on a Microsoft HPC cluster which by default provides MS MPI
libraries.
The Intel MPI run-time can be installed from the ECLIPSE DVD (refer to the Install Guide for more details).
By default you do not need to register a password with wmpiregister.exe. This is a change from 2009.1
where it was mandatory. If the NoLocalQueueAuth configuration file variable on Windows is set to False
(default is True) then the password registration is required. wmpiregister.exe can be found in the bin
directory under the Intel MPI installation folder. The IntelMpiWinDir configuration variable can be used
to point to a non-standard installation directory in case it is not found automatically by ECLRUN (see
Troubleshooting (p.41) for details).
The 2010.1 version of ECLRUN has the ability to auto-detect the latest version of Intel MPI on both Linux
and Windows, removing what used to be the user's responsibility.
When ECLRUN is to launch a parallel case, it checks to see if there are enough processors to run the case.
To do this, it takes the number of ECLIPSE processes currently running and adds the number of processors
that it requires. It then compares the resulting total to the value set by the EclNJobs configuration variable.
This variable defines the number of jobs that can be run at the same time on the user's local Windows
machine. If the value of the EclNJobs configuration variable is greater than or equal to the number of
processors required, as totaled by ECLRUN, then the job is launched. Otherwise ECLRUN holds the run
until the number of processors requested is lower than or equal to the number available. If the number of
processors requested is greater than the number in EclNJobs, ECLRUN terminates with an error stating
that there is an insufficient number of CPUs.
The EclNJobs variable defaults to the number of CPUs of the machine. The value can be easily overwritten
by redefining it in one of the Windows configuration files (see Configuration (p.34) for more details).
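A hedged sketch of such an override, assuming (by analogy with the <NetworkFile/> tag shown in Network configuration file (p.14)) that each configuration variable is written as an XML tag of the same name; verify the exact tag names against Configuration (p.34):

<Configuration>
  <EclNJobs>8</EclNJobs>
</Configuration>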
To run a parallel ECLIPSE case on Windows type one of the following at the command line:
eclrun eclipse FILENAME

or
eclrun e300 FILENAME


In addition to the syntax above, the user can provide a list of hosts to run the simulation on by using
the -m option, as shown below:
eclrun -m c:\hostfile.txt eclipse FILENAME

Note: The 2009.2 version of ECLRUN introduced a configuration file variable named IntelMpiSsh to
control the communication protocol used by Intel MPI child processes on Linux (in the previous release it
was fixed to SSH). The variable only accepts True or False. If it is set to True (default) then SSH is used,
otherwise RSH is used.
Note: The 2010.1 release of ECLIPSE comes with a new version of Intel MPI, 3.2.2. It is
automatically installed on Linux when ECLIPSE is installed. On Windows, the user is expected to install it
separately from the ECLIPSE DVD.

EnginFrame protocol
ECLRUN currently supports three connection protocols: SSH, RSH and EnginFrame Web Services.
Use the --protocol command-line option to change the default protocol (see Options (p.4) for more
details).
SSH is the default protocol when submitting to a remote Linux cluster. ECLRUN uses the Putty SSH client
to connect to an SSH server on the remote Linux machine. The connection is encrypted.
RSH is an unencrypted protocol and therefore is not recommended. The protocol requires an RSH daemon
running on the server.
SSH uses the plink.exe and pscp.exe commands which are copied into the ecl\macros directory
during ECLIPSE installation. RSH uses the Windows built-in rsh and rcp commands. These two protocols
are ready to use practically 'out of the box'.
The EnginFrame Web Services connection protocol requires additional configuration on both Linux and
Windows sides.
Linux configuration is not covered in this manual (to learn more about server-side configuration of
EnginFrame Web Services see the NICE web page: http://www.nice-italy.com).
Windows configuration involves setting up the EfPath (defaults to enginframe), EfPort (defaults to 8080)
and EfSsl (defaults to False) configuration variables and overriding the default connection protocol (--protocol=ef).
The 2009.2 version of ECLRUN is deployed with version 3.0.3 of the efeclrun plugin (developed by NICE).
ECLRUN uses the plugin to communicate with a remote EnginFrame Web Service. The current version
supports HTTP authentication (for further information on EfHttpAuth see the Configuration variables
(p.34)). On Windows efeclrun is built into eclrun.exe, while on Linux it is deployed under
/ecl/macros/efeclrun.
The 2010.1 version of ECLRUN is deployed with efeclrun version 3.0.4. This version supports a new type
of authorization that is based on a multi-session token. The token, once generated, is then reused by
efeclrun instead of using a password. The first time that the token is generated, the user is prompted for
a remote password. To enable EFTOKEN authorization, use --passwd=EFTOKEN --protocol=ef as
shown in this example:
eclrun -s https://mysrv:9090/ef -q lsfqueue -u user
--passwd=EFTOKEN --protocol=ef frontsim CASENAME

The first time the command is called, the user is prompted for a password in a popup window. Once provided,
it will then be reused for the following sessions (i.e. Petrel can be restarted multiple times before a new
password prompt appears).
The following example demonstrates how to submit to a remote server with the EnginFrame web service
running on it. The web address of the web service is: http://myserver:8080/enginframe
eclrun -s http://myserver:8080/enginframe -q lsfqueue -u user
--protocol=ef eclipse CASENAME

--protocol=ef is mandatory.
-s may either point to a full URL or just the name of the server (-s myserver) in which case ECLRUN
will build the URL itself based on EfPath, EfPort and EfSsl configuration file variables. This is shown in
the examples below:

The configuration variables are set up as follows:


EfPath=engin_frame
EfPort=1234
EfSsl=True

If the user specifies the -s option as presented below:


-s my_engin_frame_srv

then ECLRUN will internally replace the -s value with:


https://my_engin_frame_srv:1234/engin_frame

See Configuration variables (p.34) for more details on EnginFrame related configuration variables.

Support for LSF 7x HPC


LSF HPC enables the monitoring and control of parallel jobs. This has two main advantages over using
LSF without the HPC extensions. Firstly, information such as CPU time and resource usage is logged
correctly. Secondly, all the processes that form the job are monitored and controlled together as a whole.
This means that when a parallel ECLIPSE job finishes, is aborted or crashes, all the child processes are
cleaned up by LSF, which effectively prevents zombie processes being left on the machines.
In order to use these mechanisms make sure that LSF 7 HPC (the recommended version is LSF 7 update
4) is installed on your cluster. In addition, you need to set the ECL_LSFHPC environmental variable on Linux.


The mechanism will not work properly on heterogeneous clusters where the head node's architecture is
different from that of any of the compute nodes.
To switch off the mechanism, unset the environment variable.
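A minimal sketch in bash syntax (the guide does not state a required value, so any non-empty value is assumed to enable the mechanism):

export ECL_LSFHPC=1   # enable LSF HPC support (value assumed arbitrary non-empty)
unset ECL_LSFHPC      # switch the mechanism off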
Note: LSF 7 update 5 is required if MR license scheduling is used (see Multiple Realization (p.23) section
for details).

Interactive run
Submission, check and kill actions can be performed in either batch or interactive mode. The batch mode
is mainly used by applications based on ECLRUN as a job submission utility.
The interactive mode is one of the enhancements first introduced in the 2009.1 version to enable users to
type in all relevant options (essential for submission, check or kill operations) interactively rather than
specifying them in the command line as applications do.
In order to switch on the interactive mode a single command line parameter is required: --interact, or -i for short.
Any additional options typed on the command line will also be passed to ECLRUN. If some of the essential
options are already provided on the command line, then the user will not be prompted for them.
The interactive mode can be used to run any of the supported applications (pre- and post-processors and
simulators) as well as to check or kill a remote simulation. The mechanism is available on Windows and
Linux.
The example below demonstrates how to submit a job to a remote LSF queue interactively:
C:\ecl\2009.1\eclipse\data>eclrun -i
Choose operation:
submit
check
kill
[default submit]:
Choose a program to run:
e300
ecl2ix
eclipse
flogrid
floviz
frontsim
graf
grid
ix
ixce
office
pvti
scal
schedule
simopt
vfpi
weltest
[default eclipse]:


Use the latest version [Y/n]:


Type name of an input file:ACI.DATA
Remote submission [y/N]:y
Type name of the remote server:<remote_server>
Type user name [default <user_name>]:
<user_name>@<remote_server>'s password:
Submit to a queue [y/N]:y
Type the queue name (found in config file: LSFqueue1):<lsf_queue>
Analysing the input file (it may take a while)...
Connecting to server
Preparing job for submission...
Analysing the input file (it may take a while)...
Uploading job to server...
ACI_200903310856414488.zi | 4 kB | 4.0 kB/s | ETA: 00:00:00 |  72%
ACI_200903310856414488.zi | 5 kB | 5.5 kB/s | ETA: 00:00:00 | 100%
Submitting job on server...
Message Job ACI submitted to queue <lsf_queue> with job_id=27236
Message Simulation is queued
Checking status of job on server...
Downloading status & results...
ACI_2009033108564421491.z | 2 kB | 2.7 kB/s | ETA: 00:00:00 | 100%

As presented above, the ACI.DATA case was successfully submitted to a queue named <lsf_queue> on
a remote server named <remote_server> for a user named <user_name>.
The user was first prompted for the type of operation to perform
Choose operation:
submit
check
kill
[default submit]:

The user pressed the Enter key to confirm the default operation 'submit' (it could also be typed in as a
string and then confirmed by pressing Enter).
In the next step, the user was prompted for the name of a program to run
Choose a program to run:
e300
ecl2ix
eclipse
flogrid
floviz
frontsim
graf
grid
ix
ixce
office
pvti
scal
schedule
simopt
vfpi
weltest
[default eclipse]:

When a pre- or post-processor is chosen, the user will not be prompted for a remote queue name, server name, user name or password, which are only relevant for simulators. The default choice is always 'eclipse'; in this example it was confirmed with the Enter key.
In the next step ECLRUN prompts for the version to run. It defaults to the latest available:
Use the latest version [Y/n]:

To run a specific version, the user must type 'n' and then type the exact name of the version to run.
Type name of an input file:ACI.DATA

Following the version to run, the user is prompted for the name of an input file. Just like in batch mode,
typing an extension is not obligatory. When submitting on Linux or to Linux, the name typed is case sensitive.
The next prompt decides whether it is a remote or local run. If the user just presses the Enter key, the case is launched on the local machine (the default answer):
Remote submission [y/N]:y

Otherwise ECLRUN will prompt for a few more inputs. The first of them is the name of a server to connect
to:
Type name of the remote server:<remote_server>

To connect to the server, the user credentials are also required:


Type user name [default <user_name>]:
<user_name>@<remote_server>'s password:

If the answer to the following prompt is the default (N), then ECLRUN carries on with submission to the server without using a queuing system:
Submit to a queue [y/N]:y

However, in this example the user wants to submit to a queue and therefore types y and presses the Enter
key.
The very last prompt relates to the queue name, which has no default value, so the user needs to type it in:
Type the queue name (found in config file: LSFqueue1):<lsf_queue>

At this point ECLRUN searches through SimLauncherQueues XML tags in SimLaunchConfig.xml to find
queues available on the previously specified remote server (see Configuration files (p.36) for more
details).
This is the last essential parameter needed for remote submission; once ECLRUN has it, it submits the job and displays output similar to that shown below:

Analysing the input file (it may take a while)...


Connecting to server
Preparing job for submission...
Analysing the input file (it may take a while)...
Uploading job to server...
ACI_200903310856414488.zi | 4 kB | 4.0 kB/s | ETA: 00:00:00 | 72%
ACI_200903310856414488.zi | 5 kB | 5.5 kB/s | ETA: 00:00:00 | 100%
Submitting job on server...
Message Job ACI submitted to queue <lsf_queue> with job_id=27236
Message Simulation is queued
Checking status of job on server...
Downloading status & results...
ACI_2009033108564421491.z | 2 kB | 2.7 kB/s | ETA: 00:00:00 | 100%

The interactive submission workflow makes the submission process much faster, especially for local runs. Checking and killing a simulation is even easier because the only options that are really needed are the case name, user name and user password. An example is presented below:
C:\ecl\2009.1\eclipse\data>eclrun -i
Choose operation:
submit
check
kill
[default submit]:check
Type name of an input file:ACI
Type user name [default <user_name>]:
<user_name>@'s password:
Connecting to server
Connecting to server to request status of job...
ACI.CLIENT | 4 kB | 4.0 kB/s | ETA: 00:00:03 | 20%
ACI.CLIENT | 19 kB | 19.9 kB/s | ETA: 00:00:00 | 100%
Message Simulation has finished.
Downloading status & results...
ACI_2009033109463920377.z | 32 kB | 32.0 kB/s | ETA: 00:00:04 | 16%
ACI_2009033109463920377.z | 190 kB | 190.6 kB/s | ETA: 00:00:00 | 100%
Cleaning up...

A kill operation would look similar, except that in the first step the user types kill instead of check.

Multiple Realization
The 2009.2 version of ECLRUN supports license aware scheduling for multiple realization runs when submitting to LSF. Multiple realization is the concept of running multiple ECLIPSE cases that represent different realizations of the same model using just one set of simulation license features.
To enable multiple realization license scheduling in ECLRUN, set the ECL_MR_SCHEDULING environment variable on the LSF submission host (unset it to disable). When the variable is set and the MULTREAL keyword, holding a valid MR key, is present in the DATA file, ECLRUN passes the shared simulation license request to LSF. This requires preexec.sh to be configured as a PREEXEC for the queue the job is being submitted to; when configured properly, preexec.sh holds the job queued until the requested licenses are available.
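The example below is a sketch of how the variable could be set on the submission host; it assumes that any non-empty value is sufficient, as this guide does not specify a particular value.

For bash:
export ECL_MR_SCHEDULING=1

For c-shell:
setenv ECL_MR_SCHEDULING 1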

The following example illustrates how to configure preexec.sh as the queue's PRE_EXEC:

Begin Queue
QUEUE_NAME = mr_queue
PRE_EXEC   = /ecl/macros/preexec.sh
End Queue

Note: The preexec.sh file is automatically deployed under the /ecl/macros folder when installing ECLIPSE on Linux.
Note: License scheduling for MR jobs will only work on homogeneous clusters where the submission host has the same architecture as the execution hosts.

Parallel run
To run ECLIPSE in parallel mode you have to add the PARALLEL keyword to the RUNSPEC section of the
DATA file as this example shows:
PARALLEL
2 /

The keyword will then be automatically detected by ECLRUN at submission time and a valid MPI command
will be invoked.
On Windows, ECLRUN will only allow as many concurrent simulations as indicated by the EclNJobs
configuration file variable (see Configuration variables (p.34) for more details).
If EclNJobs equals two then either one two-way parallel job or two serial jobs can run simultaneously. If more jobs have been submitted, the excess ones will be held in the Local Queue until a sufficient number of jobs finish (see Local Queue (p.9) for more details).
When running in parallel on either Windows or Linux a host file can be used. The file should contain a list of machine names to run the job on; if it is not provided, the file will be automatically generated and filled in with the name of the local host, as sketched below.
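As an illustration only (the machine names below are hypothetical, and the exact syntax may depend on the MPI implementation in use), a host file is typically a plain list of machine names, one per line:

node01
node02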
From the 2009.2 version of ECLRUN onwards, the automatically detected number of processors needed
for the run can be overridden on the command-line by using the --np option (see Options (p.4) for more
details).
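For example, the command below (CASE is a hypothetical dataset) would request a four-way parallel run regardless of the number of processors detected automatically:

eclrun --np 4 eclipse CASE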
Note: The only way to run INTERSECT in parallel is to use the --np option.
See INTERSECT (p.25) for further information.

Simulation output cleanup


The 2009.2 ECLRUN executable introduces a new policy for local cleanup of old simulation output files. When starting a new simulation, locally or remotely, by default there is no cleanup of old simulation output files; they will simply be overwritten later by new simulation output. To enforce output file cleanup, use the
--localcleanup command-line option (see Options (p.4) for more details). When local cleanup is enabled, all non-input files with the same base name as the input DATA file will be deleted (this also includes files that are not simulation output but have the same base name).

When running a remote simulation with file transfer (a unique working directory is created for the run on the remote server), by default the remote simulation directory and all its contents are deleted when either the eclrun check command is executed and the simulation is finished, or the eclrun kill command is called. If the simulation was submitted with either --nocleanup or --noremotecleanup, the directory will be left intact.
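As an illustration (CASE is a hypothetical dataset and server a hypothetical remote host), the two cleanup behaviors could be requested as follows:

eclrun --localcleanup eclipse CASE
eclrun -s server --noremotecleanup eclipse CASE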

INTERSECT
General
The 2009.2 executable of ECLRUN is released with INTERSECT 2010.1 and introduces support for the INTERSECT simulator, the Migrator and the INTERSECT Command Editor. The three products require INTERSECT to be installed under the same directory as ECLIPSE.
INTERSECT is a black oil and compositional simulator based on next-generation architecture and solution
technologies. Refer to the INTERSECT documentation for more details.
When installing INTERSECT on Windows the eclrun.exe, plink.exe, pscp.exe and eclrun.config files are copied into the macros directory under the INTERSECT installation folder. When installing on Linux the files copied into the macros directory are the eclrun Python file, eclrun.config and the efeclrun directory. In both cases the installation is self-contained and deploys all the ECLRUN components needed for the tool to run properly.
Note: On Linux MPI is installed automatically, whereas on Windows Intel MPI must be installed separately.

INTERSECT simulation
The INTERSECT simulator extends the list of currently supported ECLIPSE and FrontSim simulators. The submission, check and kill commands are very similar; the only significant difference is the type of input file, which can be generated from an ECLIPSE DATA file.
INTERSECT uses the AFI file, an XML file, as its input format. The file does not allow you to specify a PARALLEL keyword, so the --np command line option has to be used instead.
An INTERSECT simulation may be run locally or through an LSF queue. When running locally on Windows
the simulation uses the Local Queue (jobs are kept queued until the requested number of processors are
available).
The example below shows how to run a serial case (CASE.AFI) locally:
eclrun ix CASE

Notice that the file extension can be skipped here since ECLRUN will always search for an AFI file.
The following example shows how the same test case (CASE.AFI) would be submitted to a remote queue
(lsfQueue) on a remote server (server):

eclrun -s server -q lsfQueue ix CASE

To check the current status of the simulation use the check command as you would for the other simulators:
eclrun check CASE

The only way to run in parallel is to use the --np command line option (see Options (p.4) for more details). INTERSECT currently supports Intel MPI on Windows and Linux, and Scali MPI on Linux (see Setting up Intel MPI (p.17) for more details).
The example below shows how to submit a two-way parallel INTERSECT case (CASE.AFI) to an LSF queue named lsfQueue on the local host:
eclrun -s localhost -q lsfQueue --np 2 ix CASE

For more details on parallel runs see Parallel run (p.24).


Note: A valid installation of the relevant MPI is required even when running the Migrator or the INTERSECT simulator in serial. This applies to all platforms supported by INTERSECT.
Note: There is no support for license aware scheduling when submitting to the LSF queuing system.
Note: A DATA file always has to be converted to an AFI file before running an INTERSECT simulation. This is a separate step that involves the Migrator. It is not possible to run the INTERSECT simulator directly against a DATA file.
Note: The kill and check commands work exactly the same way as with the ECLIPSE and FrontSim simulators.

Migrator
The Migrator is a conversion and verification tool. It is mainly used to convert ECLIPSE DATA files to the INTERSECT AFI format. It can also verify the correctness of an existing AFI file.
Running the Migrator is usually the first step of a simulation workflow: you first convert an existing ECLIPSE DATA file to an INTERSECT AFI file. Once the AFI file has been generated it can be edited with the INTERSECT Command Editor. After editing, the file can be run through the Migrator again in verification mode to make sure that the changes made to it are consistent.
The example below shows how to convert an existing DATA file named CASE.DATA to an INTERSECT AFI
file (CASE.AFI):
eclrun ecl2ix CASE.DATA

Upon successful completion of the conversion you will see a newly generated CASE.AFI file. In the above example the input file was specified with the DATA extension, which is not mandatory; if the file extension is omitted, ECLRUN will still search for the CASE.DATA file.
The following example shows how to verify an existing AFI file (CASE.AFI):
eclrun ecl2ix CASE.AFI

Notice that verification mode was triggered by the presence of the AFI file extension; the absence of the file extension would trigger conversion mode.
Note: Remote run of Migrator is not supported in ECLRUN.

INTERSECT Command Editor


The INTERSECT Command Editor is a tool that can be used to edit an existing AFI file, create a new AFI file from scratch, or convert an ECLIPSE DATA file to an AFI file and edit the result. All these workflows are supported in ECLRUN.
The following example presents how the INTERSECT Command Editor is launched to edit an existing AFI
file (CASE.AFI):
eclrun ixce CASE.AFI

Notice that the AFI file extension was specified along with the file name.
The next example shows how to launch the INTERSECT Command Editor against an AFI file that is generated from a DATA file 'on the fly':
eclrun ixce CASE.DATA

The last example presents how to start up the INTERSECT Command Editor without any file:
eclrun ixce

In this mode you choose the file to work with after the INTERSECT Command Editor has been launched.
Note: Remote run of the INTERSECT Command Editor is not supported in ECLRUN.

Installation Instructions
ECLRUN is installed automatically during ECLIPSE installation. ECLIPSE must be installed on both client
and server for remote simulation job submission to work. The last installed version of ECLIPSE must be
the same on both client and server.
ECLRUN is provided as a Python script or executable file. The Python script is generally for debugging
purposes or running from Linux/UNIX machines. The eclrun.exe file is provided by the ECLIPSE installer
when installing on a Windows machine.
The 2009.1 version of ECLRUN introduces backwards compatibility with 2008.x versions running on client Windows machines. This means that submission will not be terminated with the error message
"Incompatible/Different ..."
as long as the newer version of ECLRUN is installed on the cluster.


The backwards compatibility does not apply to versions older than 2008.1.
This effectively means that client machines and servers may be updated separately. If a 2008.x version of ECLIPSE is upgraded to 2009.x on a server, Windows clients will not be affected; once the server has been upgraded, clients may be upgraded gradually. However, upgrading ECLIPSE to 2009.x on any client will require the same version of ECLIPSE to be installed on the server.
As an alternative to using the RSH or SSH protocols to submit jobs to the Linux server, ECLRUN may be configured to use the EnginFrame web services interface from NICE (see Using EnginFrame Web Service (p.18) for more details). This configuration offers the advantages of improved system security, by limiting the services that are exposed on an open port, and of a web browser interface to monitor job status on the server without needing to log on. This option requires EnginFrame to be installed on the server, including the separately licensed web services and ECLIPSE plug-ins. If you have EnginFrame installed, you may verify that the web services interface is installed and correctly licensed and configured by browsing to the EnginFrame administration portal at http://<location>:<port>/enginframe/admin. EnginFrame may be obtained from NICE: see the website for contact details, http://www.nice-italy.com.

Windows PC installation - client


The installation includes plink.exe and pscp.exe. These are part of the PuTTY implementation of SSH. The full install is available from: http://www.chiark.greenend.org.uk/~sgtatham/putty/.
The installer automatically adds the ecl\macros directory to your system PATH variable (a system reboot
may be needed for the update to take effect). You can check this in the Windows Control Panel |
System | Advanced | Environment Variables.
If submitting to MS HPC, the MS HPC Pack needs to be installed on the local machine, and a shared drive with write access must be used as the work directory (no file transfer is allowed).
To run a remote simulation from Petrel, queues must be configured in Petrel: see the Petrel configuration
documentation.

Windows PC installation - server


For local submission to Microsoft HPC Server, three or four cluster-wide variables (depending on the user's license configuration) need to be set on the MS HPC head node. These are LM_LICENSE_FILE, SLBSLS_LICENSE_FILE, ECLPATH and F_UFMTENDIAN.

The cluster-wide variables are set on the cluster head node using the command:
cluscfg setenvs LM_LICENSE_FILE=port@server
cluscfg setenvs ECLPATH=\\headnode\ecl
cluscfg setenvs F_UFMTENDIAN=big
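If your license configuration also requires the optional SLBSLS_LICENSE_FILE variable, it can presumably be set the same way (the port and server values here are placeholders):

cluscfg setenvs SLBSLS_LICENSE_FILE=port@server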

They can be checked using the command:
cluscfg listenvs

For local submission to a Microsoft HPC Server you need to use shared drives (network shared drive
mounted or local folder shared) with write access since there is no file transfer to MS HPC.
To run parallel simulations either MpiPro or MS Mpi is required (when using an MS HPC head node as the local machine, MS Mpi is provided by the MS HPC Pack and used by default).

UNIX or Linux installation


ECLPYTHON is installed automatically. This is an unmodified version of the open source Python language interpreter. We include it under a different name to ensure that our script has the correct version of Python available, with no conflict with any other Python interpreter that may already be installed, and to avoid requiring that Python is individually installed on every node of a cluster. Python and its source code are available from www.python.org.

Either SSH or RSH daemons must be manually installed and enabled. We recommend SSH; for most Linux distributions SSH is installed and enabled by default, whereas RSH must usually be installed and configured manually. For most UNIX installations neither is installed; see the system documentation for details.
The /ecl/macros/eclrun.config file is created with its default contents (see Configuration (p.34)).
Edit the system cshrc or bashrc file, depending on the shell you use. The files can usually be found under the /etc folder (/etc/bashrc, /etc/csh.cshrc). Make sure that @eclrunsetup.csh (c-shell) or @eclrunsetup.sh (bash) is called. The setup script is required to set the environment variables and the path so that the Python program can be found when ECLRUN starts. The change to the system files should look something like the examples below, which assume that the default path of /ecl has been chosen for the ECLIPSE installation. If you have installed ECLIPSE elsewhere, edit accordingly.
For /etc/csh.cshrc add the following lines to the end:
setenv PATH /ecl/macros\:$PATH
if ( -r /ecl/macros/@eclrunsetup.csh ) then
source /ecl/macros/@eclrunsetup.csh
endif

For /etc/bashrc add the following lines to the end:

export PATH=/ecl/macros\:$PATH
if [ -r /ecl/macros/@eclrunsetup.sh ]; then
. /ecl/macros/@eclrunsetup.sh
fi

Note: The cshrc or bashrc (depending on the shell) files have to be configured on all machines in the
cluster.

Test deployment
This section contains instructions on how to perform a test deployment of ECLRUN on Linux. The aim of
the test deployment is to be able to test a specific version of ECLRUN as a part of ECLIPSE without
interfering with the production installation. The test deployment has to be performed by system
administrators.
Note: The User Configuration File (p.14) section describes how to get a custom copy of ECLRUN and the
eclrun.config file (see Configuration Files (p.36) for more details) for testing purposes (Linux only).
However, this is not an equivalent of the test deployment described here.
Why do a test deployment?
In production environments there is usually more than one version of ECLIPSE installed (/ecl/$VER). Many versions of the ECLIPSE executables are available for simulation, as every installation has a separate version folder $VER (/ecl/$VER/bin/$plat). However, there is only one copy of ECLRUN (/ecl/macros/eclrun) regardless of how many installed versions of ECLIPSE there are. Installing a new version of ECLIPSE overwrites the /ecl/macros folder and all its contents (including eclrun), and this can potentially stop certain versions of ECLIPSE from being launched properly. No production environment can allow such a risk.
In the 2009.1 release of ECLRUN, a version matching mechanism was introduced to ensure that the latest
version of ECLRUN is always the one on the server. This means that the server installation may be upgraded
to a newer version of ECLIPSE without upgrading clients (Windows machines running Petrel). Client
installations may be upgraded in due course. Having said that, downgrading the server installation to 2009.1
from 2010.1 would stop 2010.1 clients from submitting jobs to the cluster (see the Installation Instructions
(p.28) for more details).
Installing new software is always a risk, especially in production environments and therefore has to be
conducted with caution. A test deployment enables you to anticipate any potential issues without causing
disturbance to the production deployment.
When should you do a test deployment?

When introducing a new release of ECLIPSE.

When verifying a bug fix in ECLRUN.

When planning to make code changes in ECLRUN.

For general testing purposes.

Prerequisites

There must be a Linux server that is either a fresh installation (with no ECLIPSE simulator installed) or a
production server (with at least one version of ECLIPSE installed and being used by clients). In the case of
a fresh installation, verify that either the SSH or RSH daemon is installed and enabled.
Note: It is assumed that LSF (the Load Sharing Facility developed by Platform Computing) is already
installed on the Linux cluster and that it is configured to work with ECLIPSE (see Remote Submission
(p.7) for details on how to submit to LSF).
Steps for deploying a test installation
1. ECLIPSE installation.
a. Decide on the ECLIPSE installation folder for the test deployment (this must not be the same as
the production one!). For example choose /testecl if the production installation folder is /ecl.
b. The location has to be mounted across the cluster so that compute nodes can access it.
c. Perform a full ECLIPSE installation under the test deployment folder (e.g. /testecl).
Note: Refer to UNIX or Linux installation (p.29) for more information.
2. ECLRUN installation
a. Choose a user to be associated with the test installation.
b. If the test user's shell is c-shell then edit the .cshrc file as follows:
unsetenv ECLPATH
setenv PATH /testecl/macros\:$PATH
if ( -r /testecl/macros/@eclrunsetup.csh ) then
source /testecl/macros/@eclrunsetup.csh
endif

c. If the test user's shell is bash then edit the .bashrc file as follows:
unset ECLPATH
export PATH=/testecl/macros\:$PATH
if [ -r /testecl/macros/@eclrunsetup.sh ]; then
. /testecl/macros/@eclrunsetup.sh
fi

d. If the test user has a local copy of eclrun in the default home directory (e.g. /home/testuser), delete it.
Note: Remember to unset the ECLPATH environmental variable just for the test user as it may be
modified by global config files (/etc/csh.cshrc or /etc/bashrc) for all the users.
3. Testing
a. Log out and then log in as the test user to open a new session that processes the startup files (for example, su testuser).
b. Run the eclrun command and verify its build and release date to make sure that the correct test ECLRUN is being used.

c. Go to a folder that contains an ECLIPSE test dataset.
d. Run a simulation locally using the command eclrun eclipse TESTCASE.DATA and check that it works.
e. Run a remote simulation from Windows (see Remote Submission (p.7)) using the same test user id.
Note: The local client version of ECLRUN must not be newer than the remote one.
Note: The test user, once associated with the test installation, will not be able to use the production
installation.
Note: The above procedure describes how to set up just one user account for the test deployment. However, the same setup can be applied to more users if required for testing.
Note: Refer to Troubleshooting (p.41) for troubleshooting information.
Deleting the test installation
Once the test installation has been tested it should be removed. To do this, just delete its installation folder
(e.g. /testecl).
The test user's startup file (.cshrc or .bashrc) has to be re-edited to point to the production installation.

Known Limitations
The following known limitations exist at present:

The number of cores automatically detected when running a local simulation on Windows is limited to
4.

Spaces are not supported in the names of referenced files. This does not apply to directory names.

Large input files are only supported when submitting from Windows to 64 bit Linux.

When submitting a simulation from Windows to Linux, the dataset's name has to match its physical name in terms of case, because Python does not distinguish between lower and upper case names on Windows.

ECLRUN only works if any of these shells is used:

/bin/csh

/bin/tcsh

/bin/bash

Note: ECLRUN will not work with either /bin/ksh or /bin/sh

ECLRUN does not support passwords containing any of the characters listed below (type eclrun -h at the command line for more details):

'

The ~ notation used in Linux, for example using ~/path to represent /home/user/path, is not allowed
in the INCLUDE or SLAVE paths in the ECLIPSE dataset.

Configuration
From 2009.1 onwards ECLRUN supports a hierarchical structure of configuration files (eclrun.config, SimLaunchConfig.xml and a network configuration file). A configuration file is not required by ECLRUN: if none exists, ECLRUN works with default values. However, if a configuration file does exist it must be in the correct XML format, with valid variable names and values (described in detail in the following sections).
The hierarchy was introduced for two main reasons:

to allow an explicit administrative configuration for a user or a group of users,

to integrate with the new Simulation Launcher and enable users to control ECLRUN's configuration
through it.

The eclrun.config file is installed automatically during an ECLIPSE installation on a local or remote machine. The default location is ecl\macros (see UNIX or Linux installation (p.29) for more information about setting up a per-user configuration on Linux). The eclrun.config is an XML formatted file; the XML format is consistent across all the supported configuration files in the hierarchy.
The SimLaunchConfig.xml file is created when you first open the Simulation Launcher. The file can then
be accessed in either the $USERPROFILE\Application Data\Schlumberger\Simulation
Launcher directory or through the Simulation Launcher application.
As a result of hierarchical configuration an administrator may easily override a user's settings by creating
a configuration file under the All Users directory (the folder cannot be modified by users with limited
accounts) or even pointing to a network configuration file. To find out more about configuration hierarchy
and network file see Configuration files (p.36).
This version of ECLRUN introduces two new configuration variables: SimHistory, which controls the limit of jobs presented in the Simulation Launcher, and IntelMpiWinDir, which points to a customized installation folder of Intel MPI 3.2 on Windows.
Note: On the client machine, the only variables that have any effect are ChkLicOnWin, EclEnableFloGridSaveRestore, EfHttpAuth, EclNJobs, EfPath, EfPort, EfSsl, IntelMpiSsh, IntelMpiWinDir, NoLocalQueueAuth and SimHistory. All other variables are ignored locally and only affect remote submission. If, for example, the FileMode variable is set to 'share' in the local configuration file and to 'transfer' in the remote one, files will always be transferred.
Note: The names of configuration variables should follow Camel Notation.

Configuration variables
The meaning of all supported configuration file variables is presented below.

ChkLicOnWin - accepts boolean values only. Applies to Windows only. If False (default), license availability is not checked prior to running a local Windows simulation. If True, the ecl\macros\slblmstat.exe tool is used to determine whether all the licenses requested for the run are available (if any license is missing, the job is held in the queue until it becomes available).

EclEnableFloGridSaveRestore - enables or disables the 'Save and Restore' option in FloGrid. Accepts boolean values only. Defaults to False, which means that the 'Save and Restore' option is disabled.

EclNJobs - number of jobs that can be run at once on a user's local Windows machine. EclNJobs
defaults to the real number of CPU cores available on the user's machine running Windows.

EfHttpAuth - accepts boolean values only. Applies to remote submissions using the EnginFrame
protocol. If True (default) then HTTP authentication is used, otherwise the EnginFrame internal
authentication mechanism is used.

EfPath - the last part of the web address of the EnginFrame Web Service, following the port number (see EfPort). Defaults to enginframe.

EfPort - port number under which the EnginFrame Web Service is available on the remote server.
Defaults to '8080'.

EfSsl - indicates whether a secured or unsecured connection to the EnginFrame Web Service is required. Choices are:

TRUE - secured connection (https),

FALSE - unsecured connection (http).

Defaults to FALSE.

FileMode - choices are:

BOTH - use shared drive if possible else transfer file,

TRANSFER - only file transfer,

SHARE - only shared drives.

IntelMpiSsh - accepts boolean values. If True (default) then SSH is used for communication between
Intel MPI processes, otherwise RSH is used.

IntelMpiWinDir - specifies a customized installation path of Intel MPI 3.2 on Windows. ECLRUN first tries to find the MPI executables automatically, and if it fails it then searches under the location pointed to by the variable. The default value is C:\Program Files\Intel\MPI-RT\3.2.2.006 (Program Files (x86) on 64-bit systems).

LsfLicenses - choices are:

TRUE - use license aware scheduling,

FALSE - do not use license aware scheduling.

From 2010.1 onwards this option controls license aware scheduling in MS HPC as well as LSF.

NoLocalQueueAuth - accepts boolean values only (defaults to True). Applies to Intel MPI runs on
Windows. If True then parallel processes can only run on a local machine and therefore the user
password does not have to be registered, otherwise use wmpiregister to register your password.

QueueSystem - syntax: server_1:[LSF|CCS],...,server_n:[LSF|CCS]. If the specified submission server is on the list then ECLRUN ignores the default mechanism of choosing the remote queue system. This variable is intended for future expansion. In this version of ECLRUN it has no effect, as this version always uses LSF when submitting to a remote queue.

Shell - the system shell which should be used by the bsub command when submitting to LSF. The Shell variable defaults to the remote user's default shell.

SimHistory - maximum number of rows in the history file. If the number is exceeded, the earliest finished and failed simulations are removed from the file to compensate for the excess. ECLRUN will not complain if the number of finished and failed simulations available for removal is not sufficient (the mechanism does not remove jobs that are not started, running or queued). This affects the length of the list of simulations in the Simulation Launcher, which always reflects the current state of the file. Defaults to 100 rows.

TempRunDir - remote work directory. A temporary directory for each run will be created in this location
if file transfer is performed. This replaces the --directory parameter in earlier versions of ECLRUN.

WindowsDirs - a comma-separated list of Linux paths which indicates the order of search for shared drives on the remote machine.

As ECLRUN reads the CONFIG file it replaces %VAR% with the value of the VAR environment variable if it exists. This applies to the WindowsDirs and TempRunDir variables only.
Note: If the same variable is defined in at least two different configuration files then the one which is higher
in the hierarchy will take precedence over the lower priority one (See Configuration files for more details on
hierarchy).
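As an illustrative sketch, a minimal eclrun.config that overrides two of the variables described above might look like this (the values are examples only):

<Configuration>
  <Eclrun>
    <EclNJobs>4</EclNJobs>
    <SimHistory>200</SimHistory>
  </Eclrun>
</Configuration>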

Configuration Files
As a result of integration with the Simulation Launcher, ECLRUN not only supports its own configuration file (eclrun.config) but also those that come with the Simulation Launcher itself. This allows the user to control the behavior of ECLRUN through the Simulation Launcher GUI.
The table shows the hierarchical structure of configuration dependencies.
Priority (from highest to lowest):

1. Network - network_config.xml. The network_config.xml file name can be defined in SimLaunchConfig.xml on either the All users or the Current user level.

2. All users - $ALLUSERSPROFILE\Application Data\Schlumberger\Simulation Launcher\SimLaunchConfig.xml. $ALLUSERSPROFILE is an environmental variable which on Windows XP typically points to the C:\Documents and Settings\All Users directory.

3. Current user - $USERPROFILE\Application Data\Schlumberger\Simulation Launcher\SimLaunchConfig.xml. $USERPROFILE is an environmental variable which on Windows XP typically points to the C:\Documents and Settings\<userid> directory, where <userid> represents the name of the currently logged-in user.

4. Machine - C:\ecl\macros\eclrun.config. The drive letter may be different depending on the specific installation.

Table 5.1: Hierarchy of configuration files

Example
The table demonstrates the way the final value of a variable is calculated ('-' means the variable is not defined at that level).

Variable    Default     eclrun.config  SimLaunchConfig.xml   Network file  Final value
EfPath      enginframe  ef_path        Engin (Current user)  -             Engin
SimHistory  100         -              50 (All users)        10            10
EfPort      8080        -              -                     -             8080

Comments:
- EfPath: configuration on the Current user level overwrites the machine level.
- SimHistory: the network configuration file overwrites everything.
- EfPort: the default value is used unless it is redefined in any of the configuration files.

Table 5.2: Calculation of the final value of a variable


A variable defined high in the hierarchy overrides the corresponding variable defined lower in the hierarchy. The mechanism works on a variable-by-variable basis.
The hierarchy only applies to Windows. On Linux there is just eclrun.config located under ecl/macros (a user may also define a private configuration file in a customized location; see User configuration file (p.14) for more details).
Note: From the 2009.1 version, ECLRUN does not require a configuration file to work properly; it will work even if no configuration file is present.
[Figure: the algorithm for reading in configuration information]

eclrun.config
This is an XML file with tags following Camel Notation. The document-level element is named
<Configuration/> with one child element named <Eclrun/>. The <Eclrun/> element is a parent of all
supported variables.
eclrun.config is automatically created under the ecl\macros directory on both Linux and Windows
during ECLIPSE installation. On Linux it is the only configuration file.

ECLRUN always searches for eclrun.config in the same location as the eclrun executable; by default this is the macros directory. For debugging purposes a separate copy of ECLRUN and eclrun.config can be made. It will work as long as the specified directory is on the system path before the installation folder. See the User configuration file (p.14) for more details.

Default contents of ecl/macros/eclrun.config on Linux:


<Configuration>
  <Eclrun>
    <FileMode>both</FileMode>
    <WindowsDirs>%HOME%</WindowsDirs>
    <TempRunDir>%HOME%</TempRunDir>
    <LsfLicenses>False</LsfLicenses>
    <EfPort>8080</EfPort>
    <EfPath>enginframe</EfPath>
    <EfSsl>False</EfSsl>
  </Eclrun>
</Configuration>

Default contents of ecl/macros/eclrun.config on Windows


<Configuration>
  <Eclrun>
  </Eclrun>
</Configuration>

SimLaunchConfig.xml
This is an XML file with tags following Camel Notation. The document-level element is named <Configuration/> with potentially many child elements, one of which is named <Eclrun/>. ECLRUN variables are defined as its sub-elements (as in eclrun.config).
This file is created when the Simulation Launcher is started for the first time. Variables defined here take precedence over corresponding variables in eclrun.config. If the Simulation Launcher is not going to be used, then the ECLRUN configuration should be controlled by eclrun.config.
There are two more XML tags that are used by ECLRUN:

NetworkFile,

SimLauncherQueues

The NetworkFile element is used for pointing to a network configuration file. If it is present, the network file settings override the local ones accordingly. See the Network configuration file (p.14) for more details.
Example
The example below shows the way network_config.xml file can be linked from the
SimLaunchConfig.xml file:
<NetworkFile>\\server\dir\network_config.xml</NetworkFile>

The SimLauncherQueues XML element is used to define a list of queue names for remote servers. Unlike the other settings, if different queue lists are defined in different configuration files in the hierarchy they are merged rather than overridden. The list of queues is used by ECLRUN in interactive mode to provide the user with predefined queue names when submitting to a remote machine with a queue system. This does not stop the user from choosing a queue from outside the list. See Interactive run (p.20) for more details.

The example below shows a list of two queues defined on two different servers:
<SimLauncherQueues>
  <Queue name="Xeon" options="" remotequeues="Xeon" server="remote_server"/>
  <Queue name="Opteron" options="" remotequeues="Opteron" server="remote_server2"/>
</SimLauncherQueues>

See the "Simulation Launcher User Guide" for more information about the SimLaunchConfig.xml file.
Note: The presence of an undefined variable will cause ECLRUN to terminate with an error message.
Note: Variable names are case sensitive.
Note: It is recommended that you use Simulation Launcher to modify ECLRUN's configuration.

Troubleshooting
Remote simulation, because it involves more than one computer system, is a complex process, and this complexity means there are more opportunities for things to go wrong. This section is designed to help you solve problems if they occur.

Command not found

If you receive variants on Command not found or No such file or directory errors, follow the steps here to check which command is missing.
1. Check that ECLRUN is installed on your local PC.
a. Open a command prompt window (click the Windows Start button, choose Programs | Accessories
| Command Prompt).
b. At the command prompt, type eclrun. You should see the following (the build date may differ):
C:\>eclrun
Usage:
eclrun [options] PROGRAM FILE
where PROGRAM is one of:
eclipse, e300, frontsim
or check, to check on status of previously submitted job
or cleanup, to force cleanup of remote working directory
or kill, to abort a queued or running job
FILE is the name of the data file for the simulation job
It may include leading path and trailing extension
or
eclrun [options] PROGRAM [DIRECTORY]
or
eclrun [options]
where PROGRAM is one of:
office, floviz, flogrid, etc.
DIRECTORY is the working directory
default=current working directory
Use eclrun --help for more information
CONFIGURATION FILE(s) - see manual for details
( Release: 2009.1 - build date: 2009-04-02 )
C:\>

c. Similarly, type plink and pscp at the command prompt. For each command, you should receive a
usage help message.
d. Finally, type eclrun testsys 2009-04-02. You should receive the following response (the build date may differ):
C:\>eclrun testsys 2009-04-02
@eclrun@2009-04-02
@eclrun@pc
@eclrun@\
@eclrun@CCS

Note: If you get a command not found error message for any of the above commands, your
installation is incomplete. Follow the instructions in Installation (p.28).
2. Check that ECLRUN is installed on your remote server.
a. Log on to your server. (If you do not know how, see Communication or connection errors (p.42)).
b. At the command prompt, type eclrun testsys 2009-04-02.
You should receive a response similar to the following:
% eclrun testsys 2009-04-02
@eclrun@2009-04-02
@eclrun@linux
@eclrun@/
@eclrun@LSF
%

c. If you do not receive the above response, ECLRUN is not correctly installed on your server. Follow
the instructions in Installation (p.28).
3. Check that ECLPYTHON has been installed on your UNIX or Linux server. If you receive the following
error:
% eclrun testsys 2009-04-02
/usr/bin/env: eclpython: No such file or directory
%

This indicates that your environment settings have not been set up to find ECLPYTHON.
a. You can verify this by typing:
% which eclpython

If you receive the following response, ECLPYTHON is not on your path:
eclpython: Command not found.
%

b. Follow the instructions in Installation (p.28)

Communication or connection errors


If you receive error messages of the form Unable to connect to myserver, try the following tests.
Note: If this is the first time you have tried to submit to a particular server from your PC, try again. First-time connections have to store the remote server's "signature" in the Windows registry, and this sometimes fails. Often, simply repeating the submission works.
1. Check that your PC can communicate with your server.

2. If you are using the default password protocol.


a. Open a command window and type plink servername.
b. You will be prompted for your userid and password.
c. If you have not connected to the server before, you may also be asked for permission to store the
server's signature in the local cache. Answer yes.
C:\>plink myserver
login as: user1
user1@mypc's password:
Last login: Tue Jan 10 12:01:42 2006 from mypc.bigoil.com
% exit
C:\>

If you get the error message below when trying to connect to the remote server with the plink command, despite using the correct credentials and using version 0.60 of plink, you need to downgrade to plink version 0.58:
Unable to Authenticate

To downgrade, install PuTTY 0.58 on your machine (3rdparty\PC\resource\Putty\putty-0.58-installer.exe). Also copy plink.exe and pscp.exe from the PuTTY installation folder to your ecl\macros directory on Windows.
If this does not solve the problem, contact Schlumberger.
3. Check that your PC can communicate with your server if you are using the unsecure protocol.
a. If you have configured Petrel and ECLRUN to use the unsecure protocol, by adding the option --protocol unsecure to the submit command in the Petrel global configuration file, you should test communication as follows:
C:\>rsh myserver eclrun testsys 2009-04-02
@eclrun@2009-04-02
@eclrun@linux
@eclrun@/
@eclrun@LSF

Note: The Microsoft Windows version of the RSH command is very slow. You are strongly
recommended to use the secure protocol, which is both more secure and much quicker.
If you receive an error message of the form:
myserver.bigoil.com: Permission denied.
rsh: can't establish connection

This means that your server is not configured to allow you to connect. There are two possible causes
for this:

Your .rhosts file on the server does not list your PC name.

Your server is not configured to allow connections using this protocol.

b. Consult your system administrator if you need assistance resolving these problems.
4. Your simulation runs leave files behind on the server.
If running a remote simulation in file transfer mode, ECLRUN copies your simulation input to a temporary
directory on the server, and copies the results back to your PC when you issue the eclrun check
command. If it detects that the run is finished, it will also remove the temporary directory on the server,
after it has copied the results back to the PC. If files are left behind on your server, it could be because:
a. An error occurred during the submission, simulation, or the fetching of results. In this situation,
ECLRUN does not delete any files so that you can determine the cause of the error, and salvage any
useful results from the simulation. If you know why the error occurred, it is safe to delete the temporary
directory.
b. You did not ask ECLRUN to load results after the run had finished. ECLRUN only deletes the
temporary files on the server when they have all been downloaded to the local PC after the run has
finished. If you look at the results of a simulation half way through, decide it is wrong in some way,
then go on to submit another case without ever bothering to kill or fetch the final results of the first
case, they will stay on the server. You should always fetch results or abort simulations to ensure your
server does not fill up with "orphaned" simulations.
5. ECLRUN fails to fetch results from the server.
You issue the ECLRUN check command and nothing happens. There are several possible causes for
this:
a. You are using shared network drives, so there is no need to transfer data back. Simply load the
results directly into ECLIPSE Office or Petrel.
b. The simulation is very large. If the simulation output files are very large, it may simply be taking a
long time to download them from the server.
c. The results have downloaded but Petrel is idle. There is a known issue in Petrel 2007.1 that if you
are working in another window, Petrel goes to sleep and does not always detect when it should start
loading results. Simply move the mouse, and if it is not the active application, click on the Petrel main
window title. It should wake up and load the results.
d. The ECLRUN state file has been deleted. The eclrun application stores information about a simulation
run in progress in a file called MYRUN.ECLRUN. This stores things such as the name of the server,
and the name of the directory on the server where the simulation is running. If this file is deleted
during the run, ECLRUN will be unable to download the results.
6. Do not delete the .eclrun file during a simulation run. If this has happened:
a. Open a command prompt window on the client and change to the directory where the
simulation .DATA file is stored.
b. Use ftp or a similar file transfer tool to copy the output files from the server to this directory on the PC (see the pscp sketch after this list). The files on the server will be found in a directory called userid_myrun_yyyymmddhhmmss.
c. Remove the files from the server once they have been successfully transferred back to the client.
d. Load the results into Petrel or ECLIPSE Office.
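For step b, the pscp tool shipped with ECLRUN can be used instead of ftp; the sketch below copies the whole run directory back to the current folder (the user name, server name and directory name are hypothetical):

pscp -r user1@myserver:user1_myrun_20090331085641 .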

Local queue errors


1. Unable to access the local queue for 240 second(s). Another instance of ECLRUN
has hung or too many instances of eclrun are running now. If this happens, do one of the following:
a. There are many instances of ECLRUN running simultaneously. Each of them needs to check the state of the local queue to launch its job, and because of the number of ECLRUN instances this takes too much time. To solve this, wait until the number of running ECLRUN instances visibly decreases and then submit again.
b. One of the instances of ECLRUN that is running has hung when checking the availability of the
requested resources. This instance of ECLRUN must be killed manually. To find out which instance
has hung you need to get the pid number from <TEMP|TMP>\eclrun.mutex (see Local queuing
system (p.9) for more details).
2. EclNJobs variable is set to 5 meaning that only 5 job(s) can be run (6
requested). Check DATA file and ECLRUN configuration files.
Your machine does not allow more than five jobs to be run at once. This is determined by the
EclNJobs configuration file variable which defaults to the number of CPU cores available (limited to 4).
To solve this error, do one of the following:
a. Change the number of CPUs required by the DATA file by editing the PARALLEL keyword,
b. Edit the number of EclNJobs in one of the ECLRUN configuration files. See Configuration (p.34) for
more information.
3. EclNJobs variable contains non integer value. Check your configuration files on
Windows.
This message means that the EclNJobs configuration file variable is initialized with a non-integer value.
Only positive integers are accepted.
a. Edit the number of EclNJobs in one of the ECLRUN configuration files. See Configuration (p.34) for
more information.
4. Neither the TEMP nor a TMP environmental variable was detected on this machine. ECLRUN cannot create a mutex file, which is used for synchronization, because there is no way to get the location of the Windows temp directory. To get this, ECLRUN checks either the TMP or the TEMP environment variable, which should point to the temp directory on Windows.
a. To solve this problem create either a TMP or a TEMP environment variable pointing to an existing temp directory, with write access granted for all Petrel users.
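For example, a persistent user-level TEMP variable can be created from a command prompt with the standard Windows setx utility (the path below is illustrative; a new command prompt is needed before the change takes effect):

setx TEMP C:\Temp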
5. Unable to check license availability. Check if slblmstat.exe is present in
macros folder.
This error message appears when ECLRUN fails to execute 'slblmstat.exe' which should be in the
macros directory. The executable depends on lmutil.exe. Both files have to exist in the macros folder.
slblmstat.exe and lmutil.exe are deployed automatically in the macros folder when ECLIPSE is
installed.
a. To solve this problem you need to reinstall ECLIPSE.
6. Number of jobs that can be run on this machine is less or equal 0. Check your
ECLRUN configuration file(s).

A negative value was assigned to the EclNJobs configuration file variable. Only positive integers are
accepted.
a. Edit the EclNJobs in one of the ECLRUN config files and assign a positive integer value. See
Configuration (p.34) for more information.

INTERSECT and Migrator

1. Unable to find executable for program(s): '['afiHandler']' version ...
a. afiHandler is an INTERSECT tool that sits in the INTERSECT bin directory. The tool is called by ECLRUN to obtain a list of simulation input or output files (it is always called by ECLRUN when running INTERSECT simulations).

Submission to MS HPC
1. Specified directory is not a network path: D:\. Only network paths with read/write access are allowed when submitting to Microsoft HPC.
When submitting a job to MS HPC Server, the user needs to use a shared network drive with write access as the work directory; only shared drives are supported and no file transfer is permitted. To solve this issue, do one of the following:
a. Share your work directory and set "change" permissions for it.
b. Mount a network drive with write access.
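For example, a network drive can be mounted from a command prompt as follows (the drive letter and share name below are hypothetical):

net use Z: \\server\shared_drive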
2. Cannot create test file: "user_dataset_20080313140934084" at: "\\server\shared
drive". Only network paths with read/write access are allowed when submitting to
Microsoft HPC (total path length must not exceed 260 characters). or
Cannot open dataset.ECLRUN file. Check your permissions to this directory. If
your operating system is MS Windows then check if the absolute path is not longer
than 260 characters.

Such error messages mean that the total path length of one of ECLRUN's output files exceeds the Windows path length limitation, so the file cannot be created. This may appear when running a simulation for a dataset with a very long filename. Remember that ECLRUN needs to create some temporary files whose names may be even longer than the dataset's filename. To solve this issue, do one of the following:
following:
a. Make the dataset filename shorter.
b. Move your dataset to another folder to get total path length under 260 characters.

Remote Submission
1. No write access to directory: /home_dir on the server: lsf-headnode. Check your
configuration in eclrun.config on the server.
This error message means that the user has no write access to the remote work directory pointed to by
TempRunDir or WindowsDirs in eclrun.config on the remote server. To solve this problem do one
of the following:
a. If you use your own remote copy of ECLRUN or eclrun.config, make sure that current
configuration points to a directory(ies) with write access.

b. If you use the global ECLRUN or eclrun.config in the macros directory, contact your network
administrator to point the remote work directory to a location with write access to it.
2. Resources on: machinename are not sufficient to create the ZIP file.
The internal Python zipfile library used by ECLRUN works entirely in memory.
a. Increase the amount of memory on your machine.
3. Unable to create a ZIP file. The size of input files on Linux cannot exceed 2GB.
Large input files are only supported on Windows and Linux 64 bit. When submitting large input files to
Linux 32 bit, use shared drives. See Shared drives (p.15) for more details.
4. Unable to import efeclrun library, this means that the EnginFrame support is
disabled. See ECLRUN manual for more details.
The efeclrun library is built into eclrun.exe. However, it is not built into eclrun on the remote Linux machine, where it is required to be in the macros directory. The library is copied along with ECLRUN during installation.
a. Reinstall ECLIPSE on the remote machine to re-copy the eclrun script and the efeclrun library.
5. Unable to connect to the EnginFrame Web Service at: http://remoteserver:port/path. Please make sure that your -s/--subserver option points to a correct IP or name of an existing server running the EnginFrame Web Service (the missing parts of the URL will then be automatically completed based on the EfSsl, EfPort and EfPath variables set in eclrun.config) or to a full URL of the Web Service. The EnginFrame Web Service by default gets installed under: http://SERVER:8080/enginframe. See the ECLRUN manual for more details.
This means that either the Web Service is not installed on the remote server or the address typed for -s or --subserver is incorrect.
a. Make sure that the EnginFrame Eclipse Plug-in 3.0.0 or greater is installed on the remote server.
b. Type the full address of the Web Service in your web browser to check that it displays the EnginFrame
welcome page and then correct your local eclrun.config file accordingly (see Configuration (p.34)).
6. EnginFrame Error - ...
Any error message of this form comes directly from the EnginFrame system.
a. Contact either NICE, who develop EnginFrame, or Schlumberger. In both cases send the ECLRUN_DBG file.
7. Invalid input DATA file(s) - DATACHECK keyword specified in some of input files
only. If you specify DATACHECK keyword then it must be done in MASTER and all
SLAVES datasets.
The error message may appear only when a reservoir coupling case is being submitted. If DATACHECK
is specified under the LICENSES keyword in any of the DATA files, it must be repeated in all of the files.
8. Incompatible/different versions of eclrun detected - client_build_date/
server_build_date: 2007-05-18/2009-04-02.
Incompatible versions detected. For 2007.2 versions and backwards the local and remote versions must
be matched. The 2009.1 version introduces more flexibility into the version matching mechanism which
allows 2008.x clients to work with 2009.x server. In this particular case the local ECLIPSE should be
upgraded either to 2008.x or 2009.1.
9. FileMode variable not specified or incorrect in your ECLRUN configuration file
(s). Check your debug file for more details.
2008.x versions of ECLRUN require the FileMode variable to be set up in the remote configuration file. From 2009.x onwards it is not required.
10. No shared network path was found (No file transfer allowed). Make sure that
WindowsDirs variable in ECLRUN configuration file(s) points to correct
locations. Check your debug file for more details.
If FileMode=SHARE and no shared drive has been detected, the simulation cannot start due to a lack
of input files. Either make sure that the shared drive is mounted correctly (see Shared drives (p.15)) or
replace SHARE with TRANSFER or BOTH (see Configuration Variables (p.34) for more details on the
FileMode variable).
11. Only a subset of ASCII character set is allowed in password. See the built-in
help as well as the ECLRUN manual for more details.
Here is the subset of characters that are supported in passwords (includes a white space):
!#$%&()*+,-./:;<=>?@[]^_`{|}~ abcdefghijklmnopqrstuvwxyz
ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789

12. Unable to locate Intel MPI installation folder on this machine. See
troubleshooting section of the ECLRUN manual for more details.
This error message applies to Windows only. ECLRUN first looks in the Windows registry to find the
Intel MPI installation location. If that fails, ECLRUN tries to pick up the latest version from the default location
(if it exists) under:
C:\Program Files\Intel\MPI-RT\

If the libraries are still not found at this stage, ECLRUN searches under the location pointed to by the
IntelMpiWinDir configuration file variable, as in the example below. To learn more about the variable see Configuration
Variables (p.34).
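
For example, a configuration file entry such as the following could point ECLRUN at a non-default Intel MPI installation. The path is illustrative only and the Variable=Value form is assumed; substitute the actual location on your machine:

IntelMpiWinDir=D:\Tools\Intel\MPI-RT\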
13. Unable to get the list of serverwide variables on <schedulername>. Check if the
Microsoft HPC Pack is installed properly.
The installation process was not completed on the MS HPC server. Three cluster-wide variables need
to be set up on the server:
LM_LICENSE_FILE, ECLPATH and F_UFMTENDIAN.

If any of the above variables is not set, ECLRUN will not be able to submit any job to that MS
HPC cluster. From 2010.1 onwards there is a fourth (optional) cluster-wide variable named
SLBSLS_LICENSE_FILE; if it exists, it takes precedence over LM_LICENSE_FILE. See
Windows PC install - server (p.28) for more details, and the example below for one way to set the variables.
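
On a Microsoft HPC head node, cluster-wide environment variables can typically be listed and set from a command prompt with the HPC Pack cluscfg utility. The commands below are a sketch and the values are placeholders; substitute the license server and paths used at your site:

cluscfg listenvs
cluscfg setenvs LM_LICENSE_FILE=port@licserver ECLPATH=\\fileserver\ecl F_UFMTENDIAN=big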
14. The filename: 'NonASCIICaseName' contains unsupported characters (detected in
input file). Only ASCII characters are supported in filename.

Only ASCII characters are allowed in the input filename and in any included filenames. A variation
of this error message is displayed when a non-ASCII name is used in an included file name
(INCLUDE keyword).
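
As a quick illustration (again, not part of ECLRUN itself), a filename can be checked for non-ASCII characters in Python:

def is_ascii_filename(name):
    # True only if every character is in the 7-bit ASCII range.
    return all(ord(ch) < 128 for ch in name)

print(is_ascii_filename("CASE1.DATA"))    # True
print(is_ascii_filename("ČASE1.DATA"))    # False: 'Č' is non-ASCII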

Index
A
afiHandler ................................................................... 46
Autodetect the latest version of Intel MPI ................... 17

C
Command syntax ......................................................... 4
Configuration .............................................................. 34
Configuration files ....................................................... 34

D
Debug file ..................................................................... 4

E
eclrun.config ............................................................... 34
EFTOKEN Authorization ............................................ 18
EnginFrame Connection Protocol .............................. 18

F
File transfer ................................................................ 15

G
General Options ........................................................... 4

H
Host file ........................................................................ 4

I
Installation
  Test deployment ..................................................... 30
Installation instructions ............................................... 28
Intel MPI ..................................................................... 17
Interactive run ............................................................. 20
INTERSECT ............................................................... 25
INTERSECT Command Editor ................................... 27

L
Limitations .................................................................. 32
LSF ............................................................................. 19

M
Migrator ...................................................................... 26
Multiple Realization .................................................... 23
Mutex file ...................................................................... 4

N
Network configuration file ........................................... 14
New features ................................................................ 2

P
Parallel on Windows ................................................... 17
Parallel run ................................................................. 24
Password restrictions ................................................. 32

R
Removing old simulation output files .......................... 24

S
Shared drives ............................................................. 15
SimLaunchConfig.xml ................................................ 34

T
Test deployment ......................................................... 30
Troubleshooting .......................................................... 41

U
User configuration files ............................................... 14