
User Guide

CMOST
Enhance & Accelerate Sensitivity Analysis,
History Matching, Optimization
& Uncertainty Analysis
Version 2013



Computer Modelling Group Ltd.












This publication and the application described in it are furnished under license
exclusively to the licensee, for internal use only, and are subject to a confidentiality
agreement. They may be used only in accordance with the terms and conditions of
that agreement.
All rights reserved. No part of this publication may be reproduced or transmitted in any
form or by any means, electronic, mechanical, or otherwise, including photocopying,
recording, or by any information storage/retrieval system, to any party other than the
licensee, without the written permission of Computer Modelling Group.
The information in this publication is believed to be accurate in all respects. However,
Computer Modelling Group makes no warranty as to accuracy or suitability, and does
not assume responsibility for any consequences resulting from the use thereof. The
information contained herein is subject to change without notice.

Copyright 1987-2014 Computer Modelling Group Ltd.
All rights reserved.

The license management portion of this program is based on:

Reprise License Manager (RLM)
Copyright 2006-2014, Reprise Software, Inc.
All rights reserved

Trademark of Computer Modelling Group Ltd.
Other company, product and service names are the properties of their respective
owners.

Computer Modelling Group Ltd.
200, 1824 Crowchild Trail N.W.
Calgary, Alberta Canada T2M 3Y7


Tel: (403) 531-1300 Fax: (403) 289-8502 E-mail: cmgl@cmgl.ca




Contents
1 What's New in CMOST 1
1.1 What's New in CMOST 2013.12 ...................................................................... 1
1.1.1 Differential Evolution ......................................................................... 1
1.1.2 Copy Parameter Data to Other Parameters ......................................... 1
1.1.3 Resolve Reuse Pending for Multiple Experiments ............................. 1
1.1.4 Input Section Head Nodes .................................................................. 2
1.1.5 Editing Master Dataset Using Builder ................................................ 2
1.2 What's New in CMOST 2013.11 ...................................................................... 2
1.2.1 Basic Concepts .................................................................................... 2
1.2.2 User Interface ...................................................................................... 4
1.2.3 Study Type and Engine ....................................................................... 5
1.2.4 Creating and Editing Input Data ......................................................... 9
1.2.5 Managing Experiments ..................................................................... 10
1.2.6 Reusing and Restarting ..................................................................... 10
1.2.7 Proxy Dashboard .............................................................................. 11
1.2.8 Viewing and Analyzing Results ....................................................... 11
1.2.9 Converting Old CMOST Files to New CMOST Files ...................... 11
2 Welcome 15
2.1 Introduction ..................................................................................................... 15
2.2 What You Need to Use CMOST ..................................................................... 15
2.2.1 General .............................................................................................. 15
2.2.2 Configuring Launcher and CMOST ................................................. 15
2.2.3 Computers and Licenses ................................................................... 15
2.3 About this Manual ........................................................................................... 15
2.4 Getting Help .................................................................................................... 16
3 CMOST Overview 17
3.1 Introduction ..................................................................................................... 17
3.2 What is CMOST? ............................................................................................ 17
3.2.1 Sensitivity Analysis (SA).................................................................. 17
3.2.2 History Matching (HM) .................................................................... 18
3.2.3 Optimization (OP) ............................................................................ 18
3.2.4 Uncertainty Assessment (UA) .......................................................... 18



3.3 Generalized CMOST Study Process ............................................................... 19
3.4 CMOST Components and Concepts ............................................................... 20
3.4.1 Project Components ......................................................................... 20
3.4.2 Base Files ......................................................................................... 20
3.4.3 File System ....................................................................................... 21
3.4.4 Study Types and Engines ................................................................. 23
3.4.5 Study Workflow ............................................................................... 23
3.5 CMOST Master Dataset (.cmm) ..................................................................... 24
3.6 CMOST User Interface ................................................................................... 30
3.7 Best Practices for Using CMOST ................................................................... 32
4 Getting Started 35
4.1 Introduction ..................................................................................................... 35
4.2 Opening and Navigating CMOST .................................................................. 35
4.3 Opening a CMOST Project ............................................................................. 37
4.4 Creating a CMOST Project ............................................................................. 39
4.5 Using the Study Manager................................................................................ 41
4.5.1 To Create a New Study .................................................................... 41
4.5.2 To View a Study ............................................................................... 46
4.5.3 To Change the Display Name of a Study ......................................... 46
4.5.4 To Add an Existing Study to the Current Project Session................ 46
4.5.5 To Load/Unload a Study .................................................................. 46
4.5.6 To Exclude a Study .......................................................................... 47
4.5.7 To Import Data from a Study ........................................................... 48
4.5.8 To Copy a Study ............................................................................... 49
4.6 Common Screen Operations and Conventions ............................................... 49
4.6.1 Buttons and Icons ............................................................................. 49
4.6.2 Plots .................................................................................................. 49
4.6.3 Names ............................................................................................... 52
4.6.4 Required Fields ................................................................................ 52
4.6.5 Default Field Values ......................................................................... 53
4.6.6 Tab Display ...................................................................................... 53
4.6.7 Tables ............................................................................................... 54
4.6.8 Validation tab ................................................................................... 57
4.7 Closing CMOST ............................................................................................. 57
5 Creating and Editing Input Data 59
5.1 Introduction ..................................................................................................... 59
5.2 General Properties ........................................................................................... 59
5.2.1 General Information Area ................................................................ 59
5.2.2 Base SR2 Information Area ............................................................. 60
5.2.3 Field Data Information Area ............................................................ 60
5.2.4 Advanced Settings ............................................................................ 61



5.3 Fundamental Data............................................................................................ 61
5.3.1 Original Time Series ......................................................................... 62
5.3.2 User-Defined Time Series ................................................................ 64
5.3.3 Property vs. Distance Series ............................................................. 67
5.3.4 Fluid Contact Depth Series ............................................................... 70
5.4 Parameterization .............................................................................................. 72
5.4.1 Parameters ......................................................................................... 72
5.4.2 Parameter Correlations ..................................................................... 78
5.4.3 Hard Constraints ............................................................................... 80
5.4.4 Pre-Simulation Commands ............................................................... 82
5.5 Objective Functions ......................................................................................... 88
5.5.1 Characteristic Date Times ................................................................. 88
5.5.2 Basic Simulation Results .................................................................. 90
5.5.3 History Match Quality ...................................................................... 91
5.5.4 Net Present Values ............................................................................ 96
5.5.5 Advanced Objective Functions ....................................................... 100
5.5.6 Global Objective Function Candidates ........................................... 105
5.5.7 Soft Constraints .............................................................................. 107
6 Running and Controlling CMOST 111
6.1 Introduction ................................................................................................... 111
6.2 Control Centre ............................................................................................... 111
6.3 Engine Settings .............................................................................................. 114
6.3.1 Introduction ..................................................................................... 114
6.3.2 General Settings .............................................................................. 116
6.3.3 Engine-Specific Settings ................................................................. 117
6.4 Simulation Settings........................................................................................ 126
6.4.1 Schedulers ....................................................................................... 127
6.4.2 Simulator Settings ........................................................................... 129
6.4.3 Job Record and File Management .................................................. 131
6.5 Experiments Table ......................................................................................... 131
6.5.1 Navigating the Experiments Table ................................................. 132
6.5.2 Creating Experiments ..................................................................... 139
6.5.3 Configuring the Experiments Table ................................................ 143
6.5.4 Checking Experiment Quality ........................................................ 146
6.5.5 Exporting the Experiment Table to Excel ....................................... 147
6.5.6 Viewing the Simulation Log ........................................................... 147
6.5.7 Reprocessing Experiments .............................................................. 147
6.6 Proxy Dashboard ........................................................................................... 147
6.6.1 Opening the Proxy Dashboard ........................................................ 147
6.6.2 Building a Proxy Model through the Proxy Dashboard .................. 149
6.6.3 Interacting with the Proxy Model ................................................... 151
6.6.4 Changing the Proxy Role ................................................................ 152



6.7 Simulation Jobs ............................................................................................. 152
7 Viewing and Analyzing Results 155
7.1 General Information ...................................................................................... 155
7.1.1 Display of Multiple Plots ............................................................... 155
7.1.2 Screen Operations .......................................................................... 156
7.1.3 Navigating the Tree View .............................................................. 156
7.2 Parameters ..................................................................................................... 157
7.2.1 Run Progress .................................................................................. 157
7.2.2 Histograms ..................................................................................... 158
7.2.3 Parameter Cross Plots ..................................................................... 159
7.3 Time Series ................................................................................................... 160
7.3.1 Observers ........................................................................................ 160
7.4 Property vs. Distance .................................................................................... 162
7.4.1 Observers ........................................................................................ 162
7.5 Objective Functions ...................................................................................... 163
7.5.1 Run Progress .................................................................................. 163
7.5.2 Histogram ....................................................................................... 164
7.5.3 Objective Function Cross Plots ...................................................... 165
7.5.4 OPAAT Analysis ............................................................................ 165
7.5.5 Proxy Analysis ............................................................................... 167
8 General and Advanced Operations 173
8.1 CMM File Editor .......................................................................................... 173
8.1.1 Introduction .................................................................................... 173
8.1.2 Working with Comments ............................................................... 175
8.1.3 Working with Include Files ............................................................ 175
8.1.4 Navigation Tools ............................................................................ 177
8.1.5 Other Functions .............................................................................. 177
8.1.6 Keyboard Shortcuts ........................................................................ 180
8.2 Handling Large Files..................................................................................... 181
8.3 Formula Editor .............................................................................................. 181
8.3.1 Parts of a Formula .......................................................................... 181
8.3.2 Constants in Formulas .................................................................... 181
8.3.3 Functions in Formulas .................................................................... 181
8.3.4 Variables in Formulas .................................................................... 182
8.3.5 Operators in Formulas .................................................................... 182
8.3.6 Formula Calculation Order ............................................................. 183
8.3.7 List of Built-in Functions in CMOST ............................................ 183
8.4 Using JScript Expressions in CMOST .......................................................... 189
8.4.1 Transferring Data from CMOST to User JScript code ................... 190
8.4.2 Accessing Simulation Job Input and Output Files ......................... 191



8.4.3 Transferring Data from JScript Code to CMOST ........................... 192
8.4.4 Starting a New Line in the Dataset ................................................. 192
9 Configuring Launcher and CMOST to Work Together 193
9.1 Introduction ................................................................................................... 193
9.2 Configuring Launcher ................................................................................... 193
9.2.1 Launcher ......................................................................................... 193
9.2.2 CMG Job Service ............................................................................ 193
9.2.3 Use Launcher Embedded Mode for Submitting Jobs ..................... 194
9.2.4 Use CMG Job Service for Submitting Jobs .................................... 195
9.2.5 Submitting Jobs to a Remote Computer ......................................... 196
10 Troubleshooting 199
10.1 Introduction ................................................................................................... 199
10.2 Failed and Abnormal Termination Jobs ........................................................ 199
10.3 Exception Reports ......................................................................................... 201
11 Theoretical Background 203
11.1 Probability Distribution Functions ................................................................ 203
11.1.1 Uniform Distribution ...................................................................... 203
11.1.2 Triangle Distribution ...................................................................... 203
11.1.3 Truncated Normal Distribution ....................................................... 203
11.1.4 Truncated Log Normal Distribution ............................................... 204
11.1.5 Deterministic Distributions ............................................................. 204
11.1.6 Custom Distribution ........................................................................ 204
11.1.7 Discrete Probability Distribution .................................................... 205
11.2 Objective Functions ....................................................................................... 205
11.2.1 History Match Error ........................................................................ 205
11.2.2 Net Present Value ........................................................................... 207
11.3 Sampling Methods ......................................................................................... 209
11.3.1 One-Parameter-at-a-Time Sampling ............................................... 211
11.3.2 Latin Hypercube Design ................................................................. 212
11.3.3 Classical Experimental Design ....................................................... 216
11.3.4 Parameter Correlation ..................................................................... 217
11.4 Proxy Modeling ............................................................................................. 218
11.4.1 Response Surface Methodology ..................................................... 218
11.4.2 Types of Response Surface Models ................................................ 218
11.4.3 Normalized Parameters (Variables) ................................................ 219
11.4.4 Response Surface Model Verification Plot ..................................... 220
11.4.5 Summary of Fit Table ..................................................................... 220
11.4.6 Analysis of Variance Table ............................................................. 222
11.4.7 Effect Screening Using Normalized Parameters ............................. 223



11.4.8 Linear Model Effect Estimates ....................................................... 224
11.4.9 Quadratic Model Effect Estimates ................................................. 225
11.4.10 Reduced Model Effect Estimates ................................................... 228
11.5 Optimizers ..................................................................................................... 229
11.5.1 CMG DECE ................................................................................... 229
11.5.2 Latin Hypercube plus Proxy Optimization ..................................... 230
11.5.3 Particle Swarm Optimization ......................................................... 232
11.5.4 Differential Evolution .................................................................... 233
11.5.5 Random Brute Force Search........................................................... 233
12 Glossary 235
13 Index 245




1 What's New in CMOST
1.1 What's New in CMOST 2013.12
The differences between CMOST 2013.12 and 2013.11 are outlined below:
1.1.1 Differential Evolution
The Differential Evolution (DE) optimization algorithm has been introduced as a new engine
for use in minimizing/maximizing objective functions during history matching and
optimization tasks.
DE is a powerful global optimization algorithm that was introduced by Storn and Price
(1995). It has three control parameters: Scaling Factor (F), Crossover Rate (Cr), and
Population Size (NP). See Differential Evolution (DE) for further detail.
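For readers who want a concrete picture of how the three control parameters interact, the following
minimal sketch shows one generation of the classic DE/rand/1/bin scheme described by Storn and
Price. It is an illustration only, not CMOST code; in CMOST, F, Cr, and NP are set through the
Engine Settings page.

    import random

    def de_generation(population, objective, F=0.8, Cr=0.9):
        """One generation of DE/rand/1/bin over a population of NP vectors (NP >= 4)."""
        NP, dim = len(population), len(population[0])
        next_population = []
        for i, target in enumerate(population):
            # Mutation: perturb one random vector by the scaled difference of two
            # others; the Scaling Factor F controls the size of the perturbation.
            r1, r2, r3 = random.sample([j for j in range(NP) if j != i], 3)
            mutant = [population[r1][k] + F * (population[r2][k] - population[r3][k])
                      for k in range(dim)]
            # Crossover: take each component from the mutant with probability Cr
            # (the Crossover Rate); j_rand guarantees at least one mutant component.
            j_rand = random.randrange(dim)
            trial = [mutant[k] if (random.random() < Cr or k == j_rand) else target[k]
                     for k in range(dim)]
            # Selection: keep whichever of target/trial gives the lower objective value.
            next_population.append(trial if objective(trial) <= objective(target) else target)
        return next_population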
1.1.2 Copy Parameter Data to Other Parameters
This feature allows users to copy data from a parameter with a specific source type to another
parameter with the same source type, as shown below:
Copy Data from      Copy Data To        Copied Data
Continuous Real     Continuous Real     Data Range Settings, Discrete Sampling and Prior Distribution Settings
Discrete Real       Discrete Real       Real Value and Prior Probability
Discrete Integer    Discrete Integer    Integer Value and Prior Probability
Discrete Text       Discrete Text       Text Value, Numerical Value and Prior Probability
Formula             Formula             JScript Code
Refer to Copying Parameter Data for further information.
1.1.3 Resolve Reuse Pending for Multiple Experiments
In the previous version, if new parameters were added, you needed to resolve reuse pending
experiments by providing parameter values for each selected experiment. In this release, you can
select multiple experiments to be resolved and then provide parameter values only once. Refer to
Status | Reuse Pending in Experiments Table Columns for further information.



1.1.4 Input Section Head Nodes
Descriptive information has been added to the input section head nodes: Fundamental Data,
Parameterization, and Objective Functions. Users can find explanations about subnodes on
these pages. Refer to CMOST User Interface for further information.
1.1.5 Editing Master Dataset Using Builder
The Builder button in the Parameters page can be used to open the master
dataset (.cmm) in Builder for parameterizing the dataset.
1.2 What's New in CMOST 2013.11
The differences between CMOST 2013.11 and earlier versions are highlighted in the
following sections:
1.2.1 Basic Concepts
A CMOST 2013.11 project consists of a number of studies, which can, for example, be a mix of
sensitivity analyses, history matches, optimizations, uncertainty assessments, and user-defined
studies, each with its own engine settings. A study contains all of the input information that
CMOST needs to run a particular kind of task. Information can be copied between studies.
Study types can easily be switched. The new study type will use as much information from the
previous study as possible. Studies consist of experiments, and each experiment is based on a
distinct set of input parameters and objective functions. Experiment details are stored and
tracked in the study's Experiments table, described in Experiments Table.

[Diagram: a Project contains Study1, Study2, ..., Studym; each study contains its own experiments
(Experiment1.1, Experiment1.2, ..., Experiment1.n; Experiment2.1, ...; Experimentm.1, ...).]

NOTE: Users have flexibility in the naming of projects and studies.



1.2.1.1 File System and Folder Structure
At the highest level, a CMOST project folder is organized as shown in the following
example:

[Diagram: project name SAGD_2D_UA, project file SAGD_2D_UA.cmp, project folder
SAGD_2D_UA.cmpd. Best practice: all files related to the project should be stored in the
project folder.]

The files in the project folder are as shown in the following example:

[Diagram: study name BoxBen, study file BoxBen.cms, study folder BoxBen.cmsd, study file
auto-backup BoxBen.bak. The new CMOST master dataset (CMM), base dataset, and base SR2
files are stored in the project folder. Warning: do not modify or delete files in the study
folder unless you understand the ramifications.]

If there is an error during the run, CMOST will try to save the study file to a .bak file. The
.bak file is the last valid file and it has the same format as a study file.



The files in a study folder are as shown in the following example:

[Diagram: Vector Data Repository files (*.vdr) in the study folder. VDR files store compressed
simulation data required for objective function calculations. Warning: do not modify or delete
VDR files manually.]

VDR files contain compressed simulation data that is used to calculate objective functions. The
files are compressed to reduce disk space and runtime.
1.2.1.2 Study Data Model
CMOST is hierarchically organized into different pages, with related information grouped together
on the pages. Pages are accessed through the tree view, as shown below:

[Diagram: the study tree view. While the engine is running, input data is read-only; some data
can be changed during the run (for example, experiments can be added); once available, results
can be viewed while runs are in progress.]

1.2.2 User Interface
The CMOST 2013 user interface is significantly different from previous versions.
The main screen has a Study Manager tab through which you can create, add, load and
unload, exclude, import, and copy studies. For further information, refer to Using the Study
Manager.



In addition to the Study Manager tab, the main CMOST project screen contains study tabs,
each of which has a tree view and, based on the type of node selected, a configuration, status,
or results page. Tree view nodes are tagged to indicate errors and warnings. The main screen
can be organized to accommodate the workflow, and for presentation purposes. For further
information, refer to Getting Started.
As mentioned above, if study-setting errors or issues are identified, these are highlighted in
the associated tree node, and information about the error or warning is presented in color-
coded messages in the Validation tab at the bottom of the study tabs. For further information,
refer to Validation tab.
1.2.3 Study Type and Engine
The study types and engines available for CMOST 2013.11 are shown below:

SA: One Parameter At A Time; Response Surface Methodology
UA: Monte Carlo using Reservoir Simulator; Monte Carlo Simulation using Proxy
HM & OP: LHD Plus Proxy; DECE; Random Brute Force; PSO; DE (2013.12)
User Defined: External Engine; Manual Engine

The study type and engine are specified through the New Study dialog box or through the
Engine Settings page. The engine settings are configured through the Engine Settings page.
1.2.3.1 Features Available for All Engines
The following Experiments Management settings are available for all engines:
Number of failed jobs to exclude an experiment.
Number of perturbation experiments for each abnormal experiment. These
experiments will appear in the Experiments table as Perturbed experiment.
Using any engine (SA, HM, OP, UA and User Defined), you can use both continuous and
discrete parameters in the same study.



1.2.3.2 New Sensitivity Analysis (SA) Workflow
The SA workflow has changed, as shown below:

[Diagram: the SA workflow. Define Input, then Select Engine (One Parameter At A Time (OPAAT)
or Response Surface Methodology (RSM)), then Results and Analysis. When redefining SA inputs,
the previous settings will be used as the starting point.]

The advantages of the new SA workflow are as follows:
Flexibility in the order in which the steps are carried out.
Parameters and objective functions can be added and the SA study rerun.
Problematic job submissions are handled rationally and constructively, to obtain
reliable results.
Using the RSM engine, you can specify:
Desired accuracy, based on which the engine will create and run the necessary
experiments.
1.2.3.3 New Uncertainty Assessment (UA) Workflow
The UA workflow has changed, as shown below:

[Diagram: the UA workflow. Define Input, then Select Engine (Monte Carlo Using Reservoir
Simulator or Monte Carlo Using Proxy), then Results and Analysis. When redefining UA inputs,
the previous settings will be used as the starting point.]

The advantages of the new UA workflow are as follows:
Flexibility in the order in which the steps are carried out.
Parameters and objective functions can be added and the UA study rerun.
Problematic jobs are handled rationally and constructively to obtain reliable results.
Parameter correlations can be defined. Some parameters, based on their petrophysical
meaning, are correlated with each other, for example, permeability and porosity. This
correlation can be measured through other means, such as lab experiments. The user
can enter the desired parameter rank correlations through the Parameter Correlations
page. CMOST algorithmically adjusts the rank correlation of the Monte Carlo-
generated sets of parameters so they honour the desired rank correlation settings. For
further information, refer to Parameter Correlation.
When using the MCS-Proxy engine to perform a UA, you can specify:
Desired accuracy, based on which the engine will create and run the necessary
experiments.
When using an MCS-Simulator engine to perform a UA, note the following:
The engine performs a predefined number of Monte Carlo simulations, set by the user,
using the simulator.
Use the MCS-Simulator method if:
- You want to validate an MCS-Proxy result.
- Building a proxy is not feasible, for example, when multiple geostatistical
realizations or history-matched models are used.
1.2.3.4 Changes to HM and OP Algorithms
HM and OP support the engines shown below:

[Diagram: the HM & OP engines, namely LHD Plus Proxy, DECE, Random Brute Force, PSO, and
DE (2013.12).]

HM and OP engines support the following optimization settings:

[Diagram: the optimization settings. All HM/OP engines use the same stop criterion. The user
selects the Global Objective Function to optimize, as defined in the Global Objective Functions
page, and whether to Maximize or Minimize it.]

The following important change has been made to the DECE optimizer:
DECE now uses, in the case of continuous parameters, the Total Number of
Experiments, specified by the user, as the stop criterion. In the case of discrete
parameters, the total number of parameter combinations takes priority.



The following important changes have been made to the proxy optimizer:
Ability to handle continuous parameters together with discrete parameters.
In the case of continuous parameters, uses Total Number of Experiments,
specified by the user, as the stop criterion. In the case of discrete parameters, the
total number of parameter combinations takes priority.
The following important changes have been made to the PSO optimizer:
Ability to handle continuous parameters together with discrete parameters.
In the case of continuous parameters, uses Total Number of Experiments,
specified by the user, as the stop criterion. In the case of discrete parameters, the
total number of parameter combinations takes priority.
PSO can make use of the results of all previous experiments, which helps PSO
converge to the optimal solution more quickly.
The following options are available with the DE optimizer (2013.12):
Ability to handle continuous parameters together with discrete parameters.
In the case of continuous parameters, DE uses Total Number of Experiments,
specified by the user, as the stop criterion. In the case of discrete parameters, the
total number of parameter combinations takes priority.
DE can make use of the results of all previous experiments, which helps DE
converge to the optimal solution more quickly.
1.2.3.5 User-Defined Study Type
CMOST 2013 supports user-defined study types, as shown below:

[Diagram: the User Defined study type, with two engines. Manual Engine: no automatic creation
of experiments; all experiments are created explicitly by the user through classical experimental
design, Latin hypercube design, or manual creation. External Engine: allows use of the user's own
optimization algorithm.]

You may use the manual engine:
To use classical experimental design for SA and UA.
To have precise control of the number of Latin hypercube experiments.
To run additional experiments after a SA/UA/HM/OP run is complete.
Use the external engine to implement your own optimization algorithms. Refer to External
Engine for further information.



1.2.4 Creating and Editing Input Data
1.2.4.1 Field Data Management
Field history file (FHF) and well log data need to be imported before they can be used by
CMOST.
Once imported, the data is stored internally and is used automatically in defining the HM Error
objective function. You will not need to worry about which file contains which type of data.
If further changes are made to the original FHF or well log files, you will need to click the
Reload button in the General Properties tab to merge the changes. This allows weights to be
preserved during the reload. If you want to reset the weights, you will need to clear all imported
data and then reload.
1.2.4.2 Change in Use of Special Property
A property name is now required when the origin type is SPECIALS. This change is needed
to support synthetic (SPECIALS) properties.
1.2.4.3 Parameter Definition
Continuous Parameters
In the case of continuous parameters, you can define:
Parameter lower and upper limits, which set the sampling range used by study
engines to create experiments.
Number of discrete levels, which are used by some engines to generate initial
screening experiments.
Parameter prior distribution, used only by Monte Carlo simulation (either proxy or
simulator).
Auto synchronization between prior distribution and data range settings. If set to
True, changes in prior distribution settings will automatically be reflected in the
data range settings, and vice versa. If set to False, then changes in one will not be
reflected in the other.
Discrete Parameters
In the case of discrete parameters, you can:
Define whether the discrete parameters are real, integer, or text.
Insert the desired number of discrete values in the parameter values table.
Enter prior probabilities for each parameter value (required only for UA).
For each discrete text value, enter a corresponding numerical value. This is needed
because all of the algorithms used by the CMOST optimizers work only with
numerical values.
For further information, refer to Parameters.



1.2.4.4 Characteristic Date Times
The use of characteristic date times makes defining objective functions easier and less prone
to error. There are three types of characteristic date times:
Built-in fixed date times, which are automatically input from the base dataset, for
example, base case start and stop.
Fixed date times, which are dates defined by the user.
Dynamic date times, which are dates based on the value of the data in an original
or user-defined time series; for example, the date on which the cumulative oil
produced by a certain well, or a group of wells, exceeds a certain value.
Refer to Characteristic Date Times for further information.
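As a simple way to picture a dynamic date time, the sketch below scans a hypothetical
cumulative-oil series and returns the first date on which the cumulative value exceeds a
threshold. In CMOST this is configured on the Characteristic Date Times page rather than
written as code; the dates and values here are placeholders.

    # Hypothetical cumulative-oil time series: (date, cumulative oil produced).
    series = [("2015-01-01", 0.0), ("2016-01-01", 4.0e4), ("2017-01-01", 1.1e5)]
    threshold = 1.0e5

    # The dynamic date time is the first date at which the series crosses the threshold.
    dynamic_date = next(date for date, cum in series if cum >= threshold)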
1.2.4.5 User-Defined Time Series
You can define a time series, calculated from available SR2 time series, and use it to calculate
the value of an objective function. Once you have defined the time series, you can compare it
graphically with field data (if available). Refer to User-Defined Time Series for further
information.
1.2.5 Managing Experiments
You can manage experiments through Control Centre | Experiments Table. Experiment
settings and status are displayed in the Experiments Table. For further information, refer
to Experiments Table.
1.2.5.1 Experiment Status and Result Status
Once the engine is started, the status of the experiments will be changed, as described
in Experiment Status.
1.2.5.2 Experiment Filter
If you have a large number of experiments, you can use the Experiment Filter to view or
export experiments of interest. Filters are used to filter the Experiments Table view. They
do not affect the data contained in the table; i.e., experiments that are filtered and hidden from
view still exist in the table. Refer to Experiment Filter for further information.
1.2.5.3 Base Case Experiment
By default, the base case experiment, defined by ID=0, is listed first in the table. It uses
default parameter values, but this can be changed at any time through the context menu by
selecting the base case experiment and then selecting Edit to open and edit the Experiment
Parameter Values dialog box.
1.2.6 Reusing and Restarting
After you finish running an engine, you can go back and change the input data. Experiments
inside the study will automatically be reused.



If new parameters are added, you will need to resolve reuse pending experiments as
outlined in Resolve Reuse Pending.
You can reuse data from other studies, as long as they are in the same project. Refer to To
Import Data from a Study for further information.
After you finish the changes, click the Start button in the Control Centre page to restart
the engine.
1.2.7 Proxy Dashboard
A Proxy Dashboard is provided so that users can more easily assess the fit of generated proxy
models with simulation results. Through the Proxy Dashboard, you can:
Use preliminary proxy models to begin predicting reservoir behavior.
Investigate the effect of varying input parameter values (by entering a what-if
scenario), thereby improving your understanding of the reservoir and how proxy
modeling works.
Define and add training or verification experiments to the study.
Switch between and compare different proxy models.
1.2.8 Viewing and Analyzing Results
Results objects are created on the fly, using results data stored in the
Experiments Table.
Results objects will vary with study type; for example, HM and OP will automatically have
sensitivity and proxy results if enough experiments have been completed.
1.2.9 Converting Old CMOST Files to New CMOST Files
You can convert CMOST task (CMT) and results (CMR) files to a CMOST .cmp project file.
To convert a CMT file, note the following:
Base case and corresponding SR2 files must exist; if not, you will need to open the
old CMT file then select a base case for it. In old CMOST, base case is an optional
field and may not have been entered; however, new CMOST needs this file, so you
will have to provide it.
Objective functions that use raw simulation time series objective term(s) cannot be
converted.
Formulas need to be checked manually to make sure they are correct after the
conversion.
To convert a CMR file, note the following:
CMOST will first look for a CMT file with the same name within the same folder
as the CMR file and convert the CMT file first.



If a CMT file is not found, CMOST will create a CMT file based on the CMR file
then will convert it to a project file.
CMOST will then import the results from the CMR file into the newly created
CMOST study file. Note that imported experiments can be used to build up proxies
in the proxy dashboard only if the corresponding time series data was stored in the
CMR file; i.e., if observers are defined.
The procedure for converting old CMOST task files to new CMOST project and study files is
outlined in the following example, where we will convert SAGD_2D_HM.cmt (and related
files) to the new CMOST files:
1. Your starting CMOST files will be organized as shown in the following example:

2. Open the CMOST application. The main CMOST screen is displayed.
3. In the menu bar, select File | Convert CMT/CMR File. A Windows Explorer
dialog box opens.
4. Browse to and select the CMOST task (CMT) file and then click Open. The task
and results files will be converted to the new CMOST files and folders, as shown
below for the above example:

[Figure: the converted files, showing the new CMOST project file, a copy of the new CMOST
master file, and the new CMOST project folder.]




The CMOST project folder contains copies of the base dataset files, and the
following CMOST files:

[Figure: the new CMOST study file, the new CMOST study folder (initially empty; as experiments
are run, VDR files will be saved to this folder), and the new CMOST back-up file.]

5. The converted project file will open in CMOST. You can browse the tree view to
verify that the following data and settings have been imported:
General Properties: Master dataset, base dataset, base session file, and
field history file have been copied in to the project and their details
recorded in this page. No changes necessary.
Fundamental Data | Original Time Series: In our example, there is an
error on this node because data with SPECIALS origin type now requires
the property to be named. To clear this error, select Steam-oil ratio: SOR
(Injector)/(PRODUCER) CUM in the Property column.
Parameterization | Parameters: Parameters POR, PERMH, PERMV,
HTSORW, and HTSORG have been imported from the old CMOST
files. No changes necessary.
Objective Functions | Characteristic Date Times: BaseCaseStart and
BaseCaseStop have been generated automatically. No changes necessary.
Objective Functions | History Match Quality: HM errors and original
time series terms have been imported from the old CMOST files. No
changes necessary.
Objective Functions | Global Objective Function Candidates: Global
objective function candidate GlobalHmError has been imported from the
old CMOST files. No changes necessary.
Control Centre | Engine Settings: Settings have been imported from the
old CMOST files. No changes necessary.
Control Centre | Simulation Settings: There is a warning. Click the
active check box beside the Local scheduler name to clear this warning.
Control Centre | Experiments Table: No experiments have been
defined. In our example, a set of CMG DECE experiments will
automatically be defined when you start the CMOST engine.
6. In the Control Centre node, click the Start button to start the history match.
You can monitor the progress of the run in the Experiments Table. Refer to
Experiments Table for more information.



7. Once the run is complete, you can view the results through the Results & Analysis
node. For further information, refer to Viewing and Analyzing Results.
Important Information about the CMT Convertor
Some elements of previous CMOST task or result files cannot be converted:
1. If an objective function contains a time series type Raw Simulation Result
Objective Term, consider using a user-defined time series objective function.

2. If multiple types (categories) of local objective functions are used to calculate the
global objective function, the weight factor of each local objective function may
not be converted automatically. We recommend you check the definition of the
converted global objective function.

3. If a local objective function uses objective terms with a Conversion Factor, you
will need to revise the formula of the converted local objective function because
conversion factors for the terms are not automatically converted.





2 Welcome
2.1 Introduction
This user guide provides information on how to use CMOST. A basic knowledge of other
CMG simulation and visualization products is recommended.
2.2 What You Need to Use CMOST
2.2.1 General
To make best use of CMOST, you should develop a good understanding of the CMG
reservoir model that you are working with; in particular, you should understand the
parameters that need to be adjusted, and the impact of making these adjustments. You should
also have a clearly defined project goal.
2.2.2 Configuring Launcher and CMOST
CMOST relies on either CMG Launcher or the CMG Job Service to run jobs. Before CMOST
can be used, you will need to configure Launcher and the CMG Job Service to work properly.
Refer to Configuring Launcher and CMOST to Work Together for further information.
2.2.3 Computers and Licenses
CMOST can make full use of available computers and licenses. Once jobs have been created
by CMOST, it will automatically submit the simulation jobs and check their status periodically.
Once simulations have completed, CMOST will automatically process the results required for
the CMOST study.
2.3 About this Manual
This manual is designed to quickly get existing CMOST users up to speed, and to help new
users get started. The route you take through the manual will depend on your familiarity with
CMOST. The key features and elements of the manual are summarized below:
Important Information for Existing Users provides information to help existing
CMOST users quickly get up to speed with the new version. New users should
skip this chapter.



CMOST Overview is intended to provide new users with a high-level overview of
CMOST.
Getting Started provides an introduction to the new version of the CMOST
application; in particular, it shows users how to:
- Open the CMOST application and navigate the user interface.
- Open an existing project and create a new one.
- Use the CMOST Study Manager to create, view, rename, add,
load/unload, exclude, import, and copy studies.
- Use common user-interface operations.
- Close the application.
As well, the chapter provides an overview of CMOST task processes and links to
more detailed information.
The organization of the central chapters parallels the organization of the CMOST
user interface tree view which, in turn, parallels the order in which users will
configure, run and analyze CMOST studies:
- Creating and Editing Input Data
- Running and Controlling CMOST
- Viewing and Analyzing Results
General operations, applicable across CMOST processes, are described in General
Operations.
Information about getting CMOST and Launcher to work together at your facility
is described in Configuring Launcher and CMOST to Work Together.
Troubleshooting provides directions for resolving common CMOST problems that
you may experience. If you are having problems, try to resolve them or get as much
information as you can before contacting CMG Support.
Theoretical information is provided in Theoretical Background. Where
appropriate, links to this information are provided from other chapters.
Hyperlinks are provided throughout the manual to help you navigate the content.
A table of contents and index are provided to help you quickly find and link to
information.
A glossary of terms is provided to help new users understand CMOST terminology.
To report errors in the manual or to provide suggestions for improvement, please contact
CMG Support, as outlined in Getting Help.
2.4 Getting Help
If you need help with the CMOST application or with this manual, please contact CMG
Support using the contact information provided on our Web site at www.cmgl.ca.



3 CMOST Overview
3.1 Introduction
This chapter provides introductory information about the following:
What CMOST is, and what it is used for
General CMOST process and work flow
CMOST inputs
CMOST concepts and components
CMOST user interface
Recommended practices for using CMOST effectively
The level of material presented in this chapter assumes the reader is familiar with CMG
simulators and datasets but may not be familiar with CMOST.
3.2 What is CMOST?
CMOST is a CMG application that works in conjunction with CMG reservoir simulators to
perform sensitivity analyses, history matches, optimizations, and uncertainty assessments.
3.2.1 Sensitivity Analysis (SA)
Sensitivity analyses are used to determine the variation of simulation results under different values
of the input parameters (reservoir properties, for example), and to identify which parameters have
the greatest effect on user-defined objective functions, such as history match error. Sensitivity
analyses use a limited number of simulation runs to determine the parameters that should be
varied in subsequent studies and over what range. This information is then used to design history
matching or optimization studies, for example, which require a greater number of simulation runs.



3.2.2 History Matching (HM)
History matching provides an effective way to match simulation results with production
history data. Using CMOST, users create and run experiments using a version of the base
dataset which has embedded instructions that tell CMOST where to substitute parameter
values. As simulation jobs complete, CMOST analyzes the results to determine how well they
match production history. An optimizer then determines parameter values for new simulation
jobs. As more simulation jobs complete, the results converge to one optimal solution which
should provide a satisfactory history match if user-specified parameters and parameter ranges
have been appropriately defined.
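As a rough illustration of what "how well they match" means, the sketch below computes a simple
weighted mismatch between simulated and observed series and combines per-well values into a
single number for an optimizer to minimize. The well names, data, and weights are hypothetical,
and this is not CMOST's exact history match error definition, which is given under History Match
Error in the Theoretical Background chapter.

    def mismatch(simulated, observed):
        """Root-mean-square difference between two equal-length series."""
        n = len(observed)
        return (sum((s - o) ** 2 for s, o in zip(simulated, observed)) / n) ** 0.5

    # Hypothetical per-well oil-rate histories: (simulated, observed, weight).
    wells = {
        "PRODUCER1": ([105.0, 98.0, 90.0], [100.0, 95.0, 93.0], 2.0),
        "PRODUCER2": ([48.0, 51.0, 55.0], [50.0, 52.0, 54.0], 1.0),
    }

    # Weighted average of the per-well mismatches: the single value to minimize.
    global_error = (sum(w * mismatch(sim, obs) for sim, obs, w in wells.values())
                    / sum(w for _, _, w in wells.values()))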
3.2.3 Optimization (OP)
Optimization studies are used to produce optimal field development plans and operating
conditions that will produce either a maximum or minimum value for objective functions.
These objective functions can be physical quantities such as cumulative oil produced,
recovery factor, and cumulative steam-oil ratio. CMOST also allows monetary values to be
assigned to these physical quantities, so optimizations can be carried out using net present
value calculations.
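As a reminder of how a monetary objective of this kind is typically built, the sketch below
discounts a series of hypothetical yearly cash flows to a single net present value. It is a
generic illustration with made-up numbers; CMOST's NPV objective function is described under
Net Present Value in the Theoretical Background chapter.

    # Generic net present value of a series of yearly cash flows at discount rate r.
    def npv(cash_flows, r=0.10):
        return sum(cf / (1.0 + r) ** t for t, cf in enumerate(cash_flows, start=1))

    # Example: revenue from produced oil minus operating cost, per year (hypothetical).
    yearly_cash_flow = [-2.0e6, 1.5e6, 1.8e6, 1.6e6]
    print(npv(yearly_cash_flow))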
3.2.4 Uncertainty Assessment (UA)
Uncertainty assessments are used to determine the variation in simulation results due to residual
uncertainty, the uncertainty that remains after history matching and optimization, usually about
the value of some reservoir variables. Uncertainty assessment involves the following:
1. Use available simulation results to develop a response surface (RS) for each
objective function of interest (such as NPV, CSOR, and cumulative oil production)
with respect to each of the uncertain variables (e.g. porosity, permeability,
endpoint saturations, and oil viscosity).
2. Using the response surface, conduct a Monte Carlo simulation to select large numbers
(tens of thousands) of variable value combinations and determine the value of the
objective functions for each combination. The results of uncertainty assessments are
probability and cumulative density functions for each objective function.
In addition to Monte Carlo simulation results, effect estimates and response surface results
are available from uncertainty assessment studies so that sensitivity information can be
obtained. Detailed response surface statistics provide valuable information about the
suitability of using the response surface model as the proxy for the reservoir model.
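The two steps above can be pictured with a short sketch: a fitted response surface stands in for
the simulator, and Monte Carlo sampling of the uncertain variables through that proxy yields the
cumulative density function of the objective function. The proxy coefficients, variable ranges,
and distributions below are placeholders chosen for illustration, not values produced by CMOST.

    import random

    def proxy_npv(porosity, permeability):
        # Step 1 stand-in: a simple quadratic response surface for NPV
        # (illustrative coefficients only).
        return 1.0e6 * (5.0 * porosity + 0.002 * permeability - 8.0 * porosity ** 2)

    # Step 2: tens of thousands of cheap proxy evaluations, with the uncertain
    # variables drawn from assumed prior distributions.
    samples = []
    for _ in range(20000):
        por = random.uniform(0.18, 0.32)
        perm = random.lognormvariate(7.0, 0.4)
        samples.append(proxy_npv(por, perm))

    # The sorted samples define the cumulative density function of NPV;
    # P10/P50/P90 percentiles summarize it.
    samples.sort()
    p10, p50, p90 = (samples[int(0.10 * len(samples))],
                     samples[int(0.50 * len(samples))],
                     samples[int(0.90 * len(samples))])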



3.3 Generalized CMOST Study Process
The generalized CMOST study process is illustrated below:

[Diagram: the study process loop. Define and Select Parameter Values, Substitute Parameter
Values into Simulation Dataset, Run Simulation, then Analyze Results, repeated as needed. The
process is started directly or entered from another study, and the results are used as the basis
for decisions or as input for other studies.]

The above diagram shows how CMOST supports the definition of parameters and parameter
ranges which are, in turn, used to define study experiments. The experiments are run through
the simulator. The simulation results, which will vary depending on the study type, are
analyzed and used, as necessary, to:
Define further experiments
Form the basis for subsequent studies
Formulate production plans



3.4 CMOST Components and Concepts
3.4.1 Project Components
The hierarchy of CMOST project components is shown below:

[Diagram: a Project contains Study1, Study2, ..., Studym; each study contains its own experiments
(Experiment1.1, Experiment1.2, ..., Experiment1.n; Experiment2.1, ...; Experimentm.1, ...).]

A CMOST project consists of a set of studies, which can, in turn, be a mix of sensitivity
analyses, history matches, optimizations, uncertainty assessments, and user-defined studies.
Studies are defined by the configuration of:
Type and source of the input data required for the study.
CMOST engine used to process the input data.
Type of output data produced by the study.
Configurations can be copied from one study to another. Study type can be changed, in which
case the new study type will re-use as much of the information from the original study as
possible.
Studies consist of experiments, each of which is defined by a distinct set of input parameters
and output objective functions.
3.4.2 Base Files
Each CMOST study starts with a previously completed (base) simulation dataset. CMOST
needs access to files from this base simulation dataset. It may also require or make use of
other files, such as production history files in the case of history matching studies. The input
files used by CMOST are described below.



3.4.2.1 Base Dataset
A base dataset must be available before you configure and then run CMOST experiments.
The base dataset can be any valid dataset for any CMG simulator. The base dataset is used to
create the base SR2 (simulation results) files. The base dataset is also used to create
the CMOST Master Dataset.
3.4.2.2 Base SR2 Files
The base IRF (.irf, Indexed Results File) provides CMOST with basic information about the
dataset, such as the simulator type used, well lists, and simulator start and end dates. The base
MRF (.mrf, Main Results File) is the binary file that contains the simulation data. The base
SR2 files are required components.
From the base SR2 files, CMOST can obtain and display observers (simulation outputs) and
calculate base case objective functions (expressions or quantities that the user wants to
minimize or maximize). By CMOST convention, the base case is defined as the experiment
with ID=0.
3.4.2.3 Base Session File
A base session file can be used by CMOST but is not required. The base session file is
created by CMG Results Graph using the base SR2 files. CMOST uses the base session file
to quickly display plots in Results Graph using results of simulation runs generated by
CMOST experiments.
3.4.2.4 Production History Files
Production history files, such as field history or well log files, are required for history
matching. The files that are needed will depend on the type of history match being performed.
3.4.3 File System
At the highest level, a CMOST project folder is organized as shown in the following
example, for project SAGD_2D_UA:

[Diagram: project name SAGD_2D_UA, project file SAGD_2D_UA.cmp, project folder
SAGD_2D_UA.cmpd. Best practice: all files related to the project should be stored in the
project folder.]




The files in the project folder are as shown in the following example:

[Diagram: study name BoxBen, study file BoxBen.cms, study folder BoxBen.cmsd, study file
auto-backup BoxBen.bak. The new CMOST master dataset (CMM), base dataset, and base SR2
files are stored in the project folder. Warning: do not modify or delete files in the study
folder unless you understand the ramifications.]

NOTE: If there is an error during a run, CMOST will try to save the study file to a .bak file.
The .bak file is the last valid file and it has the same format as a study file.
An example of the files that are stored in the study folder is shown below:

[Diagram: Vector Data Repository files (*.vdr) in the study folder. VDR files store compressed
simulation data required for objective function calculations. Warning: do not modify or delete
VDR files manually.]

NOTE: VDR files contain compressed simulation data that is used to calculate objective
functions. The files are compressed to reduce disk space and runtime.
3.4.4 Study Types and Engines
The study types and engines available with CMOST are shown below:

SA (Sensitivity Analysis):
- One-Parameter-At-A-Time (OPAAT)
- Response Surface Methodology
UA (Uncertainty Assessment):
- Monte Carlo using Reservoir Simulator
- Monte Carlo Simulation using Proxy
HM & OP (History Matching and Optimization):
- LHD Plus Proxy
- DECE
- Random Brute Force
- PSO
- DE
User Defined:
- External Engine
- Manual Engine

Information about the above engines can be found in Theoretical Background and directions
for configuring them can be found in Engine Settings.
3.4.5 Study Workflow
The study workflow varies depending on study type and the selected engine; for example, the
simplified sensitivity analysis workflow:

Define Input → Select Engine (One-Parameter-At-A-Time (OPAAT) or Response Surface
Methodology (RSM)) → Run Multiple Simulations → Results and Analysis. When redefining
SA inputs, the previous settings will be used as the starting point.
and the simplified uncertainty assessment workflow:

Define Input → Select Engine (Monte Carlo Using Reservoir Simulator or Monte Carlo Using
Proxy) → Run Multiple Simulations → Results and Analysis. When redefining UA inputs, the
previous settings will be used as the starting point.

A more detailed general workflow can be found in CMOST User Interface.
3.5 CMOST Master Dataset (.cmm)
The master dataset, which is a required component, is a version of the base dataset that has
been modified with embedded instructions that tell CMOST where to substitute different
input parameter values at runtime.
You can create the master dataset in the following ways:
CMOST CMM File Editor
Builder (refer to chapter Setting Up Datasets for CMOST in the Builder Users
Guide for more information)
Text editor, such as Notepad.
This section provides an overview and examples of inserting CMOST parameters into the
master dataset using the CMM File Editor. Refer to CMM File Editor for more information
and detailed instructions.
To illustrate the embedding of formulas in the master dataset, consider the following example,
where we have inserted formulas for parameters Porosity, PERMH_L1, PERMH_L2 and
KvKhRatio into the master dataset:

(Screenshot: master dataset with the CMOST parameters added.)

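A minimal sketch of what these insertions might look like, assuming a two-layer model with IMEX/STARS-style array keywords and purely illustrative original values (the actual keywords and defaults depend on the base dataset):

** porosity, horizontal permeability per layer, and Kv/Kh ratio driven by CMOST parameters
POR CON <cmost>this[0.2]=Porosity</cmost>
PERMI KVAR <cmost>this[2000]=PERMH_L1</cmost> <cmost>this[500]=PERMH_L2</cmost>
PERMJ EQUALSI
PERMK EQUALSI * <cmost>this[0.1]=KvKhRatio</cmost>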
Anywhere CMOST is required to substitute a value or text into the master dataset, a CMOST
formula should be entered. CMOST formulas can appear anywhere in the master dataset;
however, each CMOST formula must be completed in a single line.
NOTE: The first date/time keyword in a master dataset must be *DATE. Errors will be
encountered if *TIME keywords are used.
The following diagram shows how CMOST produces and submits experiment datasets to
simulators then receives and processes the results (for a single processor):

Master Dataset (CMM) → CMOST substitutes experiment parameter values from the Study
Experiments Table into the CMM to produce the experiment dataset → CMOST submits the
experiment dataset to the scheduler/simulator → CMOST receives and processes the results
of the experiment → next experiment.
3.5.1.1 Master Dataset Syntax
The master dataset syntax is shown in the following examples:
Example 1:
In the original dataset, POR is defined as follows:
POR CON 0.20
In the master dataset, we want to vary the value of porosity across a number of experiments,
so a formula is inserted into this line in the dataset, as follows:

(Figure: the modified line, annotated to identify the simulator keywords, the CMOST start tag, the original (default) value in the dataset, the variable name, and the CMOST end tag.)

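A sketch of the resulting line, assuming the variable is named Porosity (the variable name used in the original figure is not recoverable from the text); POR CON are the simulator keywords, <cmost> is the CMOST start tag, this[0.20] carries the original (default) value from the dataset, Porosity is the variable name, and </cmost> is the CMOST end tag:

POR CON <cmost>this[0.20]=Porosity</cmost>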
NOTE: Spaces are not allowed in the CMOST portion, and variable names are case sensitive.
Example 2:
Formulas can also be used with one or more variables, as follows:

(Figure: a line containing a CMOST formula, annotated to identify the simulator keywords, the CMOST start tag, the formula, and the CMOST end tag.)

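A sketch of such a line, assuming the original porosity value of 0.2 is referenced with this and scaled by a parameter named PorosityMultiplier (the exact line in the original figure is not recoverable from the text):

POR CON <cmost>this[0.2]=this*PorosityMultiplier</cmost>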
In the above example, parameter PorosityMultiplier will be multiplied by 0.2 before the
simulation is run.
NOTE: Default values, equal to the original value entered in the base dataset, are optional.
Example 3:
Values in regions of the reservoir can be modified using the MOD simulation keyword, as
shown in the following example:

(Figure: a MOD keyword example, annotated to identify the block ranges I:I J:J K:K.)
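A minimal sketch of this usage, assuming an illustrative permeability array whose values in block range 1:10 1:5 2:2 are scaled by a hypothetical parameter named PermMultL2 (the keyword, block ranges, and values are not taken from the original figure):

PERMI CON 2000
** block ranges I:I J:J K:K, followed by the modification
MOD
1:10 1:5 2:2 * <cmost>this[1.0]=PermMultL2</cmost>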
3.5.1.2 CMOST Formulas
All CMOST formulas that are entered into the master dataset must be nested within a start tag
<cmost> and an end tag </cmost>. It is optional to use this[OriginalValue]= to start a
formula. The original value indicates the value that was used by the base dataset. Also, if the
syntax this[OriginalValue]= is used, the variable this can be used in the formula to
reference the original value that was entered in the dataset. If the original value is a string
(text), it should be enclosed in double quotation marks. For example, CMOST can correctly
handle the following lines:
INCLUDE '<cmost>this["por50.inc"]=PORINC</cmost>'
PERMI CON <cmost>this[5000]=PERMH</cmost>
If the CMOST original value will not be used in a formula, the above two lines could be
simplified by omitting this[]= as follows:
INCLUDE '<cmost>PORINC</cmost>'
PERMI CON <cmost>PERMH</cmost>
CMOST cannot handle the following line because the original text is not enclosed in double
quotation marks.
INCLUDE '<cmost>this[por50.inc]=PORINC</cmost>'
CMOST formula syntax and a description of available built-in functions are provided
in Formula Editor.
3.5.1.3 CMOST Formula Examples
The following examples are included to illustrate the insertion of formulas into the master
dataset for different purposes.
Example 1: To substitute the value of parameter varA directly into the dataset:
<cmost>this[1.0]=varA</cmost>
<cmost>varA</cmost>
Example 2: To substitute the result of adding parameter varB to the original value:
<cmost>this[1]=this+varB</cmost>
Example 3: To substitute the result of varA-varB+5:
<cmost>this[26]=varA-varB+5</cmost>
<cmost>varA-varB+5</cmost>
Example 4: To substitute the result of 179.79*(varA/varB)^0.248:
<cmost>this[203.9]=179.79*POWER(varA/varB,0.248)</cmost>
Example 5: If the result of varA multiplied by varB is greater than 1200, the product will be
substituted; otherwise, 1200 will be substituted:
<cmost>this[1800.0]=MAX(varA*varB,1200)</cmost>
Example 6: If varA is greater than or equal to 600, OPEN will be substituted; otherwise,
CLOSED will be substituted:
<cmost>this["OPEN"]=IF(varA>=600,"OPEN","CLOSED")</cmost>
Example 7: If varB matches any of the values in the first set of values, the corresponding
value in the second set of values will be substituted; for example, if varB was set to 7.3, the
value 188.75 would be substituted:
<cmost>this[402.57]=LOOKUP(varB,{3.0,5.0,7.3},{524.62,402.57,188.75})</cmost>
Example 8: If varA matches any of the entries in the first set, the corresponding entry in the
second set will be substituted; for example, if varA was set to "porMid.inc", permMid.inc
would be substituted into the dataset:
<cmost>this["permMid.inc"]=LOOKUP(varA,{"porLow.inc","porMid.inc","porHigh.inc"},{"permLow.inc","permMid.inc","permHigh.inc"})</cmost>
Example 9: A range of acceptable values is set for varA. If varA is less than 0, the value 0 will be
substituted. If varA is greater than 1, the value 1 will be substituted. If varA is between 0 and 1,
the value of varA will be substituted:
<cmost>this[0.68]=MAX(MIN(varA,1),0)</cmost>
3.5.1.4 Inserting Include Files into the Master Dataset
If large arrays of data need to be substituted, it may be easier to use include files; for
example, porosity may have a value for each grid block in a reservoir. It would be unrealistic
to create a parameter for each grid block's porosity. Multiple include files can be used, where
each one can contain a different geostatistical realization of porosity.
Include files can be used anywhere in a dataset. The syntax that is used to enter an include
file into a master dataset is:
*INCLUDE '<cmost>ArrayIncFile</cmost>'
The parameter ArrayIncFile would then be defined as a Text parameter through
the Parameters page, with the include files listed as candidate values.
The include files contain the block of text that is to be substituted into the dataset. For
example, porosity could be used in the following include file for a reservoir model with
dimensions ni=10, nj=3, nk=2:
*POR *ALL
.08  .08  .081 .09  .12  .15  .09  .097 .087 .011
.15  .134 .08  .087 .157 .145 .12  .135 .18  .092
.074 .12  .12  .154 .167 .187 .121 .122 .08  .08
.095 .13  .12  .157 .17  .18  .184 .122 .084 .09
.11  .12  .134 .157 .157 .18  .18  .098 .09  .09
.08  .09  .144 .143 .123 .16  .165 .102 .10  .10
Other include files could be created with similar syntax but with different values entered.
3.5.1.5 Handling Files Referenced by Master Dataset
CMOST uses the following logic to handle a path and its associated files for INCLUDE,
BINARY_DATA, and FILENAMES INDEX-IN keywords:
If the path is an absolute path:
CMOST will not copy the associated files to the corresponding study folder. CMOST
will not change the path; for example, CMOST will keep the following line unchanged:
FILENAMES INDEX-IN '\\computer8\d\Test\punq-history.irf'
Since it is an absolute path, CMOST will not copy its referenced files (.irf and .mrf).
If the path contains file name only (no directory information):
CMOST will copy the associated files to the corresponding study folder. CMOST will
not change the path; for example, CMOST will keep the following line unchanged:
INCLUDE 'PorLow.inc'
CMOST will, however, copy file PorLow.inc from the original directory to the study
folder.
If the path is a relative path:
CMOST will not copy the associated files to the corresponding study folder. CMOST
will re-base the relative path to the study folder when creating datasets; for example,
CMOST will re-base the following relative path:
INCLUDE 'incfiles\PorLow.inc'
to the study folder by modifying the line to:
INCLUDE '..\incfiles\PorLow.inc'
Since it is a relative path, CMOST will not copy the referenced include file.
NOTE: Even though BINARY_DATA not followed by a path name is acceptable to the
simulators, CMOST does not allow this in the master dataset because it would require making
a copy of the binary data file for each dataset that is created by CMOST. This defeats the
purpose of creating the .cmgbin file to save space.
3.6 CMOST User Interface
An example of the CMOST user interface is shown below:

The CMOST user interface has been designed to make the understanding of and access to the
complex, powerful functionality of CMOST as intuitive and simple as possible. Commonly
used operations are available from the menu bar and toolbar. For basic information about
navigating the CMOST user interface, refer to Getting Started.
The CMOST user interface is hierarchically organized through the tree view on the left in a
way that parallels the general study workflow. Configuration and results pages are accessed
through tree view nodes, as shown below:

Whiletheengineis
running,thisdataisread
only.
Somedatacanbe
changedduringtherun;
forexample,experiments
canbeadded.
Onceavailable,results
canbeviewedwhileruns
areinprogress.
Each major node in the Input section provides links to subnodes as well as information about
the purpose of the subnode, as shown in the following example:

(Screenshot: high-level information about the Characteristic Date Times page is shown; you can double-click to open the History Match Quality page.)

Through the Input node, you will:
Specify input files, including the base and master datasets, base session file, and
field data files. Refer to General Properties for more information.
Specify the fundamental data (time, distance or depth data) that is obtained or
calculated from SR2 files and used for sensitivity analysis, history matching and
optimization. Refer to Fundamental Data.
Define and specify the input parameters that will be varied by CMOST in study
experiments, including their insertion into the master dataset. Refer to Parameterization.
Define the objective functions that you want to minimize or maximize. In the case
of history matching, for example, you may want to minimize the error between
field data and simulation results. In the case of optimization, you may want to
maximize net present value. Refer to Objective Functions.
Through the Control Centre node, you will:
Define and configure study and engine types. Refer to Engine Settings.
Specify and configure simulation settings. Refer to Simulation Settings.
Specify the study experiments and, once the study engine has started, monitor their
progress and, if necessary, make adjustments. Refer to Experiments Table.
Start, stop, pause, and monitor the CMOST engine. Refer to Control Centre.
Monitor the status of the proxy model development and, if necessary, make
adjustments to the experiments. Refer to Proxy Dashboard.
 Monitor the progress of simulation jobs. Refer to Simulation Jobs.
Through the Results & Analyses node, you will:
View and interpret the results of your study. As they are produced, results can be
viewed on-the-fly. The types of results that will be displayed will vary with the
study type; for example, if you are running a sensitivity analysis using the One-
Parameter-At-A-Time engine, an OPAAT plot will be produced. Refer to Viewing
and Analyzing Results for further information.
3.7 Best Practices for Using CMOST
Configuring the I/O Control Section of the Master File
Configure the Input/Output Control section of the .cmm file properly to keep the simulation
output files (.irf, .mrf, .rst, .out) as small as possible. Large simulation output files slow down
concurrent simulation runs and often trigger intermittent file I/O problems that cause jobs to
fail. For details, refer to WRST, OUTSRF, WSRF, OUTPRN, and WPRN keywords in the
appropriate simulator manual. Usually, for jobs submitted by CMOST, it is unnecessary to
write restart records at every DATE/TIME, to output GRID quantities (one value per grid
block), or to write the .out file.
Running Multiple Concurrent Remote Jobs
If you want to run multiple (>=5) concurrent remote jobs, CMG recommends using a
Windows 2003 or 2008 File Server to store CMOST input/output files (.cms, .vdr, and
simulation input/output files). This is because, for Workstation-type Windows operating
systems (Windows XP, Windows Vista, and Windows 7), the maximum number of allowed
remote sessions from remote computers is 10. This limit includes all transports and resource
sharing protocols combined. Therefore, for more than four concurrent remote jobs, a
Windows workstation may not be adequate for use as a CMOST file server. For more
information about the inbound connections limit in Windows XP, Windows Vista, and
Windows 7, view Microsoft Knowledge Base article Q314882.
Checking Available Disk Space
Occasionally, check the available disk space in the C: drive of all Windows compute nodes. If the
C: drive of the compute node is low in available disk space, remove unwanted files in the
C:\ProgramData\CMG\CopyLocalJobs (Windows 2008, Windows Vista, and Windows 7) or
C:\Documents and Settings\All Users\Application Data\cmg\CopyLocalJobs (Windows 2003 and
Windows XP) folder. For Linux compute nodes, check the available disk space in the /tmp folder.
Unwanted simulation output files (.irf, .mrf, .rst, .out) can be removed from the /tmp folder.
Using Cumulative Rate Data for History Matching Type Objective Functions
Rates (e.g. oil rate, water rate, or gas rate) are not recommended for use in constructing
history matching error type objective functions because they are usually discontinuous step
functions. The nature of a step function means that the rate value is not well defined at the
boundary of each interval, so small disparities in timing can produce large differences in the
reported rate and lead to inaccurate calculation of the history matching error. Since cumulative quantities (cumulative
oil, cumulative water, and so on) are continuous in time, they are recommended as a
replacement for rates in history match error calculations. If the entire curve of a cumulative
quantity is matched perfectly, this guarantees the corresponding rate curve will be matched as
well. Similar recommendations can be made for other instantaneous quantities, such as
water cut, or instantaneous GOR or SOR.
Use of History Matching Error for Sensitivity Analyses
In sensitivity analysis, the history matching error is not recommended as an objective
function type; instead, direct physical quantities such as cumulative oil, pressure, or
temperature are recommended. There are two reasons for this. First, history-matching error-
type objective functions transform linear functions into non-linear functions. Second, for
direct physical quantities, understanding and applying sensitivity analysis results can often be
supported by the fundamental theory in reservoir simulation.
4 Getting Started
4.1 Introduction
This chapter provides basic information about the following:
Opening and Navigating CMOST
Creating a CMOST Project
Using the Study Manager
Common Screen Operations
Closing CMOST
4.2 Opening and Navigating CMOST
To open CMOST:
Open Launcher then double-click the CMOST icon. The CMOST splash screen appears
while the application is loading, then the CMOST main screen is displayed:

(Screenshot of the CMOST main screen, showing the title bar, menu bar, toolbar, and status bar.)

The elements of the CMOST main screen are as follows:
Menu bar:
 - Through the File menu, you can initiate the following commands:
New | Project: Create a new CMOST project. Refer to Creating a CMOST Project.
New | Study: This selection, available once you have a project open, will add a new study to the project. Refer to To Create a New Study.
Open | Project: Open an existing CMOST project. Refer to Opening a CMOST Project.
Open | Existing Study: This selection, available once you have a project open, will open an existing study.
Save | Current Study: Save the currently selected CMOST study.
Save | Save All: Save changes to all studies.
Convert CMT/CMR File: Convert CMOST task (CMT) and results (CMR) files to a CMOST CMP project file. Refer to Converting Old CMOST Files to New CMOST Files.
Exclude Existing Studies: Exclude an existing study from the project. Refer to To Exclude a Study.
Close Project: Close the current CMOST project. You will be prompted to save your changes.
Recent Files: Open one of the last (up to 5) projects that you have opened.
Exit: Close the CMOST session. You will be prompted to save your changes.
 - Through the Help menu, you can initiate the following commands:
Index: Open CMOST help information with the index showing.
Search: Open CMOST help information with the search window showing.
Contents: Open CMOST help information with the table of contents showing.
About: View information about this version of CMOST, a link to the CMG website, and an email address for CMG support.
Status bar: Displays CMOST status information, which will vary according to the
page selected.
Toolbar: The following buttons are provided in the toolbar. Some of these buttons will
not appear until you have opened or created a project, and opened or created a study:
New Project: Open a dialog box through which you can create a new project. If you have another project open, you will be prompted, as necessary, to stop the study engines and save.
Open Project: Open an existing CMOST project. If you have another project open, you will be prompted, as necessary, to stop the study engines and save.
New Study: Once a project is open, you can click this button to open a dialog box through which you can create a new study.
Save Current Study: Save the selected study settings and results, but do not close the project.
Save All: Save the settings and results for all studies, but do not close the project.
Start Engine: Start the CMOST engine for the selected study. This button will not be available if the study engine is already running or is not ready to start.
Pause Engine: Pause the CMOST engine for the selected study. This button is only available if the study engine is already running.
Stop Engine: Stop the CMOST engine for the selected study. This button is only available if the study engine is already running.
Help: Open CMOST help information to the front page, with the table of contents displayed.
4.3 Opening a CMOST Project
To open an existing CMOST project:
1. Click File | Open Project.
NOTE: If you have another project open, you will be prompted to save it first.
2. Browse to the project file and then click Open. The project will open to the Study
Manager folder.
NOTE: If you open a project that is already opened by you or someone else, you will be
restricted from making edits or saving the project file, as indicated below.

(Screenshot: lock symbols on the tabs indicate that the project is locked by another active session, and edits and saves are not permitted; a message in the status bar also indicates that edits and saves are not permitted.)

NOTE: You can also open a CMOST project file in the following ways:
1) From Explorer, drag a .cmt or .cmr file onto the CMOST desktop icon. In this case, the
file will only open if a .dat file is available.
2) From Explorer, drag a .cmp file onto the CMOST desktop icon.
3) Right-click a .cmp file then select Open with | CMG.CmostPlus.Studio.View.
4.4 Creating a CMOST Project
To create a new CMOST project:
1. In the menu bar, click File | New | Project. The Create New Project dialog box is
displayed:

NOTE: As shown above, required fields are outlined in red. As well, move the pointer over
the icon to view tips and information about the field setting.
2. Enter the fields in the Create New Project dialog box, as follows:
Project name (required): Enter the desired project name, any
combination of keyboard characters, including spaces.
Base Dataset (required): Click Browse to the right of the field, browse to
and select the base dataset, and then click Open. The location of the base
dataset will be displayed. The SR2 files should also be in this location.
Copy base case to project folder: If you select this check box, the base
case files will be copied into the project .cmpd folder. If you do not
select it, then the base case files will be used from their original location.
Project Location: Click Browse, then browse to and select the folder,
and then click Open.
Project file: The project .cmp file will automatically be entered based on
Project name and Location.
Project folder: The project .cmpd folder will automatically be entered
based on Project name and Location.
Comments: Enter project comments as necessary. These comments will
be shown in the Study Manager tab.
Once you have filled in the required fields, the OK button is enabled as shown in
the following example:

3. Click OK to create and save the CMOST project file and project folder. The main
screen will now appear as shown in the following example:

(Screenshot of the Study Manager tab: project comments copied in from the comments entered in the Create New Project dialog box; no studies created or imported yet; study information area.)

As shown above, the comments you entered in the Create New Project dialog box
are carried forward to the Project comments area in the Study Manager tab.

The base dataset files are copied into the project folder, which is designated with
the extension .cmpd for CMOST project directory. The following shows an
example of the folders and files that are created after you create a project:

BaseDatasetFiles,
copiedin
ProjectFile
ProjectFolder
contains

4.5 Using the Study Manager
Through the Study Manager tab, you can:
Create a new study
View study details
Change the name of a study
Add an existing study to the project
Load/unload a study
Exclude a study
Import data from a study
Copy a study
NOTE: Some Study Manager functions, such as creating a new study or adding an existing
study, are accessible through the File menu and by right-clicking in the study view; however,
the procedures in this section are based on access to these functions through the buttons and
icons in the Study Manager tab.
4.5.1 To Create a New Study
1. Click the New Study button in the right side of the Study Manager tab, the New
Study button in the toolbar, or click File | New | Study in the menu bar. The
New Study dialog box is displayed:
NOTE: As shown above, required fields are outlined in red. As well, move the pointer over
the icon to view tips and information about the field setting.
Fill in the fields as follows:
Name: Enter a name for the study. Once you click OK, a study icon is
added to the Study Manager tab, and a study tab created, both based on
this name.
Base dataset: Browse to and select the base dataset; for example, if you
have copied in the dataset, you should browse to and select the copy.
Automatically create master dataset (.CMM) using the base dataset: If
you select this check box, a master dataset will automatically be created
from the base dataset. The master dataset will be saved in the same folder
as the base case. If you do not select this check box, you will need to
specify the master dataset through the General Properties subnode.
Type: From the drop-down list, select the study type, one of:

NOTE: You can later change the study type through the Engine Settings node.
If you click the button to the right of Advanced Settings, you can access
the following study settings:

Special dictionary file: CMOST supports special simulator versions,
such as STARS-ME. If a special dictionary file is required to process
SR2 files produced by a special simulator, you will need to select
Special dictionary file required for the study and then browse to and
select the dictionary file.
SR2 processing stack size: Stack size (MB) used by the SR2 reader to
read SR2 files. The default stack size is 40 MB.
2. Click OK. The new study is created, in particular:
New study files and folders are created, as shown in the following example:

(Screenshot of the new study files and folders: the study folder, initially empty, will contain study VDR, IRF, MRF and LOG files, as specified in the Simulation Settings node; the study master dataset, if Automatically create master dataset (.CMM) using the base dataset was selected; the CMOST study file; and, if there is an error during the run, the .bak file that CMOST will try to save the study file to.)

NOTE: As shown above, if there is an error during the run, CMOST will try to save the study
file to a .bak file. The .bak file is the last valid file and it has the same format as a study file.
A study tab is added to the project view as shown in the following
example, which has the General Properties page open:

(Screenshot of the study tab, showing the Input node icons, study notes, and the study tree view.)

The study tree view contains the following primary nodes:
Input: Through this node and its subnodes, you create and edit input
data for the study.
Control Centre: Through this node and its subnodes, you configure,
run, monitor and control the CMOST study.
Results and Analyses: Through this node and its subnodes, you can
view and analyze the results of the CMOST study.
To open study node pages, you can click the node or subnode in the
study tree view or, in the case of the Input node page, shown above,
click the subnode.
In the Input page, numerical information may be superimposed on an
icon. The following example indicates that two date times have been
defined in the Characteristic Date Times node:

Status icons may also be superimposed on the icons that precede each
node and subnode name in the study tree view:
Error: Errors have been identified with the node or subnode. Click the Validation tab at the bottom of the node pane for further information.
Warning: There is a warning about the settings or configuration of certain nodes. In the case of the Simulation Settings node, this icon is displayed if you have not selected a scheduler. Click the Validation tab at the bottom of the node pane for further information.
Information: The page contains information that may be useful for the user. Click the Validation tab at the bottom of the node pane for further information.
No errors or warnings: There are no errors or warnings associated with the current node or subnode.
 A study icon is added to the Study Manager tab, as shown in the
following sample project, which has five studies: one sensitivity analysis
(SA) study [shown selected], two history match (HM) studies, one
optimization (OP) study, and one uncertainty assessment (UA) study:

(Screenshot: study icons, study tabs, and information about the selected study, OPAAT.)

4.5.2 To View a Study
Click the study icon to view study details at the bottom of the page.
NOTE: If Auto load is checked, then when you open the project, the study will automatically
be loaded, regardless of its prior status. If Auto load is not checked, the study will revert to
the Load/Unload status that it had when you exited the last session.
Double-click the study icon, or right-click the icon and then select Go to Study, to view the
study tab.
4.5.3 To Change the Display Name of a Study
Click and then edit the name of the study in the associated icon to change its display name.
This changes the name on the study icon and study tab, but does not change the names of the
associated study files and folders.
4.5.4 To Add an Existing Study to the Current Project Session
Click the Add Existing button or click File | Open | Existing Study in the menu bar to open
a Windows Explorer window, browse to the existing study, which has to be in the project
folder, and then click Open.
4.5.5 To Load/Unload a Study
You can unload or remove a study from the current project session when, for example:
You need to reduce the amount of memory used by the project.
Project contains studies that have been run and do not need to be run again.
Project contains studies that are out of date or no longer valid.
Once a study is unloaded:
 Study icon in the Study Manager will change to indicate that the study has been
unloaded, as shown below.
Study tab is removed from the session and the study details are not accessible.
Study cannot be copied to a new study.
Data cannot be imported from the study.
To unload a study, right-click its icon and then select Unload, or click the study and then
click the Unload button. The study icon will change to indicate that it is now unloaded:

(Icons: Study Unloaded and Study Loaded.)

To load the study, right-click the study icon and then click Load.
4.5.6 To Exclude a Study
You can exclude a study from the project, in which case it will not be viewable in the project
screen and will not use any computer memory or processing. You may, for example, have
created and run a study and no longer need to view it:
1. Right-click the study then select Exclude or click the study icon and then click the
Exclude button on the right side of the Study Manager tab. In either case, you
will be asked if you want to remove the selected record(s).
2. Click Yes. The study icon is removed from the Study Manager tab, and the study
tab is removed. The study folder and files are not deleted so the study can be added
back into the project later.
You can also exclude multiple studies through the menu bar, as follows:
1. Click File | Exclude Existing Studies. The Exclude Existing Studies dialog box
is displayed:

2. In the To Exclude column, select the study (or studies) you want to exclude and
then click OK. The study is removed from the project view but the study folder
and files are not deleted. You can add the study back into the project at any time
through the Add Existing button in the Study Manager tab, as outlined above.
4.5.7 To Import Data from a Study
You can import data from one study (loaded in the same project) into another:
1. Right-click the study that you want to import data to then select Import. The
Import Study Data dialog box is displayed, showing the studies that are loaded
and the fields available for import:

(Screenshot of the Import Study Data dialog box, showing the loaded studies and the fields available for import into the selected study.)

2. Select the source study.
3. Set the fields you want to import to True and the ones you do not want to import to
False, as shown in the following example:

(In this example, Parameterization and the Experiments Table will be imported from the OPAAT study.)

4. Click OK. In the above example, Parameterization data and the Experiments
Table are imported from the OPAAT study into the selected study.
4.5.8 To Copy a Study
You can copy a (loaded) study to a new study in the project, as follows:
1. Right-click the study that you want to copy and then select Copy to New Study. The
Change Name dialog box is displayed, with New study name set to the original
study name appended with a copy number, as shown in the following example:

(Change New study name as necessary.)

2. As desired, change New study name to a name that is not already being used and
then click OK. A new study will be created that is identical to the original study,
i.e., all data is preserved. A new study icon is added to the Study Manager tab and
a study tab is added.
4.6 Common Screen Operations and Conventions
The information displayed varies, depending on the study node; however, a number of
operations are common across all pages:
4.6.1 Buttons and Icons
When you click a button with a drop-down arrow beside the label, it will open a drop-down list of options, as
shown below:

4.6.2 Plots
4.6.2.1 To Copy an Image to the Clipboard
To copy the image of a graph to the clipboard:
1. Right-click anywhere in the plot then select Copy Image to Clipboard.
2. In the target application (Word, for example), click Paste (or CTRL+V).
4.6.2.2 To Save an Image
To save a plot to an image file:
1. Right-click anywhere in the plot then select Save Image.
2. In the dialog box, select the desired file type, browse to the folder, and then click Save.
4.6.2.3 About Data Points and Curves
Data curves and points are displayed in graphs using the following conventions.

(Legend: Base Case, Field History, General Solution, Verification Test, Highlighted Experiment, Training Experiment, Optimal Solution.)

If you move the pointer over a data point, it will be changed to red and the point's data values
will be displayed. If you click the data point, it will be changed to have a black border.
4.6.2.4 To Highlight an Experiment
To highlight an experiment, right-click the associated data point or curve then select
Highlight the Experiment. If you highlight an experiment on one graph, then it will be
highlighted where the experiment appears in other graphs. This allows you to view the results
of an experiment across multiple graphs. To cancel, right-click the highlighted point or curve,
then clear Highlight the Experiment. In some cases, you may need to click to refresh the
graph. You can also highlight or unhighlight experiments through the Experiments Table.
4.6.2.5 To Zoom In and Out of Plots
When viewing plots of CMOST Result Observers, you can zoom into any area of the plot, as
shown below.
To zoom in:
You can zoom in two different ways. First, you can define the area that you want to magnify:
1. In the plot, click the lower left corner of the area you want to zoom in on, then drag
the cursor to the upper right of the area, for example:
2. Release the mouse button. The zoomed-in section will be displayed, for example:

If you have a mouse with a wheel button, you can zoom into an area, while maintaining the
position of the x and y coordinates of the cursor. Move the cursor to the place in the plot that
you want to maintain as fixed then rotate the wheel to zoom in.
To zoom out to full size:
Right-click the plot and then select Un-zoom to 100%.
4.6.3 Names
Names must be entered for each defined parameter, objective function, objective function term,
time-series observer, and fixed date observer. The following guidelines must be followed:
First character of a name must be a letter. Remaining characters in the name can be
letters, numbers, and underscore characters.
Names are case sensitive.
Spaces are not allowed. Underscore characters may be used as word separators, for
example, perm_h and perm_v.
Simulator keywords can be used as parameter names. This has advantages for
users who are familiar with simulator keywords because it will clarify the meaning
of such defined parameters.
Do not use the following names because they are internal keywords used by
CMOST: this, Status, Dataset, Scheduler, Computer, Pattern, Source, Average,
Maximum, Minimum, and Target.
Names must be unique; that is, if a name is already used for a parameter, it should
not be used for an objective function or a result observer.
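For illustration, a few hypothetical names checked against these guidelines (the specific names are examples only, not taken from the guide):

Valid: perm_h, PERMH_L1, KvKhRatio, OilPrice2
Invalid: 2ndLayerPerm (first character is not a letter), perm h (contains a space), oil-price (contains a character other than letters, numbers, and underscores), this (internal CMOST keyword)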
4.6.4 Required Fields
In dialog boxes, required fields have red borders, for example:

(Screenshot: required fields have red borders; move the pointer over the icon for a tip.)

4.6.5 Default Field Values
In default tables, when data is set to its default value, the field name will be bolded, as shown
in the following example:

(Screenshot: fields that are not set to their default values, and fields that are set to their default values.)

4.6.6 Tab Display
If you right-click the Study Manager or a study tab, the following commands are available:
Close: You cannot close the Study Manager tab, but you can close study tabs. By
closing a study tab, you are closing the display of the study tab; i.e., you are not
deleting the study. To re-open a closed study tab, double-click its icon in the Study
Manager tab, or right-click the icon and then select Go to Study. You can also
close a study tab by clicking Close in the tab.
Float: A tab can be floated from the main screen, and it can then be dragged to any
location within the main CMOST screen. You can float a tab in the following ways:
- Drag the study tab up or down. As you do this, the study tab will change
to a dialog box.
- Right-click the tab and then select Float. Drag the study dialog box to
the desired location.
Dock: To dock a floating dialog box, right-click the study dialog box and then
select Dock.
New Horizontal Tab Group, New Vertical Tab Group: You can organize tabs
into different groups, in either horizontal or vertical groupings, as shown in the
following example:

You can have a combination of horizontal and vertical tab groups, and you can
move tabs between groups. Click the Active Files button at the end of a tab
group to view a list of tabs in the group and to select one of them. Regardless of
which tab group the Study Manager is in, it displays all studies in the project.
4.6.7 Tables
4.6.7.1 To Insert, Delete and Repeat Rows
In general, click Insert to enter a row in a table below a selected row. Click Delete
to delete the selected row from the table. Click Repeat to repeat the
selected row but, for example, with a different Origin Type, if available.
4.6.7.2 To Enter Cell Data
Depending on the table cell, you will enter cell data in one of several ways:
In some cases, the contents of the cell cannot be directly modified, for example,
the specifications for a file that you have entered.
In some cases, when you click the cell, you are able to open a drop-down list of
options. Double-clicking cells with drop-down lists will also open the list. Select
the desired option.
In some cases, you type in the cell contents directly.
4.6.7.3 To Adjust Table Columns
In tables you can drag the sides of columns to adjust their widths.
4.6.7.4 To Order Table Headings
You can click on table headings to order the rows in the table, for example:
Rows are displayed in the order in which they were originally entered.
Rows are displayed in ascending order of the items in the selected column.
Rows are displayed in descending order of the items in the selected column.
4.6.7.5 To Organize Table Rows and Columns
Some tables support sorting of rows on the basis of one or more columns, as illustrated in the
following example:
1. Open a study to the Experiments Table:

2. You can reorder the columns by dragging the column headings to the left or right.
In the following example, we have moved the parameter columns (ModKH1,
ModCH13, and so on) to the left:

NOTE: If you export the Experiments Table to Excel, the column ordering will be
maintained.
3. You can drag column headings to the area above the table to hierarchically organize
the experiments. In the following example, we are sorting the experiments on the
basis of first ModKH1 and then ModKH13:

4. You can close groupings of rows to focus on certain groups of experiments. In the
following example, we have only opened rows with experiments where
ModKH1=0.25 and ModKH13=0.25:

4.6.8 Validation tab
The Validation tab provides details of outstanding errors and warnings, as shown in the
following example:

(Screenshot: an error that a value is needed for a Global Objective Function Name; a warning that, in the Simulation Settings page, no scheduler is set to Active; and information that, because a parameter has been added, there are experiments that will need to be reprocessed.)

NOTE: Errors must be resolved before you can start the CMOST engine. Warnings are
optional.
4.7 Closing CMOST
1. You can click Save at any time to save your changes without closing the CMOST
session.
2. When you want to end the session, click the Close button at the top right of
the main screen or click File | Exit in the menu bar. You will be asked if you want
to save your changes. Click Yes.
5 Creating and Editing Input Data
5.1 Introduction
The following sections describe the configuration of the CMOST pages used to create and
edit input data, and prepare it for CMOST runs.
5.2 General Properties
Through the General Properties page, you enter the general data and files that will be used
in the project studies. This page needs to be filled in for all study types.
The General Properties page is shown below:

(Screenshot of the General Properties page, with callouts for: the table of imported field data; a button to import a field history file; a button to edit the master dataset in the CMM Editor; a button to import a well log file; read-only data from the base SR2 file; a field to browse to and enter the master dataset; buttons to reload all field data files and to remove all field data files from the study; a field for the location of the base dataset; a button to reload the SR2 files; a button to view/edit advanced (study) settings; and fields for the location of the base session and 3tp files.)

5.2.1 General Information Area
Unit system for reading SR2: Specifies the SR2 output data units. It does not affect
the input and output data units used by the simulators. Therefore, the CMOST unit
system only affects the data units of objective functions and observers defined in the
study file. For example, if the unit system is chosen as SI, the unit for oil rate will be
m3/day and the oil price for NPV calculation will be $/m3. Similarly, if the unit
system is Field, oil price will be $/bbl.
Master dataset relative path: The master dataset can be created automatically
when you create the new study or you can choose to use an existing master dataset
file. The study master dataset (.cmm) is a version of the base dataset that has been
modified to instruct CMOST where to enter parameter values into the dataset. The
Master dataset relative path can be entered manually or by using the Browse
button. If the file is not in the study folder, CMOST will copy the file into the
folder automatically. The master dataset is a required component.
Use the Edit button to open the master dataset in the CMM Editor for editing. Refer
to the Master Dataset section for more information about the master dataset file.
Base dataset relative path: The base dataset is entered when you create the new
study. The base dataset is a required component. The Base dataset relative path
can be entered manually, or by clicking the Browse button. See the Base Dataset
section for more information about this file.
NOTE: Changes made in the base dataset will not be reflected in the master dataset or vice
versa. Each must be edited separately.
Base session file relative path: Base session and base 3tp files are not required
but are useful for analyzing simulation results, since they can be used as the basis
for displaying plots in Results Graph and Results 3D of simulation runs created by
CMOST. See the Base Session File section for details.
5.2.2 Base SR2 Information Area
The information in this area, which is read only, provides CMOST with basic information
about the dataset, including the type of simulator that was used, and the simulation start and
end dates. It also points to the base SR2 files, which can be displayed in CMOST plots for
comparison purposes.
If changes are made in the master dataset that were not included in the SR2 files when the
study file was created, new SR2 files must be imported to update different sections of the
study file.
For example, if a new well is added to the master dataset after the study file was created, new
SR2 files must be imported if the user wants CMOST to use results from the new well. To do
this, the base dataset will need to be updated to contain the new well and then be run through
a simulator. The new SR2 files can then be imported by clicking the Reload SR2 button.
5.2.3 Field Data Information Area
In this area, you can import field history and well log files into the study folder. These files
are most often required for history matching of objective functions, but they can also be used
during sensitivity analysis. You can reload these files if they have been changed, or remove
them from the study.
5.2.4 Advanced Settings
You can click the Advanced button to open the Advanced Settings dialog box:

NOTE: Select fields in the Advanced Settings table to display information about the settings,
as shown in the above example. These settings are entered when a new study is created.
Stack Size (MB) for SR2 Reader: Stack size (MB) used by the SR2 reader to read
SR2 files. The default stack size is 40 MB.
Special Dictionary File Full Path: This needs to be specified only if a special
dictionary file is required to read the SR2 files; otherwise, leave this field blank.
Formula Coding Language: This setting applies to all formulas used in the study.
Only JScript is supported in the current version of CMOST.
Data Compression Algorithm: Data compression algorithm used by CMOST
object serialization and deserialization, either Deflate or NoCompression.
SR2 Data Filtering: If checked, data filtering will be carried out on the SR2 data
to reduce data redundancy.
Validate User-Defined Jscript Formulas: If checked, CMOST will validate user-
defined script formulas when starting the engine. If a validation error is found, the
engine will not be started.
5.3 Fundamental Data
Through the Fundamental Data pages, specify the simulator output data series that you are
using in your study.
5.3.1 Original Time Series
Original time series are time series data produced directly from simulator SR2 files. To enter
an original time series in a study, select the Fundamental Data | Original Time Series node.
The Original Time Series page displays, as shown in the following example:

(Screenshot of the Original Time Series page, with callouts for: the table of original time series used in the study; for the selected original time series, a plot comparing data produced by the simulator (black) with field data (blue circles); the plot legend and source files for simulator output and field data; and a tab to view field data for the selected original time series, if available.)

Original Time Series Table:
- To add an original time series to the table, first click the Insert button to
insert a row and then select values for Origin Type, Origin Name and
Property from the drop-down lists in the corresponding cells. If there are
already rows in the table, clicking Insert will insert a row below the
selected (shaded blue in the above example) row.
- To delete an original time series, select the row and then click Delete to
remove it from the table. You can use the SHIFT and CTRL keys to
select multiple rows for deletion. When you are prompted to confirm the
deletion, click Yes to proceed, No to cancel.
- To repeat a row, select the row and then click Repeat. This only works if
there is more than one origin name for the origin type. You will be
prompted with a list of origins of the same type and with the same
property defined, which do not already appear in the table.
Base Case and Field Data Plot:
- For the selected origin, the plot compares data from the simulator output
file (base SR2 files) with field data, if available. This plot shows how close
the simulator output is to the field data for the selected original time series.
- If you click a point on a field history curve (if one is available), the point
will turn red and its date, data value, and HM weight will be displayed,
as shown in the following example:

Field Data tab: Click the Field Data tab at the left of the plot to open a table of
field data for the selected origin, for example:

As shown above, the table contains a HM Weight for each data point, which is initially
set to the default value, 1. HM Weight is used in the calculation of the History Match
Error. If you select a point (row) in the Field Data table, the corresponding point will be
highlighted in the plot in red with a black border. If you reduce the value of the HM
Weight for the point, the size of the data point on the plot will also be reduced. This
provides a visual indication of the importance of each of the field history data.
Through the Field Data table, you can edit the HM Weight for any field data
points by one of the following methods:
- Type the new weight directly into the cell and then click another cell.
- Select and then right-click a row or multiple rows in the table. The
Modify weight to dialog box is displayed:
Enter the new HM Weight and then press Enter. The HM Weight will
be updated in the table and in the plot.
5.3.2 User-Defined Time Series
Through the User-Defined Time Series page, you can define a time series that is not directly
available from the SR2 files, but which can be derived from available SR2 data.
To illustrate, consider the following example, where we define a cumulative GORSC time series:
1. Select Input | Fundamental Data | User-Defined Time Series.
2. In the User-Defined Time Series table at the top of the page, click Insert to
insert a new row in the table.
3. Select the new row and then define the new time series, as shown in the following
example:

- Name: A name for the new time series, CumGORSC in the above
example.
- Calculation Start, Calculation End: The start and end dates for the new
time series.
- Calculation Frequency: The times at which you want the time series to
be defined, one of:
Every Common Data Point: Time series data will be calculated at times where
the data required for the calculation is available, between Calculation Start
and Calculation End.
Every Minute, Every Hour, and so on: Time series data will be calculated at
the times specified between Calculation Start and Calculation End. Since
original time series data may not be available at all of these times, the
calculation may be based on the setting of Transformation.
- Transformation: If data is not available at all of the desired points, it
will be derived using the transformation method, as shown below:
None: Time series data will only be calculated if data is available for the date.
Numerical Integration: User-defined time series data is determined using the
numerical integration method, in which case you will need to specify the
Numerical Integration Option (one of Backward Rectangle, Forward Rectangle,
or Trapezoidal Rule) and the Time Interval Unit.
Numerical Differentiation: User-defined time series data is determined using the
numerical differentiation method, in which case you will also need to specify the
Numerical Integration Option (one of Backward Difference, Forward Difference,
or Central Difference) and the Time Interval Unit.
Moving Average: User-defined time series data is determined using the moving
average method, in which case you will need to specify the Moving Average Window.
- Unit Label: Units for the user-defined time series. These units will be
displayed in the Base Case and Field Data Plot Preview. In our
example, the units are ft3/bbl.
4. Specify the original time series data that will be used to calculate the user-defined
time series, using the drop-down lists for Origin Type, Origin Name, and
Property, and by typing in the VarName. The VarName is the name that will be
used for the variable in the formula. In the example, we have inserted two time
series, Cumulative Gas SC (VarName CumGas) and Cumulative Oil SC (VarName
CumOil), as shown below:

5. In the Formula pane, enter the formula for the user-defined time series data where
indicated. Refer to Formula Editor for information about entering CMOST
formulas. Our example is shown below:

(Screenshot of the Formula pane, showing the variables that are available for the formula to use and the formula for the user-defined time series.)
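The formula from the original figure is not reproduced in the extracted text; as a minimal sketch, assuming cumulative GOR is simply cumulative gas divided by cumulative oil and using the VarNames defined above, the formula could be written as:

CumGas / CumOil

Depending on the gas and oil volume units reported in the SR2 files, a unit-conversion factor may also be needed to obtain ft3/bbl.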
6. Using the drop-down lists in the field data table at the lower left, specify the field
data Origin Type, Origin Name, and Property Name associated with the user-
defined time series, as shown for our example:

7. To view the fit between the user-defined time series calculated fromthe base case and
the field data, click the Base Case and Field Data Plot Preview tab. For our example:

This plot shows, for the user-defined time series, a comparison of data obtained
from the SR2 files and field history data.
8. If you click the Field Data tab on the left, you will open a table of the field history
data you are comparing with the user-defined time series:
Through this table, you can set HM Weight for each data point to values between
0 and 1, for use in the calculation of the history match error, as described
in History Match Error.
5.3.3 Property vs. Distance Series
A property vs. distance series can be used to calculate a history matching error. Property vs.
distance data is retrieved from the SR2 files and compared with data from a well log file. The
relative error between the simulated data (SR2 file) and field data (well log file) is calculated.
To create a property vs. distance series:
1. Click Fundamental Data | Property vs. Distance Series.
2. In the Property vs. Distance Series page, click Insert to insert one or more
property vs. distance series in the table. The Insert Property vs. Distance
Definitions dialog box is displayed:

3. Select the desired option, one of:
- Insert new property vs. distance definitions: Select the number of
items you want to insert then click OK. The rows will be entered with
default settings. You will have to enter the Well Name and Property and
then adjust the other settings as necessary.
- If you select Insert multiple property vs. distance definitions using
existing selected well log records, available well log records will be
displayed for selection. As noted in the dialog box, only well log records
with standard property names (as defined in the CMG dictionary file) are
supported by CMOST. Select the well log records then click OK.
4. Configure the settings in the table, as follows:
- Name: Enter a name for the property vs. distance data, for example,
GasRateRC_2001_12_17, which incorporates both the property and the
date.
- Well Name: In the drop-down list of available wells (available origin
names for the WELLS origin type), select the one that you want to use
for the property vs. distance series.
- Property: Select the property. Not all properties in the drop-down list
may have property vs. distance or well log data available, in which case
they cannot be used for this analysis. If the data does not exist, it will not
be displayed in the plot.
- Log Date Time: Specify the date time on which you want to define the
property vs. distance series. Again, the data will be available for certain
date times only.
- Data Path: Specify the method used to retrieve property vs. distance
data from the SR2 files, one of Well Log, Linear Path, Well Path, or
Trajectory. If Trajectory is selected, the source of the trajectory for the
well must be specified, by clicking the Trajectory button.
See To import a trajectory file for information on how to import a well
trajectory file. If a trajectory file is not available, the default option Well
Path will be shown.
- TVD or MD: Specify either TVD (true vertical depth) or MD (measured
depth) as the distance coordinate.
- Use Block Center: Indicate whether the spatial property is to be read at
block center only. If Use Block Center is not selected, spatial properties
are read at block entry and exit points.
- Use Accumulation Flow: Specify whether fluid volumes should be
accumulated as you travel upwards from the deepest point in the well path.
- Use Normalized Flow: Specify whether fluid volumes are accumulated
and then normalized with the total value as you travel upwards from the
deepest point in the well path.
- Smoothing Method: Select a method for smoothing the property vs.
distance data, one of None, Moving Average, Linear Aitken, Akima, or
Cubic Spline. Refer to Smoothing Methods for further information about
these smoothing algorithms.
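As a rough illustration of what the Moving Average option does, the hedged JScript sketch below averages each point with its neighbours over a centred window. CMOST's actual implementation, including its treatment of the end points, may differ; the other algorithms are described in Smoothing Methods.

// Illustrative centred moving average over a window of 2*halfWidth + 1 points.
// This only sketches the idea behind the Moving Average option; CMOST's own
// smoothing (and its end-point handling) may differ.
function movingAverage(values, halfWidth) {
    var smoothed = [];
    for (var i = 0; i < values.length; i++) {
        var lo = Math.max(0, i - halfWidth);
        var hi = Math.min(values.length - 1, i + halfWidth);
        var sum = 0;
        for (var j = lo; j <= hi; j++) {
            sum += values[j];
        }
        smoothed[i] = sum / (hi - lo + 1);
    }
    return smoothed;
}

WScript.Echo(movingAverage([1, 5, 3, 7, 5], 1).join(", "));   // 3, 3, 5, 5, 6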
To import a well trajectory file:
1. Click the Trajectory button. The Well Trajectory Files dialog box is
displayed.
2. Click Insert to insert a row in the table:



3. Enter the trajectory file details as shown below, and then click OK.
- File Type: Select the trajectory file type from the drop-down list.
- File Path (Relative to Project Folder): Enter the file or relative path
and name of the trajectory file, relative to the project folder.
- 2nd File Path (Relative to Project Folder): Some trajectory file formats
(Production Analyst Format, for example), have two files, an XY file and
a Deviated file. Enter the full or relative path and name of the second file
in this cell.
- XY Unit: Select the units used in the trajectory file to specify the x and y
coordinates.
- MD/Z Unit: Select the units used in the trajectory file to specify MD
(measured depth) and the z coordinates.
- Auto Traj Cleanup: Select to eliminate surplus trajectory nodes while at
the same time preserving deviation data.
An example of a populated Property vs. Distance Series page is shown below:




5.3.4 Fluid Contact Depth Series
If the SR2 files contain fluid saturation data, CMOST can calculate gas-oil, water-oil, and water-
gas contact depths at defined well locations. These depths, which are calculated for each time
step that the fluid saturation data is available, are used as time series data for history matching.
To set up a fluid contact depth series:
1. Select the Fundamental Data | Fluid Contact Depth Series page. The Fluid
Contact Property table contains, for user selection, the fluid contact types that are
available for CMOST studies:

2. In the Calculate column, select the fluid contact property or properties that you
want to calculate.
3. Configure the fluid contact property as follows:
- Calculate: If selected, CMOST will calculate the contact depths at the
well location(s).
- Fluid Contact Property: The read-only name of the fluid contact
property, used in preview and observer plots.
- Saturation Property: Define the saturation type, one of SG, SO, or SW.
The saturation type will be used by the algorithm to determine if there is
a phase transition along the length of the well.
- Porosity Type: Define the porosity type, one of Matrix or Fracture.
- Method: Select the calculation method, one of Predefined Threshold,
Maximum Derivative over n-Point Moving Average, or Maximum
Difference Between Previous and Next n-Point Averages (where n is
defined by N Smoothing Points). For a description of these calculations,
refer to Results Graph User Guide, in the section Using Results
Graph | Working with Curves | Adding Curves | To create a fluid
contact depth parameter. (A rough sketch of the Predefined Threshold
idea follows this procedure.)
- N Smoothing Points: This parameter is used in the calculation of
Maximum Derivative over n-Point Moving Average, or Maximum
Difference Between Previous and Next n-Point Averages.
- Threshold: This parameter is used in the calculation of Predefined
Threshold. It can be set to any value between and including 0 and 1.
- First/Last: If this field is set to First, then the first point that meets the
criterion is used as the contact depth. If the field is set to Last, then the
last point that meets the criterion is used as the contact depth.
- MD/TVD: Select true vertical depth (TVD) or measured depth (MD).



- Data Path: Defines the path through the grid, one of:
Along Perforation: This option can be used if the data file contains well
perforations. If LAYERXYZ well data is available, a well path is defined
by joining perforation block entry and exit points. Alternatively, if
LAYERXYZ is not available, a well path is defined by joining each
perforation from block center to block center. Distances will be relative
to the depth of the first perforation.
Along Trajectory: This option can be used if a trajectory file is available.
Click the button to open the Well Trajectory Files dialog box
and import this file. Refer to To import a well trajectory file for further
information.
4. Beside Preview Fluid Contact Data For, select the well for which you want to
view the fluid contact depth series, for example:

As shown above, the predicted fluid contact data is shown compared with field
history data. If you click the Field Data tab on the left, you can adjust the HM
Weight setting for each data point.
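As a rough illustration of the Predefined Threshold idea, the hedged JScript sketch below scans hypothetical saturation values along a well path and reports the depth of the first (or last) point that meets a threshold. CMOST's exact criterion (direction of the crossing, interpolation between points, and so on) may differ; the Results Graph User Guide remains the authoritative description.

// Hedged sketch of the Predefined Threshold idea: scan saturation values along
// the well path and report the depth of the first (or last) point at which the
// saturation meets the threshold.
function contactDepth(depths, saturations, threshold, useFirst) {
    var found = null;
    for (var i = 0; i < depths.length; i++) {
        if (saturations[i] >= threshold) {
            if (useFirst) { return depths[i]; }
            found = depths[i];   // keep overwriting to end up with the last match
        }
    }
    return found;   // null if no point meets the criterion
}

// Hypothetical SW profile versus measured depth.
var md = [500, 510, 520, 530, 540];
var sw = [0.15, 0.20, 0.45, 0.80, 0.85];
WScript.Echo(contactDepth(md, sw, 0.5, true));   // 530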



5.4 Parameterization
5.4.1 Parameters
Through the Parameters page, shown below, you enter parameters and specify their properties.
Parameters are generally first entered into the master dataset, then imported into CMOST (refer
to To import a parameter from the master dataset); however, in some cases, you may have
reason to define the relationships between parameters using intermediate parameters that are not
entered in the CMM file (refer to To add an intermediate parameter).

[Figure: Parameters page, with annotations: click to insert a row in the table; click to delete a row in the table; click to move the selected row up or down in the table; study parameters; click to create and edit study parameters in the CMM editor; click to import parameters from the CMM file; graph of the selected parameter's prior probability; parameter candidate values; click to open the CMM file in Builder.]

NOTE: For further information about opening and editing the CMM file in Builder, refer to
chapter Setting up CMOST Master Datasets in the Builder User Guide.
5.4.1.1 Adding New Parameters
To import a parameter from the master dataset:
1. Enter the parameters and their default values in the master dataset using the CMM
File Editor. See Names for more information about using names in CMOST.
2. Save the CMM file.
3. Import the parameters from the master dataset into the Parameters table by
clicking the Import button. If the master dataset is large, it may take
several seconds for CMOST to read and find all of the parameters. For further
information refer to Importing Parameters from the Master Dataset.



In the Parameters table, the imported parameter names and default values will be
displayed, consistent with those in the master dataset. You should only change
these through the master dataset.
4. Edit the parameter fields in the Parameters table as outlined below:
Name: The name of the parameter is imported from the master dataset.
You should not need to change the parameter name.
Comment: As necessary, record any pertinent information about settings,
assumptions and rationale.
Active: When you import a parameter from the master dataset, Active
will be checked by default. The Active check box determines whether
CMOST will vary the value of the parameter when substituting it into the
master dataset. If Active is checked, CMOST will assign candidate
values to the parameter. If Active is not checked, CMOST will assign the
default value to the parameter for every experiment that is run.
Default Value: The parameter default value is imported from the master
dataset. The default value should produce the original value that is
entered in the base dataset. This value is only used when Active is not
checked. If you need to edit a default value, click the Edit
button, edit the default value in the master dataset, and then import the
new parameter default value.
If a more complicated CMOST formula is used, the default value may
not be equal to the original value in the dataset. For example, if the
following CMOST formula was entered into the master dataset:
<cmost>this[1]=LOG10(Keq)</cmost>
the default value for the parameter Keq will be 10 since this value
produces the original value of 1 when the CMOST formula is evaluated.
Source: This column tells CMOST how to assign values to a parameter.
Source can be set to one of:
- Continuous Real: In this case, you will need to configure the settings in
the Candidate Values area at the lower left of the page:
[Figure: Candidate Values area. Annotations: if True, changes made to the
Data Range Settings will be synchronized with the Prior Distribution
Function settings and vice versa; used in Uncertainty Assessments to
generate Monte Carlo statistics; refer to the guidelines in step 5.]
As you make changes, these will be reflected in the Prior PDF graph that
is displayed to the right of the settings area.
- Discrete Real: In this case, you will need to enter a table of candidate
discrete real values, and will be given the option of assigning a prior
probability to each. See the note below.
- Discrete Integer: You will need to enter a table of candidate integer
values and a prior probability for each. See the note below.
- Discrete Text: You will need to enter a table of text values, a unique
numerical value for each text value, and optionally, a prior probability
for each value. See the note below.
- Formula: You will be able to enter a formula for the variable using the
Formula Editor.
NOTE: The prior distribution is used to generate values for uncertainty assessments; i.e., the
parameter values that are generated for the uncertainty analysis will follow this distribution.
If you enter prior probability values, a Prior PDF graph will be displayed to the right.
When deciding what source type to use, consider the following guidelines:
Continuous or Discrete Real: Use for parameters that can have decimal values,
such as those representing porosity or permeability. This is the default
source type.
Discrete Integer: Use for parameters that cannot have decimal values, such as
those representing rock types or a block location.
Discrete Text: Use for parameters that have text values. The values should
always be enclosed in double quotation marks (for example, "OPEN"). The
Import option will automatically assign the Discrete Text type to any
parameter that has a default value enclosed in double quotation marks in the
master dataset.
Formula: Formulas can be entered for a parameter, in which case a formula
edit session will appear. Any of the other parameters can be used in the
formula, as well as any CMOST function. Refer to Formula Editor for more
details on entering formulas for parameters.
5. In the case of real, integer and text values, a Candidate Values table is displayed at
the lower left of the Parameters page. Through the Candidate Values table, enter
parameter values that you want CMOST to substitute into the master dataset. The
candidate values that you choose will depend on the study type, as outlined below:



Sensitivity Analysis: For sensitivity analysis studies, to investigate main
(linear) effects, only two values need to be entered for each parameter. If
interaction effects and non-linear (quadratic) effects are also to be studied,
at least three different values will need to be entered for each parameter.
All values should be within the reasonable range for the property represented
by the parameter, and all sample values should be different.
History Matching and Optimization: An unlimited number of discrete entries, or
two values (lower and upper limits) for continuous entries, can be added to
the Candidate Values table for history matching and optimization studies;
however, it will take longer for the optimizer to converge on a solution as
more candidate values are added.
Uncertainty Assessment: For an uncertainty assessment study, to capture
interaction effects and non-linear (quadratic) effects, three different values
are required for each parameter. The low value should represent a value near
the lower limit for that parameter, the high value should represent a value
that is near the upper limit, and the middle value should be somewhere in the
middle of the range.
NOTE: If the parameter type is Discrete Text, an equivalent numerical value will have to be
added with the candidate value. If there is a value that fits with the text value, that number
should be entered.
To add an intermediate parameter:
NOTE: Starting with CMOST 2013, a parameter has to be Active to be used as an intermediate
parameter.
1. Click Insert to enter a new row in the Parameters table.
2. Enter the name of the intermediate parameter and set the Source to Formula.
3. Define the formula for the intermediate parameter.
4. Define the formula for the parameter or parameters that depend on the value of the
intermediate parameter.
To illustrate, assume Parameter_A and Parameter_B are already defined in the master dataset.
Let us also assume that Parameter_B is a function of an intermediate variable C, as follows:
Parameter_B = (Intermediate_Parameter_C)^2




Intermediate_Parameter_C is, in turn, a function of Parameter_A, as follows:
Intermediate_Parameter_C = 4 log(Parameter_A)
The Parameters table for the above would appear as follows:

where Intermediate_Parameter_C is defined through the formula editor as:

and Parameter_B is defined as:

The benefit of this feature is more evident when the relationships are more complex.
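Assuming, purely for illustration, that the logarithm in the relationship above is base 10, the stand-alone JScript sketch below reproduces the chain of calculation that the two Formula entries perform. It is not the exact Formula Editor syntax; refer to Formula Editor for the functions that are actually available.

// Stand-alone illustration of the dependency chain above (not the exact
// Formula Editor syntax). A base-10 logarithm is assumed for the example.
function intermediateParameterC(parameterA) {
    return 4 * (Math.log(parameterA) / Math.LN10);   // 4 * log10(Parameter_A)
}
function parameterB(parameterA) {
    var c = intermediateParameterC(parameterA);
    return c * c;                                    // Intermediate_Parameter_C squared
}

WScript.Echo(parameterB(100));   // 4 * log10(100) = 8, squared = 64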
5.4.1.2 Prior Probability Distribution Functions
Prior Probability Distribution Functions are only available for continuous real numbers. The
distribution given in this section should represent the probability that specified values can
occur. There are several different distribution types available for CMOST:
Unspecified
Deterministic
Uniform
Triangle
Normal
Log Normal
Custom
All of the associated prior probability distribution function configuration items must be filled
in for each parameter. More information for each of the prior probability distribution function
types can be found in the Probability Distribution Functions section.
5.4.1.3 Deleting a Parameter
In the Parameters table, select any cell in the row of the parameter to be deleted then click
the Delete button. If you delete a parameter, the corresponding column will be
deleted in the Experiments Table, however, you will need to manually delete the parameter in
the CMM file (if the parameter is used in the file).
Multiple parameters can be removed from the table by clicking the cell to the left of the parameter
Name and then dragging the cursor up or down. SHIFT and CTRL functionality is also
supported. The Delete button or DELETE key can then be used to delete the rows.



5.4.1.4 Moving Parameters in Table
To move a parameter row up or down in the Parameters table, select any cell in the row that
you want to move, then click the Up or Down button. Parameters can be listed in any
order.
NOTE: Click the table headers to order the parameters alphabetically in ascending or
descending order.
5.4.1.5 Copying Parameter Data
You can copy data from one parameter to another, as follows:
1. In the Parameters table, right-click the number of the parameter whose data you
want to copy.
2. Select Copy Parameter Data to Other Parameters to open the Copy Data to
Parameters dialog box, shown in the following example:

3. Select the parameters to which you want to copy the data and then click OK.
The data that will be copied is summarized in the following table:
Continuous Real to Continuous Real: Data Range Settings, Discrete Sampling and
Prior Distribution Settings
Discrete Real to Discrete Real: Real Value and Prior Probability
Discrete Integer to Discrete Integer: Integer Value and Prior Probability
Discrete Text to Discrete Text: Text Value, Numerical Value and Prior
Probability
Formula to Formula: JScript Code
5.4.1.6 Editing a Master Dataset
The master dataset can be opened from the Parameters page to view or edit it. To open the
master dataset in the CMM File Editor, click the Edit button.



5.4.1.7 Importing Parameters from the Master Dataset
CMOST can automatically copy all parameters that are present in the master dataset to the
study file. To do this, click the Import button (it may take a few seconds for
CMOST to read large master dataset files). If the this[OriginalValue] syntax is used in the
master dataset, the default value will usually be copied in from the file. Refer to Adding New
Parameters for further information. CMOST will assume that the default value is equal to the
original value when importing parameters from the master dataset. The imported default
values should be checked since this may not always be the case.
The parameter source type should also be checked for errors. When importing, CMOST will
set any parameters that have original values with quotation marks to Discrete Text. All other
parameters will be set to Continuous Real, so parameter types may need to be changed
manually after importing.
The Active check box is automatically checked when you import a parameter; however, Prior
Probability Distribution Functions will have to be entered.
5.4.2 Parameter Correlations
Using the Monte Carlo method, CMOST generates experiment sample sets for use in
uncertainty assessments, for parameters which have been set, through the Parameters page,
to Continuous Real and with defined prior probability distributions. When proxy models are
used to calculate objective functions (i.e., simulators are not used), the number of
experiments can be very high. In CMOST, the number of UA experiments for Monte Carlo
Simulation Using Proxy is set to 65,000 (not configurable).
Through the Parameter Correlations page, CMOST can algorithmically adjust the rank
correlation of the Monte Carlo-generated sets of parameters so they honour the desired rank
correlation settings. This requires that all parameters entered in the table of rank correlations
have prior probability distributions.
For further information, refer to Parameter Correlation.



If you have not yet specified rank correlation values, the Parameter Correlation page will
be similar to the following:

[Figure: Parameter Correlation page before any desired rank correlations are
specified. Annotations: table of desired rank correlations (none entered);
table of rank correlations of the Monte Carlo-generated samples; click Apply
Changes if you have made changes to the desired rank correlation matrix; the
rank plot shows no rank correlation for the selected sample pair (POR, PERMV);
the samples plot shows no R2 correlation for the selected sample pair
(POR, PERMV).]

In the above example:
The (POR, PERMV) cell in the Realized Rank Correlation table on the right has
been selected.
The realized rank correlations are very close to the desired rank correlations. Click
the Apply Changes button whenever you make changes to the desired rank
correlation. When you click Apply Changes, the table of realized rank
correlations, and the associated plots, will be refreshed.
In our example, the prior probability of POR is normally distributed, with a mean
of 0.28 and a standard deviation of 0.03. The prior probability of PERMV is also
normally distributed with a mean of 2400 and a standard deviation of 387. These
prior probabilities are consistent with the Actual Monte Carlo Samples plot.
No desired rank correlation for (POR, PERMV) has been specified and this is
reflected in the Rank Monte Carlo Samples plot.



If you now define a desired rank correlation for (POR, PERMV), the UA samples will be
algorithmically ranked accordingly. This is illustrated in the following example:

[Figure: Parameter Correlation page after the desired (POR, PERMV) rank
correlation has been changed from 0 to 0.9. Once Apply Changes is clicked, the
sample sets are algorithmically changed to realize the desired rank
correlation; the rank plot shows the desired rank correlation, and the plot of
sample (POR, PERMV) pairs now shows R2 correlation.]

In the above example:
The relative ranking of the samples has been algorithmically adjusted to be
consistent with the desired rank correlation.
Sample sets are now ready for uncertainty assessment.
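For readers unfamiliar with rank correlation, the hedged JScript sketch below computes the Spearman rank correlation of two sample sets, which is the quantity reported in the Realized Rank Correlation table. The POR and PERMV values are hypothetical, ties are ignored for simplicity, and the algorithm CMOST uses to adjust the samples is not shown.

// Hedged sketch of the rank (Spearman) correlation reported for a pair of
// Monte Carlo sample sets. Ties are ignored; CMOST's own implementation may
// treat them differently.
function ranks(values) {
    var indexed = [];
    for (var i = 0; i < values.length; i++) {
        indexed.push({ v: values[i], i: i });
    }
    indexed.sort(function (a, b) { return a.v - b.v; });
    var r = [];
    for (var k = 0; k < indexed.length; k++) {
        r[indexed[k].i] = k + 1;   // rank 1 = smallest value
    }
    return r;
}

function rankCorrelation(x, y) {
    var rx = ranks(x), ry = ranks(y), n = x.length;
    var sumD2 = 0;
    for (var i = 0; i < n; i++) {
        var d = rx[i] - ry[i];
        sumD2 += d * d;
    }
    // Spearman's formula (no ties): 1 - 6*sum(d^2) / (n*(n^2 - 1))
    return 1 - (6 * sumD2) / (n * (n * n - 1));
}

// Hypothetical POR and PERMV samples.
var por   = [0.25, 0.28, 0.31, 0.27, 0.30];
var permv = [2100, 2450, 2800, 2300, 2600];
WScript.Echo(rankCorrelation(por, permv));   // 1 (perfectly co-ranked samples)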
5.4.3 Hard Constraints
You can define hard and soft constraints for history matching and optimization studies. The
purpose of hard constraints is to prevent unnecessary simulation runs when defined
constraints are violated. Refer to Soft Constraints for information about soft constraints.
Hard constraints may be appropriate if you have many simulations to run. If a hard constraint
is violated, the simulation run will not take place because hard constraints are checked by
CMOST before starting each run.
For example, if a CMOST optimization study is set up to work with a SAGD case, it may be
known beforehand that the production wells involved should not be less than a certain distance
apart from each other:
W1_I - W2_I > 40
For this particular case, W1_I and W2_I are parameters that refer to the block address of wells
W1 and W2 in the I direction, respectively. The constraint formula shown above
indicates that there should always be at least 40 grid blocks between W1 and W2 in the I
direction. If the condition W1_I - W2_I <= 40 is encountered, the simulation should not be
run. Both W1_I and W2_I were specified as parameters to be modified by CMOST in the
master dataset. This example is illustrated in the following procedure:



To specify a hard constraint:
1. Open the Parameterization | Hard Constraints page.
2. Click the button. A hard constraint is entered in the Constraints table.
You can edit the name and enter a comment, if necessary. Selecting Active
instructs CMOST to check for the constraint violation before each simulation run
is started.
3. When you create a hard constraint, a CMOST Formula Editor session is also opened.
Enter the formula for the hard constraint where indicated. The definition of a
constraint may require more than one line. Formulas can be entered using any of the
functions and variables available by clicking the button, for example:

The list of variables includes previously defined parameters. An alternate method
of entering the constraint formula is to manually type it in.
For our example:

[Figure: Formula Editor session for the hard constraint, showing the variables
used in the formula and the formula for the hard constraint.]




As entered, HardConstraint001:
W1_I - W2_I > 40
must be met or the simulation will not be run.
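The check that CMOST performs before each run is equivalent to evaluating the constraint expression with the candidate parameter values. The stand-alone JScript sketch below restates HardConstraint001 for illustration only; it is not the Formula Editor syntax.

// Illustrative pre-run check equivalent to HardConstraint001. CMOST performs
// this check itself before each run; the function only restates the logic.
function satisfiesHardConstraint(w1I, w2I) {
    return (w1I - w2I) > 40;
}

// Hypothetical candidate values for the two well locations.
WScript.Echo(satisfiesHardConstraint(120, 60) ? "run" : "skip");   // run  (120 - 60 = 60 > 40)
WScript.Echo(satisfiesHardConstraint(90, 60) ? "run" : "skip");    // skip (90 - 60 = 30)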
5.4.4 Pre-Simulation Commands
Occasionally, users may need to modify the datasets created by CMOST before they are
submitted to a scheduler. For example, users may want to adjust variogram parameters in
history matching. In this case, an external geological modeling package, such as GOCAD, is
used to generate porosity and/or permeability arrays for each dataset created by CMOST.
After that, CMOST will submit a simulation job using the modified dataset. The process is
illustrated in the following diagram:

Pre-simulation dataset processing commands are used to modify datasets before they are
submitted to a simulator:
1. Commands will be executed sequentially once the original dataset is created by
CMOST.
2. CMOST will wait for each command to exit before starting the next command.
3. CMOST will submit a simulation job using the final dataset, once all commands
are executed.



Some important information regarding pre-simulation commands:
If a relative path is used to specify the path to the executable, it should be based on
the directory of the project file.
The working directory will be set to the study directory when CMOST runs the
executable.
Command execution sequences are determined by their order in the
Pre-Simulation Dataset Processing Commands (Commands) table.
5.4.4.1 Adding a New Pre-Simulation Dataset Processing Command
Click Insert to insert a new pre-simulation dataset processing command in the
Commands table. Three types of commands can be added: Run CMG Builder Silently,
Run GOCAD Silently and Run User Defined Command.
Name
The command name is used to identify each command. Command names should be unique if
multiple commands are used. Command names are case sensitive.
Type
There are three types of pre-simulation commands:
Run CMG Builder Silently
Run Builder to perform formula calculations specified in the dataset, or update
relative permeability tables.
Run User Defined Command
Execute a user-defined application to generate a job name tagged dataset by
modifying the source dataset file. The source dataset can be generated from
CMOST or the previous dataset processing command.
Run GOCAD Silently
Trigger the Paradigm GOCAD program to carry out calculations using the
GOCAD script and workflow (optional) file. GOCAD output files are exported and
used as include files in datasets.
Active
If the Active check box is selected, CMOST will execute the pre-simulation
dataset processing command; otherwise, the command will not be executed.
Maximum Execution Time
CMOST will wait for a command to exit before running the next command (or submitting a
job if the last command is executed). Maximum execution (waiting) time (in minutes) can be
set for each command. If a command has not finished executing within its maximum
execution time, the CMOST engine will stop and an error message will be displayed in the
Engine Events table on the Control Centre page.



5.4.4.2 Moving Commands in Table
To move a command up or down in the Pre-simulation Dataset Processing Commands
table, select any cell in the row of the parameter that needs to be moved, and then click the
or buttons. Commands may be listed in any order, as required.
NOTE: Command execution sequences are determined by their order in the Pre-simulation
Dataset Processing Commands table.
5.4.4.3 Deleting a Command
To delete a command, the entire command row must be selected. To do this, click the grey
cell to the left of the command name. The entire row will become highlighted and the Delete
button will be enabled. Click the Delete button to delete the row. The DELETE
key can also be used.
Multiple commands can be removed by clicking on the grey cell to the left of a
command's name and dragging the cursor up or down. SHIFT and CTRL functionality
is also available.
The Delete button or DELETE key can then be used to delete the rows.
5.4.4.4 Run Builder Silently Command Configuration
Builder can perform formula calculations specified in the dataset, or update relative permeability
tables. Once it is added, no further configuration needs to be done for the Run Builder Silently
command.

5.4.4.5 Run User Defined Command Configuration
Users can write their own program (executable) to modify the dataset file before it is sent to a
simulator. CMOST provides user-defined commands with input information, such as
Experiment Name Tagged Parameter File (.par), Experiment Name Tagged Temporary
Dataset File (.tmp), or Experiment Name Tagged Dataset File (.dat). The user's program
can use these files as needed to modify the original dataset (Experiment Name Tagged
Dataset File or Experiment Name Tagged Temporary Dataset File) to generate a new
version of the experiment name tagged dataset file to be submitted to a simulator.
The Experiment Name Tagged Dataset File must be used as the output file for a Run User
Defined command.



5.4.4.5.1 Executable Path
Set the path of the executable (.exe) file. Type in the path or click the Browse
button to locate the executable.
5.4.4.5.2 Command Line Switches for the Executable
Enter any command line switches for the executable if needed.
5.4.4.5.3 Command Line Argument Switches
Select an Argument Switch cell to input argument switches required by an argument file.
5.4.4.5.4 Command Line Argument File Type

Experiment Name Tagged Dataset File (.dat)
The CMOST dataset file can be used as the input or output file for a generic command. When
it is used as the input file, it refers to the CMOST-generated original dataset file or the
modified dataset as a result of the previous command.
NOTE: A job name tagged dataset file must be used as the output file for a Run User
Defined command.
Experiment Name Tagged Temporary Dataset File (.tmp)
The temporary dataset file has the same content as JobNameTaggedDatFile, but with a
different extension. For example, file MyWork_00008.tmp has the same content as
MyWork_00008.dat.



Experiment Name Tagged Parameter File (.par)
This file contains parameter values for a specific job, for example:
File name: MyWork_00008.par
Porosity 0.09
Kv_kh_ratio 0.25
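As a minimal sketch of what a user-defined pre-simulation command might look like, the WSH/JScript program below reads the Experiment Name Tagged Parameter File and appends the parameter values to the Experiment Name Tagged Dataset File as CMG comments. It assumes that the two file paths are passed as the first and second command line arguments; the actual arguments depend on the switches you configure, and a real program would edit dataset keywords rather than append comments.

// Minimal WSH/JScript sketch of a user-defined pre-simulation command.
// Assumption: the command is configured so that its two arguments are the
// Experiment Name Tagged Parameter File (.par) and the Experiment Name Tagged
// Dataset File (.dat).
var fso = new ActiveXObject("Scripting.FileSystemObject");
var parPath = WScript.Arguments(0);   // e.g. MyWork_00008.par
var datPath = WScript.Arguments(1);   // e.g. MyWork_00008.dat

// Read the parameter values for this experiment.
var parFile = fso.OpenTextFile(parPath, 1);   // 1 = ForReading
var parText = parFile.ReadAll();
parFile.Close();

// Append the parameter values to the dataset as a comment block so that the
// modification is visible; a real program would edit keywords instead.
var datFile = fso.OpenTextFile(datPath, 8);   // 8 = ForAppending
datFile.WriteLine("** Modified by pre-simulation command");
datFile.WriteLine("** " + parText.replace(/\r?\n/g, "\n** "));
datFile.Close();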
5.4.4.6 Run GOCAD Silently Command Configuration
Paradigm GOCAD software can export SGrid object data to CMG include files (.inc) using
GOCAD scripts. Both geometric and property information can be exported.

CMOST uses the Run GOCAD Silently command to link with GOCAD. CMOST will
trigger GOCAD to run a specific script and (optional) workflow file to generate CMG include
files (.inc), then CMOST will use the generated include files in the simulation dataset file.
Procedure for setting up the link:
1. Prepare the GOCAD files for the Run GOCAD Silently command, including the
GOCAD project (.gprj) file, GOCAD Master Script (.script) file and (optional)
GOCAD workflow (.xml) file. For more information on preparing GOCAD Master
Script file, please refer to Preparing GOCAD Master Script File for Run GOCAD
Silently Command.
2. Edit the CMOST master dataset (.cmm) file. Insert GOCAD generated include
files at the desired place in the dataset file, for example:
<cmost>outputPoro</cmost> **Porosity
<cmost>outputKi</cmost> **Permeability I
PERMJ EQUALSI **Permeability J



3. Set up the Run GOCAD Silently command.
3.1 Add a Run GOCAD Silently command
3.2 Set GOCAD .gprj, .script, .xml files as shown above.
3.3 Extract CMOST parameters within the .script and .xml files by clicking
the Extract button. New CMOST parameters will be added in
the parameter list in the Parameters page.
5.4.4.6.1 Preparing GOCAD Master Script File for Run GOCAD Silently Command
1. There are different property export commands that are supported in GOCAD;
however, for the CMOST-GOCAD link, only the write_sgrid_as_CMG_ascii_file
export command line can be used in the master script file to export reservoir
geometry or properties to CMG include file (.inc) format.
2. At least one write_sgrid_as_CMG_ascii_file export command line must be used in
the GOCAD Master Script file. In addition, CMOST keywords must be used as the
output file name in at least one export command line, for example:
Gocad on SGrid GridName write_sgrid_as_CMG_ascii_file File_name
"<cmost>outputPoro</cmost>" origin 0 switchIJ 0 vertical_scaling
1 horizontal_scaling 1 save_geometry 0 use_deadcell 0 properties
"POR+Porosity1" lgr_scenario "";
3. Only one property can be output in each export command line. If more than one
property needs to be exported, multiple export command lines can be used, for
example:
Gocad on SGrid GridName write_sgrid_as_CMG_ascii_file File_name
"<cmost>outputPoro</cmost>" origin 0 switchIJ 0 vertical_scaling
1 horizontal_scaling 1 save_geometry 0 use_deadcell 0 properties
"POR+Porosity1" lgr_scenario "";
gocad on SGrid GridName write_sgrid_as_CMG_ascii_file File_name
"<cmost>outputKi</cmost>" origin 0 switchIJ 0 vertical_scaling 1
horizontal_scaling 1 save_geometry 0 use_deadcell 0 properties
"PERMI+PermH1" lgr_scenario "";
4. CMOST keywords can be used in other parts of the GOCAD Master Script file as required.
5. A workflow file is optional. Use the following command to load the master
GOCAD workflow file if needed:
gocad load_xml_parameters name "Property_study_loaded" file
"Example_GOCAD_Master_XML_File.xml"
6. Use the following command at the end of the Master Script file to quit GOCAD:
gocad quit really true
5.4.4.6.2 Extract Parameters from GOCAD Master Files
After you click the Extract button, CMOST will copy all parameters that are
present in GOCAD script and XML files to the task file. If the this[OriginalValue] syntax
is used, the default value will also be copied from the file.



The following is an example of extracting a CMOST parameter from a GOCAD export
command line:
Gocad on SGrid GridName write_sgrid_as_CMG_ascii_file File_name
"<cmost>This[poro.inc]=outputPoro</cmost>" origin 0 switchIJ 0
vertical_scaling 1 horizontal_scaling 1 save_geometry 0 use_deadcell 0
properties "POR+Porosity1" lgr_scenario "";
After extraction, the source of the parameter outputPoro is automatically set to
FORMULA, the default value is poro.inc and the formula value is set to be
JobName + " outputPoro.inc". We recommend that this formula value not be changed.
5.5 Objective Functions
Through the Objective Functions node, you define the expressions or quantities that you want
to minimize or maximize. In the case of history matching, for example, you will likely want to
minimize the error between field data and simulation results. In the case of optimization, you
may want to maximize net present value. For further information, refer to Objective Functions.
The requirement to configure subnode pages in the Objective Functions node depends on the
purpose of the study. Regardless, at least one of the following must be configured:
Basic Simulation Results
History Match Quality
Net Present Values
Advanced Objective Functions
For example, if you want to perform a sensitivity analysis on NPV, the Basic Simulation
Results page need not be filled in, however, you will need to configure the Net Present
Values page. Likewise, if you want to examine the effect of input parameters on Cumulative
Oil, you would need to configure the Basic Simulation Result page. Finally, you may want
to define and analyze an advanced objective function if you want to calculate an objective
function using Excel, user-defined code, or a user-defined executable.
5.5.1 Characteristic Date Times
Through the Characteristic Date Times page, shown below, you can specify dates on which
you want to calculate the values of objective functions and terms. Defined dynamic date
times can be used as objective functions:



[Figure: Characteristic Date Times page, annotated with the four types of date
times: date times automatically populated from the base SR2 files; fixed date
times manually entered by the user; dynamic date times entered by the user,
based on an original time series; dynamic date times entered by the user, based
on a user-defined time series.]

As shown above, four types of dates can be defined through the Characteristic Date Times page:
Built-in fixed date times: Date times that are automatically populated from the
base SR2 files, BaseCaseStart and BaseCaseStop in the above example.
Fixed date times: Specific dates named and entered by the user, by clicking Insert
to the right of the table and then entering the Name and Date Time Value, as
shown in the following example:

You can enter the Date Time Value in one of two ways:
1. Click the current Date Time Value then, using the TAB key to move
between the entries, type in the year, then the month, and so on.
2. Use the calendar drop-down to select the date, as shown below:




Dynamic date times from original time series: Dates that are based on the value
of the data in an original time series; for example, the date on which the
cumulative oil produced by a certain well exceeds a certain value.
In the following example, dynamic date Date_1 is the first date after BaseCaseStart
on which the cumulative oil produced by well PRO-1 exceeds 1000 m3.


Dynamic date times from user-defined time series: Dates that are based on the
value of the data in a user-defined time series. In the following example,
DynamicDateTime001 is defined as the date when CumGOR, a user-defined time
series, was greater than 5 for the last time.

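The logic behind such a dynamic date time is simply a scan of the time series for the first (or last) report date on which the condition is met. The stand-alone JScript sketch below illustrates the idea with hypothetical dates and cumulative oil values; CMOST evaluates the condition from the SR2 data for each experiment.

// Illustrative logic behind a dynamic date time: the first report date on
// which a time series exceeds a threshold (here, cumulative oil > 1000 m3).
// The arrays below are hypothetical.
function firstDateAbove(dates, values, threshold) {
    for (var i = 0; i < values.length; i++) {
        if (values[i] > threshold) { return dates[i]; }
    }
    return null;   // the threshold is never exceeded
}

var dates  = ["2011-01-01", "2011-02-01", "2011-03-01"];
var cumOil = [400, 900, 1300];
WScript.Echo(firstDateAbove(dates, cumOil, 1000));   // 2011-03-01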

5.5.2 Basic Simulation Results
Refer to the general discussion at the beginning of Objective Functions for a discussion of the
requirement to configure the subnode pages in the Objective Functions node.
Through the Basic Simulation Results page, you can define objective functions that read
directly from original time series or user-defined time series. These results can then be used
as objective functions or to define global objective function candidates and soft constraints.
You can define the following types of objective functions:
Basic Simulation Result from Original Time Series: Through this table, enter
results that are derived directly from an original time series in the SR2 files. In the
following example, we have defined basic simulation results ProducerCumOil,
ProducerCumWater and InjectorCumWater on Characteristic Time
YearEnd2011 for the origin and properties noted:



Basic Simulation Result from User-Defined Time Series: Through this table,
you can define results that are derived from a user-defined time series. In the
following example, we have defined CumGORSC_YearEnd2011 as the value of
the user-defined time series CumGORSC on characteristic date YearEnd2011:

Characteristic Time Durations: The calculated duration between two
characteristic times. In the following example, Duration001 is the duration, in
days, between BaseCaseStart and YearEnd2011:

5.5.3 History Match Quality
Refer to the beginning of Objective Functions for a discussion of the requirements for
configuring the subnode pages in the Objective Functions node.
History match quality indicates the fit of simulation results with historical data, such as field
history files. For further information, refer to History Match Error.
The procedure for defining a history match quality function is as follows:
1. Open the Objective Functions | History Match Quality page:



[Figure: History Match Quality page. In the Global HM Error Definitions area,
define the global history match error; in the Local History Match Error
Definitions table, define and specify the local history match errors that you
will use to calculate the global history match error; in the tabs below, define
and specify the terms used to calculate the selected local history match
error.]

2. In the Global HM Error Definitions area, define the global HM error:
Global HM Error Name: Enter any acceptable name that provides a
clear description of the global objective function.
Unit Label: Units displayed with the global HM error. This setting,
which defaults to "%", is optional and does not affect the calculated
value of the global HM error.
Calculation Method: Select one of Weighted Average or Get Maximum.
If set to Weighted Average, CMOST will average all of the local HM
errors used to calculate the global HM error using the Weight for each
local HM error used, as follows:

Global HM Error = Σ_i (w_i × LHME_i) / Σ_i w_i

where LHME_i is the value of local HM error i and w_i is its weight. (A worked
sketch of this calculation follows this procedure.)
If Get Maximum is selected, the global HM error will be equal to the
largest of the local HM errors.
3. In the Local History Match Error Definitions table, define the local history match
errors that will be used to calculate the global history match error, as follows:
Name: Names of local HM errors must be unique.
Unit Label: The units that should be displayed with the local HM error
function. This setting, which defaults to "%", is optional and does not
affect the calculated value of the global HM error.



Active: The Active check box determines whether or not CMOST will
use the local HM error function to calculate the global HM error. If
Active is checked, CMOST will use the error; otherwise, the error will
be calculated but its result will not be used to calculate the global HM
error. All inactive local HM errors will act as if they are observers.
Weight: Weight will give the local HM error more or less emphasis in
the global HM error if Weighted Average is selected for the Calculation
Method. The higher the Weight relative to the weight of other local HM
errors, the more emphasis the local HM error will have on the global HM
error.
HM Error Calculation Method: Select the HM Error Calculation
Method, one of Weighted Average or Get Maximum. If set to Weighted
Average, CMOST will average all of the terms used to calculate the local
HM error using the Term Weight for each term used, as follows:

Local HM Error = Σ_i (w_i × t_i) / Σ_i w_i

where t_i is the value of term i and w_i is its weight.
If Get Maximum is selected, the HM error will be equal to the largest of the
term errors.
4. In the Local history match error definitions table, select the first local history
match error. As appropriate, select the Original Time Series Terms used in the
calculation of the local history match error, as shown in the following example:

Insert a row for each original time series term and enter the fields, as follows:
Origin Type: From the drop-down list, select the type of data that will
be retrieved for the local HM error term, one of WELLS, GROUPS,
SPECIALS, SECTORS, LAYERS, or LEASES. All Origin Types come
from the simulation results files.
Origin Name: The choices in the drop-down list are based on your selection
of Origin Type. If there are no items corresponding to the Origin Type, the
Origin Name list will be empty. If this is the case, that Origin Type cannot
be used. For example, if Origin Type is WELLS, Origin Name will contain
a list of all the wells that are present in the dataset.
Property: The choices in the drop-down list are based on your selection
of Origin Name. For example, if Origin Type is WELLS, the Property
cell would have a list of well properties such as Cumulative Oil SC or
Gas Rate SC.



Start Time: Select the characteristic date from which the data analysis
should start. This date must be between the simulation start date and End
Time. The default date that is entered is the simulation start date.
End Time: Select the characteristic date on which CMOST will stop
analyzing data. This date must be between Start Time and the simulation
stop time. The default date that is entered is the simulation stop time.
Reset Cumulative: In some history matching problems, users may find that
parts of the field data records are unreliable because measurements have
either not been made or have been estimated. The result is that reliable parts
of the record are intermingled with unreliable records. For example, in the
following figure, the black portions of the cumulative water curve are
considered reliable and the red portion is not in other words, we do not
know what has really happened to water produced over the red period.

[Figure: Plot of Cumulative Water SC (m3), from 0 to 250 m3, versus Time
(days), from 0 to 500 days, with the characteristic dates BaseCaseStart,
FixedDateTime001, FixedDateTime002 and BaseCaseStop marked. The black
(reliable) and red (unreliable) portions of the curve referred to in the text
are indicated.]

To match the cumulative water curve in the above plot, we need to define
history matching error terms for the two time periods of the black portion,
and ignore the time period for the red portion, which is deemed unreliable.
To use the valid cumulative data correctly we also need to be able to
calculate and use the individual cumulative for each time period, which
means starting each time period at zero cumulative and working with the
delta of cumulative for each point within that segment. This correction is
needed even if the cumulative quantities (e.g. cumulative oil/water/gas) are
derived from rate quantities (e.g. oil/water/gas rate). To enable this
correction, users can select Reset Cumulative for history matching error
terms that contain erroneous historical data.



For the above cumulative water curve, history matching error Water1 can
be defined as:

and Water3 can be defined as:

From BaseCaseStart to FixedDateTime001, history matching error term
Water1 is defined. It is optional to set Reset Cumulative for this term
because the first four months of cumulative water data are deemed reliable.
For the next five months (from FixedDateTime001 to
FixedDateTime002), which is the red portion of the above cumulative
water curve, no history matching error term is needed because the data in
this time period is unreliable and should be ignored.
Absolute Measurement Error: Used to indicate the accuracy of the
production data. The value is considered to be half of the absolute error
range, which means that if the simulated result is between (historical value
- ME) and (historical value + ME), the match is considered to be satisfactory
(or perfect because it is within the range of measurement accuracy). More
information about how Measurement Error is used to calculate the history
match error can be found in the History Match Error section.
Term Weight: The Term Weight gives the local HM error terms more
or less emphasis. The higher the Term Weight relative to the other
terms' term weights, the more emphasis the term will have on the local
HM error. Normally, higher term weight should be given to wells that are
important (good production, long history, near future development wells)
and difficult to match.
Normalization: For information about this parameter, refer to History
Match Error.
5. As appropriate, select the User Defined Time Series Terms used in the
calculation of the local HM error, as shown in the following example:




Insert a row for each user-defined time series term in the table and populate the
cells, as follows:
User-Defined Time Series: Select the series, which you have already
entered through Fundamental Data | User-Defined Time Series.
Start Time: As described above for Original Time Series Terms.
End Time: As described above for Original Time Series Terms.
Absolute Measurement Error: As described above for Original Time
Series Terms.
Term Weight: As described above for Original Time Series Terms.
Normalization: As described above for Original Time Series Terms.
6. Select the Property vs. Distance Terms tab and, as appropriate select and specify
property vs. distance terms used in the calculation of the local HM error, as shown
in the following example:

Insert a row for each property vs. distance term in the table and populate the cells,
as follows:
Property vs. Distance Name: Select the series, which must already have been
entered through Fundamental Data | Property vs. Distance Series.
Absolute Measurement Error: As described above for Original Time
Series Terms.
Term Weight: As described above for Original Time Series Terms.
Normalization: As described above for Original Time Series Terms.
7. Repeat steps 4-6 for the other local HM errors.
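To make the Weighted Average calculation method concrete, the stand-alone JScript sketch below applies the formula from step 2 to three hypothetical local HM errors; exactly the same arithmetic is used when term errors are combined into a local HM error in step 3.

// Worked illustration of the Weighted Average calculation method, using
// hypothetical local HM errors and weights. The same formula is applied when
// combining term errors into a local HM error.
function weightedAverage(values, weights) {
    var num = 0, den = 0;
    for (var i = 0; i < values.length; i++) {
        num += weights[i] * values[i];
        den += weights[i];
    }
    return num / den;
}

// Three local HM errors (in %) with weights emphasising the first well.
var localErrors = [12.0, 8.0, 20.0];
var weights     = [2.0, 1.0, 1.0];
WScript.Echo(weightedAverage(localErrors, weights));   // (24 + 8 + 20) / 4 = 13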
5.5.4 Net Present Values
Refer to the beginning of Objective Functions for a discussion of the requirements for
configuring the subnode pages in the Objective Functions node.
Through the Net Present Values page, you can define a Field NPV global objective function,
as shown below:



[Figure: Net Present Values page. In the Field NPV Definition area, define the
Field NPV; in the Local NPV Definitions table, define and specify the Local
NPVs that you will use to calculate the Field NPV; in the tabs below, define
and specify the terms used to calculate the selected Local NPV.]

To illustrate the configuration of the Net Present Values page, consider the following example:

[Figure: NPV example for the new well NewWELL1, spanning BaseCaseStart,
Prediction Start and BaseCaseStop. The cash flow terms for NPV_newW1 are:
continuous monthly income from NewWELL1 Oil Rate SC; continuous monthly cash
outlay for NewWELL1 Water Rate SC; discrete cash outlay newW1_L3_UBAK; and
discrete cash outlay newW1_nLayers. Notes: (1) cash flow terms are discounted
back to BaseCaseStart at a monthly rate calculated from yearly discount rates
(all equal to 0.1 in this example); (2) present values for cash flows are
summed to determine NPV_newW1; (3) the Field NPV is determined from the sum of
the local NPVs: NPV_newW1 + NPV_newW2 + NPV_oldWells.]




The following procedure shows how to implement the above global NPV
objective function:
1. In the Field NPV Definition area, shown in the example below, define the Field NPV:

Field NPV Name: Enter a unique, descriptive name for the Field NPV.
Unit Label: Enter the units you want to display with the Field NPV; for
example, you may want to use M$ to indicate that net present values are
in millions of dollars. This setting is optional and does not affect the
calculated value of the Field NPV.
Calculation Method: This field is set to Sum of Active Net Present
Values and cannot be changed. The Field NPV is the arithmetical sum of
the active Local NPVs.
2. In the Local NPV Definitions table, enter and configure the Local NPVs that you
will use to calculate the Field NPV. For our example:

3. Select the first Local NPV by clicking the grey cell to the left of the row and fill in
the fields, as follows:
Name: Names of Local NPVs must be unique within the study.
Unit Label: The units that should be displayed with the Local NPV. This
setting does not affect the calculated value of the Field NPV.
Active: The Active check box determines whether or not CMOST will
use the Local NPV to calculate the Field NPV. If Active is checked,
CMOST will use the Local NPV; otherwise, the Local NPV will be
calculated but its result will not be used to calculate the Field NPV. All
inactive Local NPVs will act as if they are observers.
NPV Present Date: Select the characteristic date time to which the
future cash flow terms are to be discounted to reflect, for example, the
time value of money.
Property Filter: This works as a filter to help users set the cash flow
terms that belong to this Local NPV; for example, if Daily Rate is selected,
only daily rate properties will be listed in the drop-down box in the
Property column of the cash flow terms.



Calculation Method: This field is set to Sum of Net Present Value
Terms and cannot be changed. The Local NPV is the arithmetical sum of
the net present values of the continuous and discrete cash flow terms.
4. Select the Continuous Cash Flow Terms tab and enter all of the continuous cash
flow terms needed to calculate the Local NPV, as shown below for NPV_newW1 in
our example:

Origin Type: From the drop-down list, select the type of data that will
be retrieved for the continuous cash flow term, one of WELLS, GROUPS,
SPECIALS, SECTORS, LAYERS, or LEASES. All Origin Types come
from the simulation results files.
Origin Name: The choices in the drop-down list are based on your
selection of Origin Type. If there are no items corresponding to the Origin
Type, the Origin Name list will be empty. If this is the case, that Origin
Type cannot be used. For example, if Origin Type is WELLS, Origin
Name will contain a list of all the wells that are present in the dataset.
Property: The choices in the drop-down list are based on your selection
of Origin Name. For example, if Origin Type is WELLS, the Property
cell would have a list of well properties such as Oil Rate SC Monthly or
Water Rate SC - Monthly.
Start Time: Select the characteristic date time of the first cash flow term.
End Time: Select the characteristic date time of the last cash flow term.
Yearly Discount Rate: Enter the discount rate in decimal form; i.e., 0.1,
not 10%.
Unit Value: Enter the value of a unit of the property. By convention,
revenues are positive, expenses are negative. In the above example, Oil
Rate SC Monthly is in units of barrels, which have a unit value of $60.
Conversion Factor: Enter the factor needed to convert the cash flow
terms into common units. In our example, we are converting cash flows
and discount values into millions of dollars. The unit value for Oil Rate
SC Monthly is $60 per barrel, so we need to multiply it by 0.000001 to
convert it into millions of dollars.
5. Select the Discrete Cash Flow Terms tab and enter all discrete cash flow items
needed to calculate the selected Local NPV into the table, as shown in the
following example:



Parameter: Enter a unique, descriptive name for the discrete cash flow
item.
Cash Flow Time: Select the characteristic date time on which the
discrete cash flow will take place.
Yearly Discount Rate: Enter the discount rate in decimal form; i.e., 0.1,
not 10%.
Unit Value: Enter the value of the discrete cash flow. By convention,
revenues are positive, expenses are negative. In the above example,
newW1_L3_UBAK has a unit value of -6000 (expense of 6000).
Conversion Factor: Enter the factor needed to convert the discrete cash
flow term into common units. In our example, we are converting cash
flows and discount values into millions of dollars. The unit values for our
examples are in dollars, so we need to multiply them by 0.000001 to
convert them into millions of dollars.
6. Repeat steps 3-5 for the remaining Local NPVs. Once done, you will have fully
defined the Field NPV objective function.
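To make the discounting arithmetic behind the continuous and discrete cash flow terms concrete, the stand-alone JScript sketch below evaluates individual cash flow terms. It assumes that the monthly discount rate is derived from the yearly discount rate as (1 + yearly rate)^(1/12) - 1, which may differ from CMOST's exact convention, and all of the figures are hypothetical.

// Hedged sketch of the NPV arithmetic described above. The monthly discount
// rate is assumed to be (1 + yearly rate)^(1/12) - 1; CMOST's exact
// discounting convention may differ.
function presentValue(cashFlow, yearlyRate, monthsFromPresentDate) {
    var monthlyRate = Math.pow(1 + yearlyRate, 1 / 12) - 1;
    return cashFlow / Math.pow(1 + monthlyRate, monthsFromPresentDate);
}

// Continuous term: 5000 bbl of oil produced in month 13, at a unit value of
// $60/bbl, converted to millions of dollars with a conversion factor of 0.000001.
var oilIncome = presentValue(5000 * 60 * 0.000001, 0.1, 13);

// Discrete term: a capital outlay of $6000 (unit value -6000) at month 0.
var capital = presentValue(-6000 * 0.000001, 0.1, 0);

// A Local NPV is the sum of its discounted terms; the Field NPV is the sum of
// the active Local NPVs.
WScript.Echo(oilIncome + capital);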
5.5.5 Advanced Objective Functions
Through the Advanced Objective Functions node, you can define advanced objective
functions using:
Excel spreadsheet calculation: You can configure CMOST to write parameter
values and simulation results to a specific worksheet and cell in an Excel
spreadsheet then read calculated objective function results from that spreadsheet.
User-defined source code: Use JScript code to calculate the objective function.
Refer to Using JScript Expressions in CMOST.
User-defined executable calculation: You can use a third-party application, such
as MATLAB to calculate and return the value of an objective function.
NOTE: For all three types of advanced objective functions, a Test button is provided
which will use the base dataset and parameter default values to calculate the advanced objective
functions.
5.5.5.1 To enter an Excel spreadsheet calculation
Before starting this procedure, create the Excel spreadsheet, so you know which cells
CMOST will write to and read from.



1. In the Advanced Objective Functions node, click Insert and then select Use
Excel Spreadsheet Calculation. A new advanced objective function is added to
the Advanced Objective Functions table using a unique default name. Tabs for
defining the interface to the Excel spreadsheet are presented below the table, as
shown in the following example:

2. In the Advanced Objective Functions table, enter the following:
- Name: Type in a name that describes the advanced objective function.
Name will be displayed in results plots, and will be used in global
objective function calculations.
- Unit Label: Enter the units of the advanced objective function. These
units will be displayed in results plots.
- Advanced Objective Function Type: This is read-only.
- Max. Execution Time (min): Enter the maximum execution time in
minutes that you want CMOST to use. If the operation has not finished
executing within the maximum execution time, the CMOST engine will
stop and an error message will be displayed in the Engine Events table
on the Control Centre page.



3. In the Objective Function From Excel tab, CMOST will read the value of the
objective function from the Excel spreadsheet as defined in the following example:

In the example, we have browsed to the Excel spreadsheet, and specified the
worksheet, column and row number of the cell from which CMOST will read the
objective function. The calculation will be performed for each experiment, and the
value of the advanced objective function will be displayed in the Experiments Table.
4. In the Write Parameter Values to Excel tab, specify the parameters that you want
to submit to the Excel spreadsheet, and to which worksheet, column, and row, as
shown in the following example:

In the above example, CMOST will write the value of INTOI used in the
experiment to worksheet Sheet1 of the Excel spreadsheet, to cell $A$1.
5. In the Write Simulation Results to Excel tab, specify the simulator results that
you want to submit to the Excel spreadsheet, and to which cell (worksheet,
column, and row), as shown in the following example:

In the above example, CMOST will write the value of Cumulative Oil SC at the
date time shown to the defined cell.






6. Click the Test button to test the Excel calculation. A Test.xlsx
spreadsheet will be created. You can open this spreadsheet to view and confirm the
correctness of the Excel calculation before starting the CMOST engine.
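For instance, a minimal spreadsheet for the INTOI and Cumulative Oil SC example used in this section might be laid out as shown below; the formula and its coefficients are hypothetical and only illustrate that the objective function cell should reference the cells that CMOST writes to:
Sheet1 cell $A$1: INTOI value (written by CMOST)
Sheet1 cell $A$2: Cumulative Oil SC at the specified date (written by CMOST)
Sheet1 cell $B$1: =A2*50-A1*3 (hypothetical calculation; this is the cell CMOST reads back as the advanced objective function)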
5.5.5.2 To enter a user-defined source-code calculation
1. In the Advanced Objective Functions node, click Insert and then select Use User
Defined Source Code Calculation. A new advanced objective function is added
to the Advanced Objective Functions table using a unique default name. In the
Advanced Objective Functions table, enter the following:
- Name: Type in a name that describes the advanced objective function.
The name will be displayed in results plots and will be used in global
objective function calculations.
- Unit Label: Enter the units of the advanced objective function. These
units will be displayed in results plots.
- Advanced Objective Function Type: This is read-only.
- Max. Execution Time (min): Enter the maximum execution time in
minutes that you want CMOST to use. If the operation has not finished
executing within the maximum execution time, the CMOST engine will
stop and an error message will be displayed in the Engine Events table
on the Control Centre page.
A table is provided for entering any fixed-time variables that you may need for
your JScript calculation, and a screen through which you will enter the JScript code.
Refer to Using JScript Expressions in CMOST for more information:







2. As necessary, insert and configure the fixed-time variables you will need in your
JScript calculation.
3. Where noted in the Source Code area, enter the JScript code needed to perform the
calculation.
4. Click the Test button to test the JScript code before starting the CMOST
engine. The calculation is working correctly if a value is returned in the Test
Calculation Results dialog box.
5.5.5.3 To enter a user-defined executable calculation
You can use a third-party application to read information from the simulation results files,
calculate the value of an objective function, and write this value to a file. CMOST can be
configured to read the value from the file then store it in the Experiments Table. To
illustrate, consider the following example, in which CMOST uses an executable,
GetMatBalanceError.exe, to calculate an advanced objective function, UserExeCal001:

The accompanying diagram shows CMOST and GetMatBalanceError.exe exchanging the dataset (.dat file), the simulator log file (.log file), and the result file through the numbered steps described below.

1. CMOST creates the experiment dataset, for example,
SAGD_2D_DynaGrid_Optimization_4_00001.dat.
2. The simulator runs the dataset, and generates the .log, .out, and SR2 files.
3. CMOST calculates the objective functions for the experiment. To calculate
objective function UserExeCal001, CMOST triggers executable
GetMatBalanceError.exe (located in folder
D:/ResearchProjects/GetMatBalanceError). CMOST passes the executable two
arguments: the file path to the .log file,
SAGD_2D_DynaGrid_Optimization_4_00001.log, and the file path for the result
file, SAGD_2D_DynaGrid_Optimization_4_00001.UserExeCal001.
4. GetMatBalanceError.exe reads the material balance error from the .log file.






5. GetMatBalanceError.exe writes the value of the material balance error to the
result file.
6. CMOST reads the value from the result file and sets it to UserExeCal001.
CMOST uses this value in algorithms, as appropriate, and inserts it into the
Experiments Table in the UserExeCal001 column.
To configure the above, the Advanced Global Objective page would appear as follows:

The page identifies the objective function to be calculated by the external executable, the file path to the external executable, the command line arguments in the required order, a preview of the command line, and any command line switches that are necessary.

NOTE: You can select command line arguments and then use the up and down buttons to arrange
them in the correct order.
In the Advanced Objective Functions table at the top of the page, select the user executable
calculation then click the Test button to verify that the calculation engine works
correctly and returns a value in the Test Calculations Results dialog box.
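The GetMatBalanceError.exe program in the example above could be written in any language. The following is a minimal sketch, in Python, of what such an executable might do; the text pattern used to locate the material balance error in the .log file is an assumption for illustration and does not reflect the actual CMG log format.

import re
import sys

def main():
    # CMOST passes two arguments: the .log file path and the result file path.
    log_path, result_path = sys.argv[1], sys.argv[2]

    # Assumed pattern; the real simulator log layout may differ.
    pattern = re.compile(r"material balance error\s*[:=]?\s*([-+0-9.eE]+)", re.IGNORECASE)

    error_value = None
    with open(log_path, "r", errors="ignore") as log_file:
        for line in log_file:
            match = pattern.search(line)
            if match:
                error_value = float(match.group(1))  # keep the last value found

    if error_value is None:
        sys.exit(1)  # no value found; CMOST will report the calculation as failed

    # Write the objective function value for CMOST to read back.
    with open(result_path, "w") as result_file:
        result_file.write(str(error_value))

if __name__ == "__main__":
    main()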
5.5.6 Global Objective Function Candidates
Through the Global Objective Function Candidates node, you can view built-in nominal
global objective function candidates, if you have defined them, as well as basic simulation
results:
History Match Quality
Net Present Values
Advanced Objective Functions
Basic Simulation Results






As well, through the Global Objective Function Candidates node, you can define
additional nominal global objective function candidates using the built-in nominal global
objective function candidates as variables. In the following example, we have defined
CJGOF as a function of several nominal global objective function candidates:
CJGOF = (first candidate) + 2 × (second candidate) + 0.4 × (third candidate)

As with built-in nominal objective function candidates, you can, for example, specify the
direction (minimization or maximization) of user-defined nominal global objective function
candidates as the goal of an optimization study, through the Engine Settings node, as shown
below:







5.5.7 Soft Constraints
You can define hard and soft constraints for history matching and optimization studies. The
purpose of soft constraints is to allow users to change objective function values when
constraints are violated. Refer to Hard Constraints for information about hard constraints.
Through the Soft Constraints page, you can define soft constraint violations which, if they
occur, will override objective function values. Checking for soft constraint violations is
performed while simulations are being run and, if a soft constraint is violated, penalties will
be applied to the objective function immediately, while the simulation is running.
To specify a soft constraint:
1. Select the Constraint Formula tab.
2. Click the button to the right of the Constraints table to add a soft constraint.
You can edit the name and enter a comment, if necessary. Selecting
Active instructs CMOST to check for the constraint violation.
3. When you create a soft constraint, a CMOST Formula Editor session is opened in
the Constraint Formula area. Enter the formula for the soft constraint violation,
as shown in the following example:

Formulas can be entered using any of the functions and variables made
available by clicking the button, for example:







The list of variables includes previously defined parameters, objective functions,
and basic simulation results. An alternate method of entering the formula is to
manually type it in. In the above example, Soft_Constraint_1 is violated if variable
CSOR2019 is not less than or equal to 4.
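For instance, assuming the Formula Editor expects the condition that must hold (so that the constraint is flagged as violated when the expression evaluates to false), the constraint in the above example could be entered as:
CSOR2019 <= 4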
To define a penalty:
The other part of specifying a soft constraint is to define the penalties that will be applied if
the constraint is violated:
1. In the Constraints table, click the soft constraint to which you want to assign a
violation penalty. For each of the soft constraints defined in the Constraints table,
you should define one or more penalties.
2. Select the Constraint Penalties tab:







3. Click the button beside the Penalties for the constraints table to add a constraint
violation penalty.
4. Select the Objective Function from the drop-down list in the table then, as
appropriate, select the Active check box.
5. In the Formula Editor pane, enter the penalty, starting on the line indicated.
Penalty definitions may require more than one line.
In the above example, if Soft_Constraint_1 is violated, then NPV will be adjusted
as follows:
If objective function NPV is greater than or equal to 0, the value for NPV will be
replaced by the value of:
NPV / (CSOR2019 - 3)
If NPV is less than 0, its value will not be changed.
If multiple soft constraints which affect the same objective function are violated,
the value calculated from the last violated soft constraint in the list will be used for
the objective function.
Any constraint violations that occur will be mentioned in the messages displayed
in the Engine Events table of the Control Centre page when the job is running.







6 Running and Controlling CMOST
6.1 Introduction
The Control Centre node and its subnodes are used to set up the simulators and experiments
and then run the CMOST jobs.
6.2 Control Centre
Through the Control Centre page, you start, monitor the progress of, pause, and stop your
CMOST jobs. Before you start the jobs, review and configure the settings in the following
pages:
Engine Settings
Simulation Settings
Experiments Table
Once you have configured the above settings, you can run your CMOST jobs by clicking the
Start Engine button in the Control Centre node or the one in the toolbar. If there are
outstanding validation errors, the Validation Error Summary dialog box will be displayed:

Validation errors must be resolved before you can start the engine. The resolution of
warnings is optional. You can click the button to copy the contents of the Validation
Error Summary table into the Windows clipboard for pasting into a Word or Excel file, or
into the body of an email.






Before the engine has started, depending on the engine type, the Control Centre node
will be similar to the following:

In the Engine events area at the bottom of the screen, select the engine event types you want
to display in the table.
Click the Run button to start the CMOST engine. While jobs are running:
Engine events are displayed in the table in accordance with your selection. In the
following example, we have selected Show all (engine events):

NOTE: If you move the pointer over one of the progress bars, the number of experiments in
that category will be displayed.






NOTE: If the study is user-defined, a sensitivity analysis, or an uncertainty assessment, the
Experiments progress area will contain progress bars similar to that shown above. If the
study is a history matching or optimization, the Experiments progress area will consist of a
run progress plot similar to that shown below, which will show base case and optimal
solutions. If you move the pointer over one of the data points, the experiment number and
data-point value will be displayed.

You can open the Experiments Table to monitor the progress of experiments and
their outputs.
You can open the Simulation Jobs page to monitor the status of jobs being sent to
the selected simulator.
You can pause the CMOST engine at any time by clicking the Pause button.
While the engine is paused, no new jobs will be submitted to the scheduler. All
unfinished jobs will continue to run until the end and CMOST will process their
results.
You can stop the CMOST engine at any time by clicking the Stop button. No
new jobs will be submitted to schedulers. All unfinished jobs will continue to run
until the end, however, CMOST will not process their results. After you have
clicked the Stop button, you can click the Run button to restart the
CMOST engine.
You can refresh the CMOST engine status at any time by clicking the Refresh
button. If you do not click this button, engine status will be updated automatically
at an (internally calculated) updating frequency.
You can clear the contents of the Engine Events table by clicking the
Delete All Events button.
You can copy the contents of the Engine Events table by clicking the Copy All
Events to Clipboard button then pasting the contents of the clipboard into an
application such as Microsoft Excel or Word.






If the run completes successfully, the Control Centre page will appear as follows:

In the above example, all experiments completed normally. If some of the experiments failed
or completed abnormally, you can obtain further information through the Experiments Table
or Simulation Jobs pages.
When appropriate, you can view available results through the Proxy Dashboard and through
the Results & Analyses node to review the calculation and analysis of results. Some of the
results will be available during the run.
6.3 Engine Settings
Through this node, you specify and configure the CMOST engine that will be used for your study.
6.3.1 Introduction
Click the Control Centre | Engine Settings subnode and set the study type and engine name:

NOTE: Estimated no. of new experiments is read-only and is an estimate of the number of
new experiments that will be required based on the study type and engine name.






The following engine choices are available; the study types each engine applies to (User Defined, Sensitivity Analysis (SA), History Matching (HM), Optimization (OP), and Uncertainty Assessment (UA)) are summarized below:

Engine                                              User Defined   SA   HM   OP   UA
Manual Engine                                            X
External Engine                                          X
Response Surface Methodology                                        X             X
One Parameter At A Time                                             X
CMG DECE                                                                 X    X
Particle Swarm Optimization                                              X    X
Latin Hypercube Plus Proxy Optimization                                  X    X
Differential Evolution                                                   X    X
Random Brute Force                                                       X    X
Monte Carlo Simulation Using Proxy                                                 X
Monte Carlo Simulation Using Reservoir Simulator                                   X
Manual Engine: In the case of a user-defined study type, use of this engine means
no automatic creation of experiments and all experiments are created explicitly by
the user through classical experimental design, Latin hypercube design, or manual
creation.
External Engine: In the case of a user-defined study type, use of this engine
allows use of the user's own optimization algorithm. For further information, refer
to External Engine and User-defined Executable.
Response Surface Methodology: For Sensitivity Analysis and Uncertainty
Assessment using classical experimental design or Latin hypercube design,
a response surface methodology is applied. Response surface methodology (RSM)
explores the relationships between input variables (parameters) and responses
(objective functions). A set of designed experiments is used to build a proxy model
(approximation) of the reservoir objective function. The most common proxy
models take either a linear or quadratic form. After a proxy model is built, Tornado
plots displaying a sequence of parameter estimates are used to assess parameter
sensitivity. Refer to Response Surface Methodology for further information.






One Parameter At A Time (OPAAT): Traditional method for performing
sensitivity studies, in which information about the effect of a parameter is
determined by varying only that parameter. The procedure is repeated, in turn, for
all parameters to be studied. Refer to One-Parameter-At-A-Time Sampling for
more information.
CMG DECE: CMG-proprietary history matching and optimization method in
which parameter values are intelligently selected to achieve optimal solutions. For
further information, see CMG DECE.
Particle Swarm Optimization: History matching and optimization method in
which the run is initialized with a population of random solutions. Navigation
through the search space is guided by the best success so far, which usually results
in a convergence towards the best solution. Refer to Particle Swarm Optimization
for more information.
Latin Hypercube Plus Proxy Optimization: Latin hypercube design is used to
construct experiments then an empirical proxy model is built using the training
data obtained from the Latin hypercube design runs. The proxy model is then used
to determine the optimal solution. See Latin Hypercube plus Proxy for further
information.
Differential Evolution (DE): History matching and optimization method in which
the run is initialized with a population of random solutions or pre-defined known
ones. DE attempts to find parameter values in an intelligent manner to get optimal
solutions. Refer to Differential Evolution (DE) for more information.
Random Brute Force: History matching and optimization method in which all
combinations of parameter values are tested, with the starting point and path
through the parameter values different for each run. See Random Brute Force
Search for further information.
Monte Carlo Simulation Using Proxy: Using Monte Carlo simulation, inputs are
randomly generated from probability distributions to simulate the process of
sampling from an actual population. These inputs are then fed into the response
surface model, which is used to determine the uncertainty in the reservoir model.
Monte Carlo Simulation Using Reservoir Simulator: In this case, the inputs
selected from the Monte Carlo simulation are run through the simulator to
determine the uncertainty in the reservoir model.
6.3.2 General Settings
Regardless of the engine type you select, the following general settings will be presented:
Engine General:
- Auto Save Result Interval: Specify the frequency with which results are
to be saved to the CMOST study file while the CMOST engine is running.






Experiments Management:
- Default Keep SR2 Option for New Experiments: In the case of
sensitivity analyses and uncertainty assessments, if this is set to Yes,
CMOST will keep SR2 files for all experiments.
- Number of Failed Jobs to Exclude an Experiment: If an experiment
has failed this many times, then the experiment will be excluded.
- Number of Optimum Experiments to Keep Simulation Files: In the
case of history matching and optimization studies, the number of optimum
experiments for which CMOST will keep simulation input and output
files. The engines will delete the simulation files for all other experiments.
- Number of Perturbation Experiments for Each Abnormal
Experiment: A perturbation experiment is an experiment generated by
CMOST by slightly modifying the abnormally terminated experiment. This
is helpful in cases such as numerical tuning, where a minor change in the
input can change an experiment from an abnormal termination to a
normal termination.
Optimization Settings: In the case of an optimization study:
- Global Objective Function Name: Name of the global objective
function that is being optimized.
- Search Direction: If set to Minimize, the goal of the optimization will be
to minimize the global objective function. If set to Maximize, the goal of
the optimization will be to maximize the global objective function.
- Total Number of Experiments: This is the maximum number of
experiments that will be carried out to determine the optimum solution.
Once this number is reached, the engine will stop. This setting can be
changed during the run.
In the case of a user-defined study using an external engine, several configuration
items will be presented. Refer to External Engine for further information.
Random Seed: For engine functions that require a seed to generate random values,
specify whether the seed is to be user-specified or not. If it is not specified by the
user, a random seed, generated based on computer clock time, will be used. If it is
user-specified, using the same seed for the same set of parameters will always lead
to the same set of experiments. This means the result is repeatable.
6.3.3 Engine-Specific Settings
Depending on the engine type you select, the following settings will be presented:
6.3.3.1 CMG DECE Optimization
Honour Parameter Constraints: If set to True, then the engine will honour hard
constraints when creating new experiments. If set to False, then experiments violating
hard constraints may be created by the engine, however, they will not be run.






Continuous Parameters Sampling: This field is not selectable by the user. The
DECE engine will always assume continuous parameters have uniform distributions.
Discrete Parameters Sampling: This is not selectable by the user. The DECE
engine will always treat discrete candidate values equally probable during history
matching or optimization.
Number of Initial Effect Screening Experiments: This is the number of initial
experiments used by the DECE engine to test parameter effects. It is calculated
internally and the user cannot change it.
6.3.3.2 Latin Hypercube Plus Proxy Optimization
Continuous Parameters Sampling: In the case of a history match or optimization,
the method by which continuous parameters are to be sampled, one of the following:
- Discrete Sampling Using Pre-defined Levels: If this option is used, the
data range will be divided equally into pre-defined levels, and
experiments will only choose discrete values.
- Continuous Uniform Sampling within the Data Range: Parameters are
sampled uniformly within the data range.
- Continuous Sampling Using Prior Distribution: The user-defined
prior distribution is considered while sampling the search domain.
Discrete Parameters Sampling: In the case of a history match or optimization,
the method by which discrete parameters are to be sampled, one of the following:
- Treat Discrete Values Equally Probable: All candidate values are
treated as if they have the same probability.
- Honour Prior Distribution of Discrete Values: User-defined prior
distributions are honoured during the sample process.
Proxy Model Type: Set to Ordinary Kriging or Polynomial Regression.
Maximum time (minutes) allowed for proxy calculations in each iteration: The
maximum time allowed for the proxy to calculate and propose the next iteration of
experiments. If, in some scenarios, users find that between iterations, it is taking
too long to generate experiments for the next iteration, they can reduce this time;
otherwise, we recommend keeping the maximum time at its default value.
Number of Initial Proxy Training Experiments: The number of training
experiments used to build the initial proxy. This is calculated by CMOST and
users are not able to change it.
6.3.3.3 Monte Carlo Simulation Using Proxy
This uncertainty assessment method can be used when model uncertainty can be represented
by a list of independent/dependent uncertain parameters.
Proxy Model Type: You can choose either Polynomial Regression or Ordinary
Kriging. If Polynomial Regression is chosen, the following results will be available:






- Effect Estimates (tornado plots)
- Proxy Model Statistics (summary of fit, analysis of variance, effect
screening, and polynomial equations)
- Model Quality Check (Monte Carlo results)
If Ordinary Kriging is selected, the following results will be available:
- Proxy Models (variogram base, exponent, and weights, if the number of
experiments is less than 100)
- Monte Carlo Results
If the Polynomial Regression proxy model is chosen, the following terms can be used as
engine stop criteria:
Interested Terms [not applicable for ordinary kriging]: If set to Linear, then
the CMOST engine will check that the acceptable criteria for a linear polynomial
proxy model are met. Once the criteria are met, the CMOST engine will stop. If set
to Linear + Quadratic, then a Linear + Quadratic polynomial proxy model will be
checked against the stop criteria. If set to Linear + Quadratic + Interaction, then a
Linear + Quadratic + Interaction polynomial proxy model will be checked against
the stop criteria.
Acceptable R-Square [not applicable for ordinary kriging]: R-square (R²)
indicates how well a proxy model fits observed data. An R² of 1 occurs when there
is a perfect fit (the errors are all zero). An R² of 0 means that the proxy model
predicts the response no better than the overall response mean (the standard
formulas are given after this list). This field specifies
the value that would be deemed acceptable. If this value is not reached and the
total number of extra experiments has not exceeded the Percentage Limit of
Extra Experiments for Improving Proxy, the CMOST engine will try to
generate more experiments to improve proxy quality.
Acceptable R-Square Adjusted [not applicable for ordinary kriging]: R-square
adjusted is a modification of R² that adjusts for the number of explanatory terms in
a model. Unlike R², the adjusted R² increases only if the new term improves
the proxy model more than would be expected by chance. The adjusted R² can be
negative, and it will always be less than or equal to R². This field specifies the
value that would be deemed acceptable. If this value is not reached and the total
number of extra experiments has not exceeded the Percentage Limit of Extra
Experiments for Improving Proxy, the CMOST engine will try to generate more
experiments to improve proxy quality.
Acceptable R-Square Prediction [not applicable for ordinary kriging]: R-
square prediction indicates how well a proxy model predicts responses for new
observations. Ranging between 0 and 1, larger values suggest models of greater
predictive ability. This field specifies the value of R-squared prediction that would
be deemed acceptable. If this value is not reached and the total number of extra
experiments has not exceeded the Percentage Limit of Extra Experiments for






Improving Proxy, the CMOST engine will try to generate more experiments to
improve proxy quality.
Acceptable Relative Error of Proxy Verifications (%): This field specifies the
maximum acceptable error for the verification experiments. If this value is not
reached and the total number of extra experiments has not exceeded the
Percentage Limit of Extra Experiments for Improving Proxy, the CMOST
engine will try to generate more experiments to improve proxy quality.
Percentage Limit of Extra Experiments for Improving Proxy (%): This stop
criterion determines the maximum number of experiments that can be generated to try
to meet all the above criteria. For example, for a certain study, the expected
number of experiments is 100. If the percentage limit of extra experiments for
improving proxy is set to be 25%, then a maximum of 25 experiments will be
added to try to improve the quality of the proxy to meet the specified stop criteria.
If 125 experiments have been run, and there are still criteria that are not met, the
engine will stop anyway.
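For reference, the standard definitions of R-square and adjusted R-square are as follows (CMOST's internal calculation may differ in detail):
R² = 1 - SSres / SStot
Adjusted R² = 1 - (1 - R²) × (n - 1) / (n - p - 1)
where SSres is the sum of squared differences between the proxy predictions and the observed responses, SStot is the total sum of squares of the observed responses about their mean, n is the number of training experiments, and p is the number of terms in the polynomial model.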
If the Ordinary Kriging proxy model is chosen, Acceptable Relative Error of Proxy
Verifications (%) and Percentage Limit of Extra Experiments for Improving Proxy (%)
can be used as Engine stop criteria.
6.3.3.4 Monte Carlo Simulation Using Simulator
This uncertainty assessment method should be used when the model uncertainty is
represented by discrete realizations (e.g., geostatistical realizations or history-matched
models). In this case, it is inappropriate to use a proxy model for the uncertainty assessment
because discrete realizations cannot be characterized by continuous numbers.
6.3.3.5 One-Parameter-At-A-Time
Reference Case Parameter Values: If set to Use Parameter Median Values, then
each parameter's median value is used in the reference case. If set to Use Parameter
Default Values, then each parameter's default value is used in the reference case.
Continuous Parameter Testing: If set to Test All Discrete Levels, then each
discrete level of the parameter value will be used to generate experiments. If set to
Test Lower and Upper Limit Only, then only the minimum and maximum values
will be used to generate experiments.
Discrete Parameter Testing: If set to Test All Candidate Values, then each
candidate value will be used to generate experiments. If set to Test Lower and
Upper Bound Only, then only the minimum and maximum values will be used to
generate experiments.






6.3.3.6 Particle Swarm Optimization (PSO)
Refer to Particle Swarm Optimization for further information:
Inertia Weight: This setting must be between 0.4 and 0.9. A large inertia weight
facilitates a global search while a small inertia weight facilitates a local search.
C1, C2: Cognition and social components in the PSO. These settings control the
exploration/exploitation capability of the algorithm. In history matching,
exploration refers to a wide search of the possible combination of unknown
parameters that give a good match while exploitation is a deeper search of the
previously found promising regions. Lower values for these parameters support
exploration of the search space, but this can have an adverse impact on the
convergence. It is highly recommended that you keep C1 and C2 between 1 and 2.
Population Size: Set to 20 by default. A large population size can be used when
the total number of simulations is high. It is suggested that the number of PSO
iterations (the total number of simulations divided by the population size) should
be greater than 20.
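For reference, these settings appear in the standard particle swarm velocity update, shown here in its common form (CMOST's exact implementation may differ in detail):
v_new = w × v + C1 × r1 × (pbest - x) + C2 × r2 × (gbest - x)
x_new = x + v_new
where w is the inertia weight, r1 and r2 are random numbers between 0 and 1, x and v are a particle's current position (set of parameter values) and velocity, pbest is the best solution found by that particle, and gbest is the best solution found by the swarm so far.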
6.3.3.7 Differential Evolution (DE)
Refer to Differential Evolution (DE) for further information:
F: This parameter, the scaling factor F ∈ [0, 4], controls the perturbation applied
through the vector differences and hence the rate at which the population evolves. A large value
facilitates exploration, while a small value promotes exploitation. Set to 0.5 by
default.
Cr: This parameter, the crossover probability Cr ∈ [0, 1], controls the diversity of the
populations. Set to 0.8 by default.
Np: This parameter, the population size Np ∈ [4, 200], is set to 30 by default.
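For reference, in the common DE/rand/1/bin scheme these settings act as follows (a sketch of the standard algorithm; CMOST's exact variant may differ in detail):
Mutation: v = x_r1 + F × (x_r2 - x_r3), where x_r1, x_r2, and x_r3 are three distinct members of the current population of Np solutions.
Crossover: each parameter of the trial solution is taken from v with probability Cr; otherwise it is taken from the current solution.
Selection: the trial solution replaces the current solution only if it yields a better objective function value.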
6.3.3.8 Random Brute Force Search
Refer to Random Brute Force Search for further information:
Continuous Parameters Sampling: In the case of a history matching or
optimization, the method by which continuous parameters are to be sampled, one
of the following:
- Discrete Sampling Using Pre-defined Levels: If this option is used, the
data range will be divided equally into pre-defined levels, and
experiments will only choose discrete values.
- Continuous Uniform Sampling within the Data Range: Parameters are
sampled uniformly within the data range.
- Continuous Sampling Using Prior Distribution: The user-defined
prior distribution is considered while sampling the search domain.






Discrete Parameters Sampling: In the case of a history matching or optimization,
the method by which discrete parameters are to be sampled, one of the following:
- Treat Discrete Variables Equally Probable: All candidate values are
treated as if they have the same probability.
- Honour Prior Distribution of Discrete Values: User-defined prior
distributions are honoured during the sample process.
6.3.3.9 Response Surface Methodology
Interested Terms: If set to Linear, then the CMOST engine will check that the
acceptable criteria for a linear polynomial proxy model are met. If the criteria are met,
the CMOST engine will stop. If set to Linear + Quadratic, then a Linear + Quadratic
polynomial proxy model will be checked against the stop criteria. If set to
Linear + Quadratic + Interaction, then a Linear + Quadratic + Interaction polynomial
proxy model will be checked against the stop criteria.
Acceptable R-Square: R-square (R²) indicates how well a proxy model fits
observed data. An R² of 1 occurs when there is a perfect fit (the errors are all zero).
An R² of 0 means that the proxy model predicts the response no better than the
overall response mean. This field specifies the value that would be deemed
acceptable. If this value is not reached and the total number of extra experiments has
not exceeded the Percentage Limit of Extra Experiments for Improving Proxy,
the CMOST engine will try to generate more experiments to improve proxy quality.
Acceptable R-Square Adjusted: R-square adjusted is a modification of R² that
adjusts for the number of explanatory terms in a model. Unlike R², the adjusted R²
increases only if the new term improves the proxy model more than would be
expected by chance. The adjusted R² can be negative, and it will always be less
than or equal to R². This field specifies the value that would be deemed acceptable.
If this value is not reached and the total number of extra experiments has not
exceeded the Percentage Limit of Extra Experiments for Improving Proxy, the
CMOST engine will try to generate more experiments to improve proxy quality.
Acceptable R-Square Prediction: R-square prediction indicates how well a proxy
model predicts responses for new observations. Ranging between 0 and 1, larger
values suggest models of greater predictive ability. This field specifies the value of
R-squared prediction that would be deemed acceptable. If this value is not reached
and the total number of extra experiments has not exceeded the Percentage Limit
of Extra Experiments for Improving Proxy, the CMOST engine will try to
generate more experiments to improve proxy quality.






Acceptable Relative Error of Proxy Verifications (%): This field specifies the
maximum acceptable error for the verification experiments. If this value is not
reached and the total number of extra experiments has not exceeded the
Percentage Limit of Extra Experiments for Improving Proxy, the CMOST
engine will try to generate more experiments to improve proxy quality.
Percentage Limit of Extra Experiments for Improving Proxy (%): This stop
criterion determines the maximum number of experiments that can be generated to try
to meet all the above criteria. For example, for a certain study, the expected
number of experiments is 100. If the percentage limit of extra experiments for
improving proxy is set to be 25%, then a maximum of 25 experiments will be
added to try to improve the quality of the proxy to meet the specified stop criteria.
If 125 experiments have been run, and there are still criteria that are not met, the
engine will stop anyway.
6.3.3.10 External Engine and User-defined Executable
In the case of a user-defined optimizer, you will need to define the interface between CMOST
and your optimizer, in particular the Engine Settings and Input/Output Tables. These settings
are used to define, for instance, the location of the table files used when an external engine is
used in a user-defined study.
The workflow is as follows:
1. CMOST calls the simulator and calculates the objective functions for a set of
experiments.
2. Once all existing experiments are complete, CMOST outputs the experiment table
in CSV (comma-separated values) file format and calls the user-defined optimizer.
The iteration number, starting from 0, will be used as the only argument for the
command call.
3. The user-defined optimizer reads the file and performs its optimization calculation.
4. The user-defined optimizer proposes a new set of experiments. The proposed new
experiments are written in CSV format for CMOST to read. The user-defined
executable exits.
5. CMOST reads the new experiments table file and, if the number of generations
does not exceed the maximum number to be generated, adds the experiments to the
Experiments Table and then submits them to the simulator. Go to step 2.






The workflow for the user-defined optimizer is illustrated below:

The flowchart shows the following elements: set up a study (define parameters, objective functions, and so on); start the engine; add and run experiments; experiments verification*; CMOST writes the Previous Generation Experiments Table file and calls the user-defined optimizer (executable), which may read the optional Parameter Table file and writes the New Experiments Table file; the loop repeats until the maximum number of generations is reached, at which point the engine stops.
* Before adding experiments, CMOST checks whether the total number of experiments matches the population size. For a user-defined engine, the maximum number of generations is used as the stop criterion.

Previous Generation Experiments Table File
Once all of the experiments in the experiments table are complete, CMOST outputs the
previous generation's experiments information into this file. The table is output in CSV
format. All the columns in the CMOST Experiments Table page are output, as shown in the
following example:
ID, Generator, Status, Result Status, Proxy Role, Keep SR2, Has SR2, Highlight, Para1, Para2, Para3, Para4, Obj1, Obj2
18, ExternalEngine, 5, 4, 0, 0, False, False, 0.5, 200, 1, 0.45, 3450.3, 54.4,
19, ExternalEngine, 5, 4, 0, 0, False, False, 1.9, 300, 1, 0.21, 2980.3, 125.8,
20, ExternalEngine, 5, 4, 0, 0, False, False, 1.0, 200, 3, 0.45, 1480.3, 50.2,
User-defined Executable File
This executable file contains the user's optimization algorithm. It is called when all the
existing experiments in the experiments table are completed.
The main tasks of the user-defined executable file include, but are not limited to:
1. Read in the previous experiments results from the Previous Generation
Experiments Table file.






2. Analyze the results and, as necessary, propose a new set of CMOST experiments.
3. Write the proposed experiments to the New Experiments Table file, which is
going to be read by CMOST.
Notes about creating user-defined executables:
1. User-defined executables can be compiled using any programming language.
2. User-defined executables must use a generation-based algorithm; i.e., in each
generation, the population size (number of experiments) must be maintained
constant.
3. When CMOST calls a user-defined executable, the iteration number is passed to
the executable as the only command argument.
4. A user-defined executable may save information used between generations, or
simply output this information to a debug file.
5. Maximum run time for a user-defined executable is 60 minutes.
6. User-defined executables must be able to exit silently once they have completed
their calculations.
New Experiments Table File
The new experiments table file is the CSV table file output by the user-defined executable. The
number of proposed experiments must equal the population size. The first row of the table file
is the comma separated parameter names, and the rest of the rows are proposed experiments.
Note that any illegal or repeated experiments will be rejected and will automatically be replaced
with a random experiment defined by CMOST.
An example new experiment table:
Para1, Para2, Para3, Para4
1.9, 200, 0.56, 2
1.5, 300, 0.32, 1
0.5, 100, 0.21, 3
Parameter Table File
After the external engine has started, CMOST will write a parameter table file. The user-
defined executable can read in parameter information from this file. Users may alternately
hard-code parameter information into the executable, in which case, the parameter table file
will be ignored.
Each parameter's information will be output to a line in the file. Different formats are used,
depending on the parameter type:
For discrete integer and discrete real (double) parameters, the parameter
information format is:
ParameterName, Source, candidateValue1, candidateValue2, ..., candidateValueN






For discrete text parameters, the numerical value will be output in the following
format:
ParameterName, DiscreteText, candidateNumericalValue1, candidateNumericalValue2, candidateNumericalValue3, ..., candidateNumericalValueN
For continuous real (double) parameters, the format is:
ParameterName, ContinuousDouble, minValue, maxValue
The following is an example of the contents of a parameter table file:
Para1, DiscreteDouble, 0.5, 1.0, 1.5, 1.9
Para2, DiscreteInteger, 100, 200, 300
Para3, ContinuousDouble, 0.1, 0.6
Para4, DiscreteText, 1, 2, 3
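To illustrate how these files fit together, the following is a minimal sketch, in Python, of a user-defined optimizer executable (any language can be used). It reads the parameter table and simply proposes a random new generation; a real optimizer would analyze the previous-generation results before proposing new experiments. The file names, the population size, and the sampling strategy shown here are assumptions for illustration only; in CMOST, the table file locations are configured through the Engine Settings and Input/Output Tables.

import csv
import random
import sys

# Hypothetical file locations; in CMOST these are configured for the external engine.
PARAMETER_TABLE = "ParameterTable.csv"
NEW_TABLE = "NewExperimentsTable.csv"
POPULATION_SIZE = 3

def read_parameters():
    # Each line: name, type, then candidate values or min/max range.
    parameters = []
    with open(PARAMETER_TABLE, newline="") as f:
        for row in csv.reader(f, skipinitialspace=True):
            if row:
                parameters.append((row[0], row[1], row[2:]))
    return parameters

def propose_value(kind, values):
    # Continuous parameters are sampled within their range; discrete parameters
    # are sampled from their candidate values.
    if kind == "ContinuousDouble":
        return round(random.uniform(float(values[0]), float(values[1])), 4)
    return random.choice(values)

def main():
    iteration = int(sys.argv[1])  # CMOST passes the iteration number, starting at 0
    parameters = read_parameters()

    # First row: comma-separated parameter names; remaining rows: proposed experiments.
    with open(NEW_TABLE, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow([name for name, _, _ in parameters])
        for _ in range(POPULATION_SIZE):
            writer.writerow([propose_value(kind, values) for _, kind, values in parameters])

if __name__ == "__main__":
    main()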
6.4 Simulation Settings
Before you can run CMOST simulation jobs, you will need to configure the Simulation
Settings page, shown below, in particular:
Schedulers
Simulator version
Number of CPUs per job
Maximum simulation run time
Job record and file management

As shown in the above example, the Simulation Settings page has three areas.






6.4.1 Schedulers
Through the Schedulers area, you can adjust the settings for each scheduler separately. The
Schedulers table also lists basic information about the schedulers.
Configure the table in the Schedulers area as follows:
Active: The Active check box determines whether or not CMOST will use a specific
scheduler. If the Active check box is selected, CMOST will use the scheduler;
otherwise, the scheduler will not be used.
NOTE: If schedulers other than Local are used, all study files must be located in a UNC
(Universal Naming Convention) directory.
Scheduler Name: The names of the schedulers are listed in the Scheduler Name
column. Local is the computer that is currently being used to open the project.
This information cannot be edited in CMOST. If this information needs to be
changed, it should be done through Launcher.
Type: The scheduler type is one of the following types:
- Local (local computer running CMG Job Service)
- CMG Drone Scheduler (a remote computer running CMG Job Service)
- MSCC Scheduler (Microsoft Windows Compute Cluster)
- LSF Scheduler (Platform Computing LSF)
- SGE Scheduler (Sun Grid Engine)
Max Concurrent Jobs: Different numbers of jobs can be run on different
schedulers. Max Concurrent Jobs can be edited so that CMOST will send a
certain number of jobs to each scheduler. The default value is 1. When changing
this value, you should consider:
- Number of processors for the scheduler: N
- Number of required processors for each job: nj
- Is the scheduler shared by other users?
Max Concurrent Jobs should NOT be greater than N/nj. If a scheduler is shared
by other users, you should limit Max Concurrent Jobs to prevent using all
processors by yourself.






NOTES:
1. Jobs that are running as well as jobs that are queued are both considered pending jobs.
2. If the CMG DECE Optimizer is used for a history matching or optimization study, it is
suggested that the total number of Max Concurrent Jobs for all schedulers be less than
10 to improve performance. This is because if there are too many simultaneously running
jobs, the optimizer will not be able to learn fast enough to reduce the total number of jobs
required to find an optimum solution. On the other hand, if the objective is just to reduce
total elapsed time, this suggestion can be ignored.
If this value is set to 0, no jobs will be run on that scheduler; however, it is
recommended that the Active check box be cleared if jobs should not be run on a
specific scheduler to avoid confusion.
Max Failed Jobs: Schedulers can be set to stop sending jobs to a scheduler if a
certain number of jobs have failed. Failed jobs are jobs that could not be started
due to hardware or software problems such as:
- Network cable unplugged
- Computer shut down
- CMG Job Service not running
- No simulator is installed, or
- No license is available.
Jobs that are terminated abnormally by the simulator due to numerical problems are
NOT failed jobs. CMOST will stop sending jobs if the hardware/software problem
persists for a scheduler. The default maximum number of failed jobs is 25.
Work Plan: CMOST can be set to send jobs to different schedulers at different
times. There are three options available for work plans:
- All Time
- Evenings and Weekends
- Weekends
If Work Plan is set to All Time, there will be no limits on when jobs can be
scheduled. This is the default.
If Work Plan is set to Evenings and Weekends, jobs will be scheduled 8 PM to
6 AM Monday through Friday and all hours on Saturday and Sunday. This option
is useful if a computer is normally used during the day during the workweek but is
left idle during evenings and weekends.
If Work Plan is set to Weekends, jobs will only be scheduled from 12:00 AM
Saturday to 12:00 PM Sunday. This option is useful if a computer is used during
the workweek but left idle during the weekends.






NOTE: Jobs that take a while to run may run outside of the range set in the Work Plan. The
Work Plan only guarantees that jobs will not be sent to a scheduler outside of the times set. If
necessary, it is OK to kill jobs in Launcher. Once a job is killed, CMOST will be notified and
proper action will be taken. For sensitivity analysis and uncertainty assessment, a new job
with the same parameter values will be scheduled. For history matching and optimization, the
killed job will be ignored and a new job may be scheduled depending on the progress of the
optimization process.
Job Priority: Different priorities can be set for different schedulers. If there are
multiple jobs queued for a scheduler, the jobs with higher priority will be run first.
The default value for Job Priority is Low.
Additional Switches: If a scheduler switch is required, it should be entered into
the Additional Switches column. See the Launcher User's Guide for more
information on scheduler switches.
Host Computer: The Host Computer column lists the computers the schedulers
run from. There is no Host Computer listed for the Local scheduler. This
information cannot be edited in CMOST. If this information needs to be changed,
it should be done through Launcher.
Refresh: If a new scheduler has been added or removed via Launcher, the Scheduler
table can be updated to reflect these changes. Schedulers may also need to be updated
if a study file is copied in from another computer and the schedulers are different on
the two machines. The Refresh button can be clicked to update the table. If new
schedulers have been added, they will be added to the table with default values.
6.4.2 Simulator Settings
Through the Simulator Settings area, configure the setting as follows:
Simulator: The simulator that CMOST should use. This is read-only.
Simulator Version: If there is more than one simulator version installed on the local
computer, more than one version will be listed. It is recommended that the same
simulator version is installed on all compute nodes of all schedulers.
Number of CPUs per job: The number of processors to use for each job can be
specified.
NOTE: This value should not be greater than the number of processors for the scheduler with
the lowest number of processors.
If you are running on your local machine, right-click the Windows task bar and
select Start Task Manager. The number of available CPUs is
displayed in the Performance tab. Enter that number in this field.
In the following example, the number of available CPUs is 12.








Method to Find Executable: The method to find executable defines which
simulator will be used on remote computers. By default CMOST will attempt to
use the simulator version defined in the Simulator field. If this version does not
exist on the remote computer, CMOST will use a method to find an alternative
simulator version. Three options are available:
- Find Closest Version
- Find Latest Version
- Find Exact Version
The option Find Closest Version will try to locate the closest match to the
simulator defined. The Find Latest Version option will try to find the newest
version located on the remote computer. If Find Exact Version is used, only the
simulator defined will be used; i.e., no jobs will be sent to the scheduler if that
version does not exist.
Max Run Time per Job (hours): A limit can be set on the amount of time taken
for a single job. If a job has not completed by the time specified, the job will be
killed by the engine. The default maximum is 720 hours (30 days).
Additional Simulator Switches: If simulation switches are required, they can be
entered in the Additional Simulator Switches text box. More information on
simulator switches can be found in the simulator and Launcher user guides.
Apply Simulator License Multiplier: If set to True, simulation jobs submitted by
CMOST will take fewer license tokens than a job submitted outside of CMOST. For example,
for the same amount of license, an IMEX job submitted from CMOST takes a fraction of
the tokens taken by an IMEX job submitted outside of CMOST; GEM and
STARS jobs similarly take a fraction of the tokens when the CMOST License Multiplier is used.






Write SR2 Files on Execution Host: If set to True, SR2 files will be written to a
temporary folder on the execution host computer during simulation, then the files
will be copied to the study folder when the simulation is done. If set to False, then
SR2 files will be written to the study folder directly.
Write Log File on Execution Host: If set to True, simulator log files will be
written to a temporary folder on the execution host computer during simulation,
then the files will be copied to the study folder when the simulation is done. If set
to False, then simulation log files will be written to the study folder directly.
6.4.3 Job Record and File Management
Through the Job Record and File Management area you can configure how CMOST
manages job records in Launcher and simulation output files on the disk. For example, you
can instruct CMOST to keep or delete simulation output files (.irf, .mrf, .out) for abnormally
terminated jobs, as shown below:

6.5 Experiments Table
Based on your engine settings, CMOST will automatically generate a set of experiments,
except when you choose to use a manual or external engine. You can also generate additional
experiments using available design algorithms and by manually defining them. Through the
Experiments Table, you manage the experiments that will be run as part of a study. This
section provides information about the following:
Navigating the Experiments Table
Creating and Importing Experiments, in addition to those that are automatically
created by CMOST based on engine settings.
Configuring the Experiments Table. You can configure the appearance of the
Experiments table; in particular, the order of the columns and the grouping of
rows on the basis of one or more of the columns.
Checking Experiment Quality. You can review the orthogonality of the experiments.
Exporting the Experiment Table to Excel.
Viewing the Simulation Log.






Clearing the SR2 Files
Reprocessing Experiments.
6.5.1 Navigating the Experiments Table
The Experiments Table, shown below, will contain experiments that have been:
Automatically generated by CMOST based on engine settings.
Generated by the user using CMOST experiment design tools (such as Latin
hypercube design).
Manually entered by the user (user-defined experiments).

The Experiments Table page consists of the Experiments Table itself, a context menu, and operation buttons.

6.5.1.1 Experiments Table Columns
NOTE: Whether cells in the Experiments Table are editable or not depends on the status of
the experiment. For New experiments, cells in the Proxy Role, Keep SR2, Highlight, and
Comments columns are editable. For Completed experiments, only the cells in the Highlight,
Proxy Role and Comments columns are editable. Proxy Role and Keep SR2 columns can be
edited by right-clicking the experiment then selecting the desired option in the context menu.
ID: Unique experiment ID number, assigned by CMOST when the experiment is
created. If the experiment is subsequently deleted, the ID number will not be reused.
Generator: The generator that was used to create the experiment; for example,
LatinHyperCube, Response Surface Methodology, or User.






Status: Status of an experiment while it is running, one of:
- Expecting: The Engine has determined that a certain number of
experiments need to be run. They have not been created yet, but are
expected to be created.
- Reuse Pending: This status is displayed if you have added new
parameters after creating an experiment. Experiment values for the new
parameters are therefore unknown so you will need to resolve Reuse
Pending status by providing the unknown parameter values for each
experiment, as follows:
a. Right-click an experiment, or multiple experiments, with Reuse
Pending status, using the CTRL and SHIFT keys as necessary.
b. Select Resolve Reuse Pending and then click Resolve Selected
Experiment(s).
c. In the Experiment Parameter Values dialog box, enter the value
for the new parameter that you want to use in the selected
experiment(s).
d. Click OK to apply.
If you select Resolve Reuse Pending and then click Resolve All
Experiments, you can set the value of the new parameter(s) for all
experiments that still have a Reuse Pending status.
- New: New experiment has been added, but the dataset has not yet been
created.
- Creating dataset: CMOST is creating the dataset for the experiment.
- Dataset created: CMOST has created the dataset for the experiment.
- Running: CMOST has submitted the dataset to Launcher.
- Complete: CMOST has received the SR2 files from Launcher.
- Reused: A previously completed experiment has been reused in the current
study.
- Aborted: At any time before they are Complete, users can abort
experiments manually. Aborted experiments are ignored by CMOST; for
example, the results of an experiment that has been aborted will not be
used to determine a proxy model or as a candidate for an optimal solution.
Result Status: Status of the simulation results of an experiment:
- Unknown: CMOST has not yet checked the .log, .out, and .irf files.
- Incomplete: Simulation is not complete.
- Exceed max run time: Experiment has exceeded the Max Run Time
per Job setting defined in Simulator Settings.






- Abnormal termination: Job was terminated by the simulator before the
stop time was reached.
- Normal termination: The simulation has run to the last time or date
specified in the dataset and has produced the output files. CMOST will
clear the job record in Launcher, recover the necessary information from
the output files, and process them as specified in the Job Record and
File Management table in the Simulation Settings node, and in the
Keep SR2 column in the Experiments Table.
- Violate hard constraints: A hard constraint has been violated, and the
simulation run has not been allowed to proceed. Refer to Hard Constraints
for further information.
- Waiting to be re-processed: This status is displayed if, after running an
experiment, you change unit system, field data, fundamental data, or
objective functions. In this case, CMOST will need to recalculate the
objective functions. Starting the engine will automatically reprocess any
experiments that require it, or you can force one or more experiments to
be reprocessed, as follows:
1. Click the experiment you want to reprocess or use the CTRL key
to select multiple experiments.
2. Right-click the selected experiment(s) then select
Reprocess | Reprocess Selected Experiment(s) or click the
Reprocess Experiment(s) button then select Reprocess
Selected Experiment(s).
3. CMOST will try to recalculate all the objective functions for the
experiment.
- Re-process failed: CMOST cannot find enough data to recalculate the
objective functions for this experiment. For example, a new objective
function is added, so all experiments need to be reprocessed. A
re-process failure can happen if the SR2 files are deleted and the VDR
files do not contain the data needed to calculate the newly added
objective function.
Proxy Role: If set to Training, the results of the experiment will be used to
formulate the proxy model. If set to Verification, the results will be used to verify
the accuracy of the resulting proxy model. If set to Ignore, the results will be used
for neither training nor verification. Refer to Proxy Dashboard and Proxy Analysis
for further information about the CMOST proxy model.
Keep SR2: If set to Auto, CMOST will adhere to the simulation settings. If set to
Yes, CMOST will keep the SR2 files after the experiment simulation has run
regardless of the simulation settings. If set to No, CMOST will delete the SR2 files
after the experiment has run, regardless of the simulation settings. This field is
editable as long as the experiment has not completed.






Has SR2: If simulation results (SR2) files have been produced and CMOST has
not deleted them, this box will be checked. This check box is read-only and cannot
be edited.
Highlight: Specific experiments can be highlighted in CMOST plots. If Highlight
is checked, the results of that experiment will be displayed as a purple
curve or data point in plots.
Parameter Value: The specific candidate value (numeric, text, or other), that was
entered or generated for each parameter is listed in the Experiments Table, with
the name of the parameter displayed in the column heading. If the parameter had
its source specified as Formula, the formula calculation result is displayed. If a
parameter is not Active, the default value of the parameter is used. All parameters
are listed whether or not they are Active.
Objective Function Values: The calculated value for each objective function is
listed in the Experiments Table, with the name of the objective function displayed
in the column heading.
Execution Node: Host computer on which the experiment was run.
Dataset Path: The name and path of the dataset that was simulated is listed here.
In most cases, the .dat, .log, and output files are deleted for normally terminated
jobs, since this is the default setting in the Job Record and File Management
area of the Simulation Settings node. The VDR files are for CMOST use only.
Optimal: There is generally only one optimal experiment. For example, if the
global objective function to be optimized is NPV and the optimization direction is
Maximize (these fields are chosen in the Engine Settings node), the experiment
with the largest NPV is the optimal experiment. The optimal experiment may
change during the run. The Optimal check boxes are read-only and cannot be
edited. By default, the optimal experiment is shown in a different color in all plots
so that it is easily distinguishable.
In some cases, multiple experiments may have identical global objective function
values. A typical situation that may lead to multiple optimal solutions is when some
of the parameters have no or very little effect on the global objective function.
Comment: Users can enter information about a particular experiment in this field.
6.5.1.2 Context Menu
Once you have populated the Experiments Table, you can select an experiment (the
experiment row will change to blue shading once selected), right-click the experiment row to
display the context menu, then select a menu item to perform one of the following operations:
Edit: Available if you have selected a user-defined experiment. It will not be
available if you select a generated experiment.



If you have selected a user-defined experiment, the Experiment Parameter Values
dialog box will be displayed, through which you can change the experiment's
parameter values. The new values will appear in the table after you click OK.
Copy to new: Copy the selected experiment to a new user-defined experiment,
then change the parameters in the new experiment to ensure it is unique.
Copy to Clipboard: Copy the experiment table headings and selected experiment
rows to the Windows clipboard, which you can then paste into Excel, for example.
Resolve Reuse Pending: If the status of an experiment is Reuse Pending, this
option will be enabled. By clicking this option, users will be prompted to enter the
unknown parameter values for the selected experiment, so that the experiment can
be reused by CMOST.
Delete: If you have selected a generated experiment, you will be given the option
to Delete all Experiments in Generators. If you proceed, the Choose Generators
to Delete dialog box will be displayed, as shown in the following example:

In the above example, Generator Name LatinHyperCube is selected. If you click
OK, all of the experiments created by the LatinHyperCube generator will be
deleted from the table.
If you select a user-defined experiment, right-click, and then select Delete, you will have
the option of deleting only that experiment or, through the Choose Generators to
Delete dialog box, deleting all of the user-defined experiments.
You can select multiple user-defined experiments to delete using the CTRL and
SHIFT keys.
Set Proxy Role: Change the proxy role of the experiment; for example, if it was
set to Training, you will be able to change it to Verification or Ignore. When set to
Training, the results of the experiment will be used to calculate the proxy model. If
set to Verification, the results will be used to verify the accuracy of the resulting
proxy model. If set to Ignore, the results will not be used for training or
verification. Refer to Proxy Dashboard and Proxy Analysis for further information
about the CMOST proxy model.



Set Keep SR2: If set to Auto, CMOST will honour the settings in the Job Record
and File Management table in the Simulation Settings page. If set to Yes, the
SR2 files will be kept after the experiment has run regardless of the simulation
settings. If set to No, the SR2 files will not be kept after the experiment has run,
regardless of the simulation settings.
Clean SR2 Files: Deletes related SR2 files to save disk space.
Set Highlight: If set to False (default), the results of the experiment will not be
highlighted in plots. If set to True, the status and results of the experiment will be
highlighted in plots. The status and results of multiple experiments can be
highlighted. This setting can be entered directly into the table by selecting the
Highlights check box.
Abort: Regardless of the status of the experiment, the status will be changed to
Aborted. If the experiment has not yet been run, it will not be run. If it has already been
submitted to the scheduler, the job will be killed.
Reprocess: Recalculate all objective function values for the experiment.
Create Dataset: A dataset will be created using the experiment parameters. If you
select an experiment with a dataset path defined, you can click the Launch
Builder button to open the dataset in Builder. The purpose of this is for viewing
only, since CMOST will not know about any changes you apply through Builder.
Restore to New: This will restore a completed experiment's status to New. Parameter
values are kept the same; however, the values of all objective functions are erased.
6.5.1.3 Operation Buttons
The following buttons are provided on the right side of the Experiments Table:
Button Name Operation

Create Experiments Manually create experiments, in addition to those
automatically generated by the CMOST engine.

Export to Excel Export the table contents to an Excel file. Refer
to Exporting the Experiment Table to Excel.

Configure the Table Configure the columns or apply filters, as outlined
in Configuring the Experiments Table.

Reprocess Experiments Recalculate all objective function values for all or
selected experiments.



Check Quality Open the Experiments Quality window to view
information about the orthogonality of the
experiments. Refer to Checking Experiment Quality
for further information.

View Simulation Log Open the selected experiment's simulation log file,
if available, in your default text editor. You specify
if the experiment log files are to be saved through
the Simulation Settings node.

Launch Builder Open the selected experiment's dataset, if available,
in Builder. You specify if the experiment datasets
are to be saved through the Simulation Settings
node.

Launch Results Graph Open the selected experiment's SR2 files, if
available, in Results Graph. The Has SR2 box will
be checked if it is available. You specify if
experiment SR2 files are to be saved through the
Simulation Settings node.

Launch Results 3D Open the selected experiment's SR2 files, if they are
available, in Results 3D. The Has SR2 box will be
checked if it is available. You can specify if
experiment SR2 files are to be saved through the
Simulation Settings node.



6.5.2 Creating Experiments
In addition to experiments that CMOST will automatically generate, you can create your own
experiments, as follows:
1. Click Create Experiments in the upper right. The Create New Experiments
dialog box is displayed:

2. Select the method you want to use to create the new experiments then click Next.
3. If you choose Using classic design, the Choose classic experiment design dialog box
will be displayed:

a. Select the Levels of Experimental Design, 2 or 3.



b. Select the Sampling Method. For Levels of Experimental Design equal
to 2, Sampling Method can be set to one of Plackett-Burman,
Fractional Factorial, or Full Factorial. For Levels of Experimental
Design equal to 3, Sampling Method can be set to one of Box Behnken
or CCD Uniform Precision. For further information about these sampling
methods, refer to Classical Experimental Design.
c. Number of Experiments will be set based on the above settings.
4. If you choose Using Latin Hypercube design, the following Create New
Experiments dialog box will be displayed (for further information about Latin
hypercube design, refer to Latin Hypercube Design):

a. Select the sampling option for Continuous Parameters Sampling, one of:
Discrete Sampling Using Pre-defined Levels: If this option is selected, the data
range will be divided equally into pre-defined levels, and experiments will only
choose discrete values.
Continuous Uniform Sampling within the Data Range: Parameters are sampled
uniformly within the data range.
Continuous Sampling Using Prior Distribution: The user-defined prior distribution
is considered while sampling the search domain.



b. Select the sampling option for Discrete Parameters Sampling, one of:
Treat Discrete Values Equally Probable: All candidate values are treated as if they
have the same probability.
Honour Prior Distribution for Discrete Values: User-defined prior distributions are
honoured during the sampling process.
c. Select the Number of Experiments. This will initially be set by
CMOST. The available selections are based on the number of parameters
and candidate values.
d. If you select Design quality optimization, CMOST will try to optimize
the quality of the design (minimizing the maximum pairwise correlation
and maximizing the minimum sample distance) by generating multiple
designs and selecting the best. If not selected, only one design is
generated and used.
e. Set Design quality optimization iterations, the maximum number of
iterations allowed for optimizing the design quality. It is only used when
Design quality optimization is selected.
f. If desired, select Use user-specified random seed to generate the
experiments and then enter the User-specified random seed. If the same
random seed is used, the sequence of the random generated experiments
can be repeated. If you do not select Use user-specified random seed, a
default random seed will be used.
g. Click the button. CMOST creates experiments based on the sampling options
and design configuration. The progress bar will indicate when the
experiment design is complete.
5. The Using all parameter combination design option will only be enabled if the
parameters are discrete real, integer, or text. Formula-based parameters are also
allowed, since these are dependent parameters. CMOST supports up to 65000
combinations of parameter values, so the Using all parameter combination
design option will not be enabled if you meet or exceed this number of
combinations.



6. If you select Manual (user defined), you can define and add your own
experiments to the Experiments Table:

a. As shown above, select candidate values for discrete variables
(HTSORW, for example) and use the slider to set the value of a
continuous variable (PERMH, in the example).
b. Click Next to add the experiment to Experiments to be added.

c. Click Finish to add the experiment to the Experiments Table.
7. If all of the parameters are discrete, you will have the option of selecting Using all
parameter combination design, which will generate a set of experiments testing
all parameter combinations.
8. Regardless of the method you use to add experiments, they will be added to the
Experiments Table when you click Finish, for example:



6.5.3 Configuring the Experiments Table
6.5.3.1 To Re-order the Columns
Drag column headings left or right to reorder the columns.



6.5.3.2 To Group the Experiments
You can drag column headings to the area above the table to group the experiments by those
columns. In the following example, the experiments are grouped first by POR and then by
PERMV:

6.5.3.3 To Disable Column Display
You can restrict the display of information in the Experiments Table by clicking the
Configure Table button then selecting Column Configure. The Column Configure dialog
box is displayed. Through this dialog box, you can select which general, parameter, and
objective function columns to display:




6.5.3.4 To Filter Experiments
If you have a large number of experiments, you can define a filter to display only those
experiments that meet the filter properties, as follows:
1. Click the Configure Table button then select Filter Configure. The
following dialog box is displayed:

2. Configure the table filter as required. In the following example, the filter will
display only those experiments where Generator contains the string
LatinHyperCube, POR is equal to 0.22, ID is greater than 5, and HTSORG is
not equal to 0.04:

3. Once you have configured the filter, click OK. The Filter info label at the bottom
of the Experiments Table will now display the filter specification to the right of
the check box.
4. If you select the Filter info check box, the Experiments Table will only display
experiments meeting the filter specification, as shown below for our example:

NOTE: To display all experiments again, clear the Filter info check box.



6.5.4 Checking Experiment Quality
Click the Check Quality button to open the Experiments Quality window which
provides information about the quality of the experiment design. Refer to Sampling Methods
for a discussion of experiment orthogonality and the distribution of sampling points about the
parameter space.
In the following example, the experiment orthogonality is about 3.1E-17, which is in the
Perfectly orthogonal range:

In the following example, the orthogonality is 0.47, which is in the Not Orthogonal range:

NOTE: If the experimental design quality is in the red area, you can try additional classical
experimental design or Latin hypercube design to improve the design quality.
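For readers who want a concrete sense of what these quality measures mean, the following
Python sketch generates several Latin hypercube designs and scores each one by its maximum
absolute pairwise correlation (near zero means nearly orthogonal) and its minimum distance
between sample points, keeping the best design. This is only an illustration of the general
idea; it is not CMOST's implementation, and the experiment counts, parameter counts, and
scoring details are assumptions.

    import numpy as np

    def latin_hypercube(n_experiments, n_parameters, rng):
        # One point per equal-probability stratum in each column, in random order.
        design = np.empty((n_experiments, n_parameters))
        for j in range(n_parameters):
            strata = rng.permutation(n_experiments) + rng.random(n_experiments)
            design[:, j] = strata / n_experiments          # values in [0, 1)
        return design

    def design_quality(design):
        # Maximum absolute pairwise correlation between parameter columns
        # (0 would be a perfectly orthogonal design).
        corr = np.corrcoef(design, rowvar=False)
        max_corr = np.max(np.abs(corr - np.eye(design.shape[1])))
        # Minimum distance between any two sample points (larger = better spread).
        diffs = design[:, None, :] - design[None, :, :]
        dist = np.sqrt((diffs ** 2).sum(axis=-1))
        np.fill_diagonal(dist, np.inf)
        return max_corr, dist.min()

    rng = np.random.default_rng(12345)          # a user-specified random seed
    best_score, best_design = None, None
    for _ in range(50):                         # design quality optimization iterations
        candidate = latin_hypercube(n_experiments=20, n_parameters=4, rng=rng)
        max_corr, min_dist = design_quality(candidate)
        score = (max_corr, -min_dist)           # low correlation first, then large spacing
        if best_score is None or score < best_score:
            best_score, best_design = score, candidate
    print("best design: max |correlation| = %.3f, min distance = %.3f"
          % (best_score[0], -best_score[1]))

The orthogonality value reported in the Experiments Quality window plays a similar role: the
closer it is to zero, the closer the set of experiments is to an orthogonal design.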



6.5.5 Exporting the Experiment Table to Excel
Click the Export to Excel button to export the table contents to an Excel file. This
command will honour the ordering of the columns that you have specified and, if a filter is active, will save only
the rows that have been filtered.
6.5.6 Viewing the Simulation Log
Click the View Simulation Log button to open the simulation log file in your default text
editor. If no text editor is set to open .log files by default, a dialog box will appear asking the
user to select a program to view the file with. The .log file contains information useful to
reservoir engineers. Through the Job Record and File Management area on the
Simulation Settings node, users can specify when to keep or delete the log file.
6.5.7 Reprocessing Experiments
If the study objective functions, unit system, or history data change, the experiment objective
functions will need to be recalculated, after which the experiments can be reused. Click the
Reprocess Experiments button to reprocess (i.e. recalculate all objective functions for) selected
experiments or all experiments.
6.6 Proxy Dashboard
The CMOST Proxy Dashboard provides an efficient way of assessing the effects of parameter
values on results, even while simulation jobs are in progress. You can see the results, as
simulations are completed, without having to wait until all of the simulations are complete.
This is particularly useful in history matching and optimization, where you may be running a
large number of simulations.
Through the Proxy Dashboard, before all of the experiments are completed, you can:
Use preliminary proxy models to begin predicting reservoir behavior.
Investigate the effect of varying input parameter values, thereby improving your
understanding of the reservoir and how proxy modeling works.
Define and add training or verification experiments to the study.
Compare different proxy models.
6.6.1 Opening the Proxy Dashboard
Once the CMOST engine has started, you can view and interpret the results of completed
experiments through the Proxy Dashboard. On the left side of the dialog box, you can
configure the Proxy Dashboard display, as follows:



Select the time series that you want to display.
Select the training or validation experiment for which you want to compare simulation,
proxy model, and field history results, and what-if scenarios.
Click to copy the selected experiment parameter values to the What-if scenario table.
Define the parameter values for the what-if scenario.
Add an experiment that uses the what-if scenario parameter values to the Experiments
Table or, if the selected experiment is not yet running, update its parameter values using
the what-if scenario parameter values.

Select series area: In this area, select the series for which you want to build a
proxy model based on completed training experiments, view results calculated by
the proxy model, and compare the proxy model prediction with simulator outputs
and field history measurements.
Reference experiment area: Select the experiment for which you want to
compare results of field history measurements, simulator predictions (from the
experiment's .irf file), and proxy model predictions (using the experiment's
parameter settings). To do this:
1. Select the generator, CMG DECE in the above example.
2. Select the Proxy Role, one of Training or Validation. The experiments
that meet the criteria will be available through the Experiment ID drop-
down list.
3. Select the Experiment ID. The experiment parameter values will be
displayed in the Selected Experiment table. If a proxy model has been
built, the proxy model prediction will be displayed in the plot.



What-if scenario area: If you click the copy button, the Selected Experiment parameter
values will be copied into the What-if scenario table. You can then adjust the
values in the What-if scenario table using drop-down selections for discrete
parameters and sliders (or by typing values in directly) for continuous variables. The
proxy-model-predicted time series for the what-if scenario will be displayed in the
plot. This technique illustrates the effect of varying parameter values on the proxy
model prediction, and can be used to define new experiments and adjust existing
experiments if they have not yet started.
If you click Update Experiment, then the parameters in the selected experiment
will be updated with the values in the what-if scenario table, as long as the
experiment has not yet started running, and it is unique.
If you click Add Experiment, a new experiment, using the parameter settings
defined in the What-if scenario area, will be added to the Experiments Table as
long as it is unique. As well, the experiment will become the reference (selected)
experiment.
6.6.2 Building a Proxy Model through the Proxy Dashboard
The proxy model is built internally by CMOST using the results of completed Training
experiments. The number of experiments needed to build a proxy model depends on the
number of parameters and the proxy type.
To build a proxy model, click the Build Proxy Model button. The Select and
Build Proxy Model dialog box is displayed:

Select the desired proxy model type, one of Classic Polynomial Proxy, Ordinary Kriging
Proxy, Advanced Polynomial Proxy, or Kernel Proxy, and then click the Build Proxy
button. Refer to Proxy Modeling for more information.
The progress bar will be changed to green once the proxy model is built:




Click OK. The proxy model is built and the model's prediction for the selected time series
using the experiment's parameter values is displayed, as shown in the following example:

Field history data
Proxy model prediction using parameter settings for Experiment 34.
Simulator prediction using parameter settings for Experiment 34.
Proxy model prediction using parameter settings for the what-if scenario.

In the above example:
The plot shows the proxy dashboard display for experiment 34 of a history matching
task that uses an ordinary kriging proxy model built after 50 experiments were run.
The plot shows the fit of the simulator and proxy model predictions for
experiment 34 with the field history data.
The what-if scenario has been generated using the parameter settings of experiment
34 as a starting point, then adjusting PERMH to move its proxy model prediction
down and below the field data, for the purpose of illustration. A user would more
likely try to align the proxy model prediction for the what-if scenario as closely as
possible with the field history data by adjusting one or more reservoir parameters,
then adding an experiment to the study with those values or, if the reference
experiment has not yet run, updating its parameters with those of the what-if scenario.



6.6.3 Interacting with the Proxy Model
6.6.3.1 To assess proxy model prediction
As illustrated above, once you have built the proxy model, you can select any completed
experiment to see how closely the proxy model prediction matches experiment simulation
results and field data. As more Training experiments are completed, you can click the
Build Proxy Model button to rebuild the same type of proxy model, or to specify and build a new
proxy model type.
6.6.3.2 To view the effect of parameter values variation
As illustrated above, through the What-if scenario area, you can investigate the effect that
varying parameter values has on the prediction of the proxy model. In our previous example,
the What-if scenario area, starting with the parameter settings for experiment 34, appears as
follows:

You can adjust the value of a continuous parameter, such as PERMH, by typing over the
value or moving the slider.
You can adjust the value of a discrete parameter, such as HTSORW, by selecting the
desired value in the drop-down list.
Add an experiment using the what-if scenario parameters to the Experiments Table or, if
the reference experiment is not yet running, update its parameters using the what-if
scenario parameter values.

6.6.3.3 To zoom in to Proxy Dashboard plots
As with other plots, you can define and zoom into an area to view the proxy dashboard plot in
more detail, then right-click the plot and select Un-zoom to 100% to zoom back out.
6.6.3.4 To copy the Proxy Dashboard plot
You can also save the proxy dashboard plot to one of several image file formats, or copy it to
the Windows clipboard then paste it into another application, such as Excel or Word.
6.6.3.5 To add experiments through the Proxy Dashboard
You can add an experiment using the parameter values for the what-if scenario by clicking
the Add Experiment button or, if the selected experiment is not yet running, update its
parameter values with those of the what-if scenario by clicking the Update Experiment
button.
Through the Experiments Table, you can make further adjustments, for example, you can
change parameter values or the experiment's proxy role.



You can run these experiments then, through the Proxy Dashboard, see how well the
simulation results for the added or modified experiment compare with the field history data
and the proxy model prediction.
6.6.3.6 To reload the proxy dashboard display
The Proxy Dashboard page does not automatically update if you remain on it; for instance, a
selected experiment may complete, but its *.irf curve will not show up in the Proxy
Dashboard plot unless you click Reload.
If the CMOST engine has added experiments in the background (experiments added by the
DECE engine, for example), the added experiments will be available for selection after you
click Reload.
6.6.4 Changing the Proxy Role
At any time, you can open the Experiments Table, change the proxy role, and then return to
the Proxy Dashboard. You may want to do this, for example, if you feel an experiment is an
outlier and you do not want to use it for training purposes.
6.7 Simulation Jobs
You can view the status of the simulation jobs, by clicking the Control Centre | Simulation
Jobs node. As experiments are completed, details are recorded in the Simulation Jobs table,
as shown in the following example:

NOTE: As with the Experiments Table, you can group the rows in the Simulation Jobs table
by dragging the headings to the area above the table. You cannot, however, reorder the
columns by dragging their headings right or left.
The columns in the table are as follows:
Experiment: Unique (within the study) experiment ID number.
Launcher ID: Each experiment in a study will have a unique experiment ID, and will
be assigned a unique Launcher ID when it is submitted to the scheduler.
The Launcher ID will be different from the experiment ID since the scheduler may
process, within a single session, experiments from multiple studies.
Scheduler: The scheduler to which Launcher submitted the job.
Execution Node: Computer executing the job.



Launcher Status: Launcher job status, one of Waiting to Start, Running, or Complete.
Submitted At: Time Launcher submitted the job to the simulator.
Started At: Time the simulator started the simulation.
Finished At: Time the simulator completed the simulation.
Dataset: Dataset used for the simulation. If you selected Delete .dat on Normal
Termination, the dataset will be deleted if the simulation job terminated normally.
If you did not select Delete .dat on Normal Termination, the datasets will be stored
in the study folder.
Status: Launcher job status, one of Pending or Finished.
Results Status:
- Abnormal Termination: Job terminated by simulator before reaching
the stop time.
- Incomplete: Job cannot run to completion as a result of hardware or
software issues.
- Normal Termination: The simulator ran the job through to completion
without any problems.
- Killed: Job was killed by the user.
- Failed: Job failed to complete due to a hardware, software, or license
problem.
- Unknown: While the job is running, its result status is unknown.
Results Status Info: Information to clarify the reason for a particular Results
Status. If the Results Status for an experiment is Abnormal Termination, Results
Status Info may, in the case of an optimization study, be Convergence not achieved.
On the right side of the table, the following buttons are provided:
Button Name Operation

Refresh Simulation Jobs Click to force CMOST to obtain an update of the status
of the simulation jobs.

View Simulation Log Open the selected job's simulation log file, if available,
in your default text editor. The log file may not be
available after the job is Complete, depending on your
Simulation Settings.

Launch Builder Open the selected job's dataset in Builder. The .dat file
for the job may not be available after the job is
Completed, depending on your Simulator Settings.



Launch Results Graph Open the selected job's SR2 files, if available, in Results
Graph. The SR2 files for the job may not be available
after the job is Completed, depending on your
Simulation Settings.

Launch Results 3D Open the selected job's SR2 files, if available, in
Results 3D. The SR2 files for the job may not be
available after the job is Completed, depending on your
Simulation Settings.





7 Viewing and Analyzing Results
7.1 General Information
Through the Results & Analyses node, you can view the results of the CMOST runs. As
soon as the study runs start, CMOST will begin to display the results.
7.1.1 Display of Multiple Plots
In the main page of each Results & Analyses node that has subnodes, you can display a mix
of plots from any of the subnodes, as shown in the following example:
1. Click Select in the Operations area. The Select Plots dialog box is displayed:

2. As shown in the above example, immediate subnodes appear as tabs; each tab then
contains a table of lower-level nodes. Select the plots you want to display in
the main node. You can select plots from different nodes and subnodes.



3. In the Operations area of the main node, configure the plot display by selecting
the number of plots on the horizontal and the vertical. In the following Results &
Analyses | Objective Functions node, we are displaying a run progress plot, a
histogram, a cross plot, and a proxy analysis plot:

If the number of plots exceeds the number allowed per page, multiple pages will be
required and you can step through these. Alternatively, you can change the number
allowed horizontally and vertically, as shown above.
4. If you are displaying the plots while the simulations are running, the plots will be
refreshed regularly; however, you can click the Refresh button to update the
plots immediately if new data is available.
7.1.2 Screen Operations
Refer to Plots for information about general operations, such as saving plots to image files
and the Windows clipboard, and zooming in and zooming out.
7.1.3 Navigating the Tree View
Once you select a node in the tree view, you can navigate up and down open nodes using the
UP and DOWN ARROW keys. You can also use the LEFT and RIGHT ARROW keys to
open and close tree nodes. If there are no nodes to open or close, using the LEFT and RIGHT
ARROW keys will move the cursor up and down the tree hierarchy.



7.2 Parameters
Through the Parameters node, you can display results information on the basis of study
parameters.
7.2.1 Run Progress
Through the Parameters | Run Progress node, you can view plots that show the progress of
the simulation runs in terms of study parameters:

The above example shows run progress based on parameter INJP2007 and the experiment ID.
Each blue data point represents one experiment. As the run progresses, more and more
experiments (data points) are added. In the case of history matching and optimization studies,
the current optimal solution is also indicated by a red diamond-shaped data point. As the run
progresses, the optimal solution will be changed whenever a better solution is found.
As shown above, if you move the pointer over a data point, the experiment ID and the value
of the parameter (INJP2007) will be displayed.
A Run Progress plot is produced for each study parameter, which you can view by selecting
the plot node in the tree view.



If you right-click the plot and then select Data, you can open the Run Progress Data Table
dialog box:

Using the buttons on the right, you can export the contents of the Run Progress Data Table
to Excel, or copy it to the Windows clipboard then paste it into another application.
7.2.2 Histograms
Through the Parameters | Histograms node, you can view the frequency with which a
parameter value has been used in study experiments, for example:

As with other plots, you can save the image in various formats, or copy it to the Windows
clipboard then paste it into other applications. You can also zoom in and zoom out.



If you right-click the plot then select Data you can view and save the data in tabular format:

Using the buttons on the right, you can export the contents of the Results Histogram Data
Table to Excel, or copy it to the Windows clipboard then paste it into another application.
7.2.3 Parameter Cross Plots
Through the Parameters | Cross Plot node, you can view and assess trends and relationships
between parameters, and between parameters and objective functions. A typical cross plot is
shown below:

To generate a cross plot:
1. In the tree view, under Parameters | Cross Plots node, select the parameter you
want to analyze.



2. In the Cross Plot Settings table to the left, select the cross plots you want to display,
TotalOilProduced in the above example. As outlined previously, you can display
multiple cross plots in one view using the Plots on horizontal and vertical settings.
The cross plot will automatically include points from the base dataset if it is
available, and optimal and user-highlighted solutions will be identified.
7.3 Time Series
7.3.1 Observers
An example of a Time Series Results Observer plot is shown below:

The optimal solution in the above Time Series Results Observers plot is colored red. Other
runs are shown in blue. The thicker the lines appear, the more runs exist that are similar to the
one shown.
If a field history file is specified for the Result Observer, data points from this file will be
displayed in the plot as dark blue circles.
For a history matching or an optimization minimization task, the optimal solution is the run
with the lowest global objective function. For an optimization maximization task, the optimal
solution is the run with the highest global objective function.



To export time-series data to a .txt file:
1. Right-click anywhere in the time-series plot and then select Data. The Export
Series Data Time Settings dialog box is displayed:

2. Set the time you want to export with the data, one of:
- Export time for each experiment: The data will be exported with the
time associated with each experiment.
- Export common time for all experiments: The data will be adjusted to
a common time grid before it is exported. In this case you will define a
start and end time, and the number of points between them (a sketch of
this adjustment follows this procedure).
3. Click Next. The Export Series Data dialog box is displayed:

4. Select the series that you want to export, one of the following:
- Export all experiments
- Export user-highlighted experiments
- Export optimal solution experiment
- Export user selected experiments. If you select this option, you can click
User Selections to open the Select experiments to export table. You
can then select multiple experiments, making use of the CTRL and
SHIFT keys as necessary, then click Check. You can also select
experiments then click Uncheck to cancel. Finally, you can click the
Export check box directly.



5. Click Next. The Export Series Data File Information dialog box is displayed:

6. Enter the specification and configuration information for the exported text file.
7. Click Finish to export the time series data.
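The sketch below illustrates, in Python, what adjusting several experiments to a common time
grid involves: each experiment's series, recorded at its own report times, is interpolated onto
one grid defined by a start time, an end time, and a number of points. The times and values
are invented for illustration, and CMOST's own export may interpolate differently.

    import numpy as np

    # Hypothetical results from three experiments: (report times in days, values).
    experiments = {
        1: (np.array([0.0, 30.0, 95.0, 200.0, 365.0]),
            np.array([0.0, 80.0, 260.0, 540.0, 900.0])),
        2: (np.array([0.0, 45.0, 180.0, 365.0]),
            np.array([0.0, 120.0, 430.0, 880.0])),
        3: (np.array([0.0, 10.0, 120.0, 300.0, 365.0]),
            np.array([0.0, 25.0, 310.0, 700.0, 910.0])),
    }

    # Common time grid: start time, end time, and the number of points between them.
    common_time = np.linspace(0.0, 365.0, num=13)

    # Linearly interpolate every experiment onto the common grid before export.
    exported = {exp_id: np.interp(common_time, times, values)
                for exp_id, (times, values) in experiments.items()}

    for exp_id, values in exported.items():
        print("Experiment", exp_id, np.round(values, 1))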
7.4 Property vs. Distance
7.4.1 Observers
An example of a property vs. distance results observer plot is shown below:




As with the time series results observers, the optimal solution is colored red. User-highlighted
solutions are shown in purple. The remaining runs are shown in blue.
If a field history file is specified for the Result Observer, its data points will be displayed in
the plot as dark blue circles.
For a history matching or an optimization minimization task, the optimal solution is the run
with the lowest global objective function. For an optimization maximization task, the optimal
solution is the run with the highest global objective function.
You can export property vs. distance observer data to a text file, in a manner similar to the
procedure for exporting time series observer data, which is described in Exporting Time
Series Data. In the case of property vs. distance data, a time setting is not necessary.
7.5 Objective Functions
7.5.1 Run Progress
In the Objective Function Run Progress plots, the optimal solution is shown in red. In the
following example, we have moved the pointer over the optimal solution data point to display
the experiment ID and the value of GlobalObj:

As with other plots, you can save the image in various formats, or copy it to the Windows
clipboard then paste it into other applications. If you right-click the plot then select Data you
can view and save the data in tabular format.



7.5.2 Histogram
The Objective Function Histogram plot shows the distribution of objective functions
calculated from the experiments:

As with other plots, you can save the image in various formats, or copy it to the Windows
clipboard then paste it into other applications. You can also zoom in and zoom out. If you
right-click the plot then select Data you can view and save the data in tabular format:

Using the buttons on the right, you can export the contents of the Results Histogram Data
Table to Excel, or copy it to the Windows clipboard then paste it into another application.



7.5.3 Objective Function Cross Plots
Using objective function cross plots, you can identify trends and relationships between, for
example, objective functions and parameters, as shown below:

7.5.4 OPAAT Analysis
If you are performing a sensitivity analysis using the OPAAT engine, OPAAT plots will be
produced for each objective function, using different experiments as the reference, as shown
in the following example, where we have used the Select button to select multiple OPAAT
displays.



In the above example, the OPAAT plot uses Experiment ID 1 as the reference. The vertical
darker line indicates the value of the objective function produced by the experiment.
The values of the objective function for a parameter are shown by the side bars. If there is a
positive relationship between the objective function and the parameter, the
Parameter Upside (blue bar) is to the right and the Parameter Downside (red bar) is to the
left of the reference experiment which is shown with the vertical darker line. On the other
hand, if the relationship between the objective function and the parameter is negative, the
corresponding bars are reversed; i.e., blue bar is on the left and red bar is on the right.
The overall length of the bar indicates the degree to which changing the value of the
parameter affects the value of the objective function.
In Experiment ID 1, PERMH had a value of 4500. If we decrease PERMH to 3000, holding all
other parameters constant, the effect on ProducerCumOil is to reduce it to 951. If we increase
PERMH to 6000, ProducerCumOil will increase to 1209. There is a strong positive relationship
between PERMH and ProducerCumOil, indicated by the red bar to the left and the blue bar to
the right. For comparison, the relationship between HTSORG and ProducerCumOil is a
negative one; i.e., starting with Experiment ID 1 then increasing HTSORG to 0.06 while
holding all other parameters constant will decrease ProducerCumOil to 1063.
NOTE: When a horizontal bar in the OPAAT diagram ends on the reference experiment line,
this means the experiment was conducted with the parameter at a minimum or maximum value.
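The Python sketch below reproduces the one-parameter-at-a-time logic with a made-up response
function standing in for the simulator; the parameter names, ranges, and function are
assumptions for illustration only. Each parameter is moved to its low and then its high value
while the others stay at their reference values, which gives the downside and upside bars of
the OPAAT plot and their lengths.

    # Parameter ranges and a reference experiment (all values are illustrative).
    ranges = {"PERMH": (3000.0, 6000.0), "HTSORG": (0.02, 0.06), "POR": (0.18, 0.26)}
    reference = {"PERMH": 4500.0, "HTSORG": 0.04, "POR": 0.22}

    def objective(params):
        # Stand-in for a simulation run returning, e.g., cumulative oil produced.
        return 0.2 * params["PERMH"] - 3000.0 * params["HTSORG"] + 800.0 * params["POR"]

    print("reference objective:", round(objective(reference), 1))

    for name, (low, high) in ranges.items():
        low_case = dict(reference, **{name: low})      # only this parameter changes
        high_case = dict(reference, **{name: high})
        low_val, high_val = objective(low_case), objective(high_case)
        # The bar length shows how strongly this parameter affects the objective;
        # the side on which the high value lands shows the sign of the relationship.
        print("%-7s low -> %8.1f   high -> %8.1f   bar length %7.1f"
              % (name, low_val, high_val, abs(high_val - low_val)))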



7.5.5 Proxy Analysis
Information is displayed in this node if there are enough experiments to build the proxy model.
Once enough runs are available, one or more types of proxy models (linear, quadratic,
reduced linear, reduced quadratic, and ordinary kriging) will be created. For sensitivity
analysis studies, the proxy models are then used to determine the main (linear) effects,
interaction effects, and quadratic (nonlinear) effects. For uncertainty assessment studies,
proxy models are used to perform Monte Carlo simulations using the distributions from
the Prior Probability Distribution Functions defined in the Parameters page of the study file.
NOTE: It is important to check the verification plot of each response surface model for obvious
outliers. If there are outliers, the cause of the outliers should be investigated. Sometimes the
jobs that correspond to the outliers may need to be re-run or excluded from the analysis.
7.5.5.1 Proxy Model Verification Plot
In the Proxy Analysis node, if you select an objective function subnode, information about
the proxy models for that objective function will be displayed, as shown in the following
example, in which the Model Quality Check (QC) tab is selected:

The Model QC plot shows how closely the proxy model predictions match actual values from
the simulations. The 45 degree line represents a perfect match between the proxy model and
actual simulation results. The closer the points are to the 45 degree line, the better the match
between the predicted and actual data. The points that fall on the 45 degree line are those that
are perfectly predicted. The points that are far from the 45 degree line are outliers. If there are
outliers, the cause needs to be investigated before making use of the proxy model.



For polynomial regression models (Linear, Quadratic, Reduced Linear, and Reduced
Quadratic), the lower and upper 95% confidence curves are superimposed on the actual by
predicted plot. These confidence curves are useful for determining whether the regression
model is statistically significant. The lower and upper 95% confidence curves are determined
using the equations given in the paper "Leverage Plots for General Linear Hypotheses", John
Sall, The American Statistician, November 1990, Vol. 44, No. 4.
The dark blue points are the training experiments used by CMOST to create the proxy model.
The green points are verification experiments used by CMOST to check if the proxy model
created is a good proxy to the actual simulation results.
For ordinary kriging models, all of the points for the training jobs should be exactly on the 45
degree line. Therefore, only the verification jobs can be used to estimate the predictability of
an ordinary kriging model.
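As a rough illustration of what the Model QC plot summarizes, the Python sketch below fits a
simple linear proxy on training experiments only and then compares predicted against actual
values for both training and verification experiments; points near the 45 degree line correspond
to good predictions. The data and the proxy form are invented, and CMOST's proxy types and
statistics are more elaborate.

    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical experiments: two parameters and an "actual" simulated objective value.
    X = rng.uniform(0.0, 1.0, size=(60, 2))
    actual = 5.0 + 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(0.0, 0.1, size=60)

    train, verify = slice(0, 45), slice(45, 60)   # proxy roles: Training / Verification

    # Fit a linear proxy by least squares on the training experiments only.
    A_train = np.column_stack([np.ones(45), X[train]])
    coef, *_ = np.linalg.lstsq(A_train, actual[train], rcond=None)

    def proxy(x):
        return coef[0] + x @ coef[1:]

    for label, rows in (("training", train), ("verification", verify)):
        predicted = proxy(X[rows])
        # The QC plot shows the (actual, predicted) scatter; the numbers below
        # measure how far the points sit from the 45 degree line.
        rmse = np.sqrt(np.mean((predicted - actual[rows]) ** 2))
        ss_res = np.sum((predicted - actual[rows]) ** 2)
        ss_tot = np.sum((actual[rows] - actual[rows].mean()) ** 2)
        print("%-12s  RMSE = %.3f   R^2 = %.3f" % (label, rmse, 1.0 - ss_res / ss_tot))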
7.5.5.2 Proxy Model Data Table
The proxy model can be viewed in tabular form by right-clicking the verification plot then
selecting Data to open the Proxy Model Qc Results table:

As with other results tables, you can save the contents of the Proxy Model Qc Results table to
Excel or to the Windows clipboard, from which it can be pasted into another application.



7.5.5.3 Proxy Model Statistics
A detailed report of the response surface statistics is available by selecting the Statistics tab,
shown in the following example (Effect Screening Using Normalized Parameters and
Coefficients in Terms of Actual Parameters tables in the example have been cropped to fit
on the page):

For polynomial regression models (Linear, Quadratic, Reduced Linear, and Reduced
Quadratic), see Proxy Modeling for an explanation of the statistical terms used in CMOST.
As shown above, the Statistics tab contains the following five sections.
Summary of Fit
Analysis of Variance
Effect Screening Using Normalized Parameters
Coefficients in Terms of Actual Parameters
Equation in Terms of Actual Parameters



For ordinary kriging models, the variogram base, exponent, and weights will be shown if the
number of experiments is not more than 100.
You can copy the contents of the Statistics tab to Excel by selecting the desired portion of the tab
with your mouse, or by pressing CTRL+A to select everything. Right-click in the selected area then
select Copy. In Excel, click Paste.
7.5.5.4 Proxy Model Effective Estimate
For information on interpreting the information in this tab, refer to Linear Model Effect
Estimates, Quadratic Model Effect Estimates, and Reduced Model Effect Estimates as
appropriate.
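As background for how such effect estimates can be read, the following Python sketch fits a
linear proxy after normalizing each parameter to the range [-1, 1], so that coefficient sizes
become directly comparable, and reports each parameter's estimated low-to-high effect on the
objective. The parameter names, ranges, and response are invented, and CMOST's reduced and
quadratic models include interaction and curvature terms not shown here.

    import numpy as np

    rng = np.random.default_rng(3)

    # Hypothetical parameter samples (in actual units) and simulated objective values.
    names = ["PERMH", "HTSORG", "POR"]
    low = np.array([3000.0, 0.02, 0.18])
    high = np.array([6000.0, 0.06, 0.26])
    X = rng.uniform(low, high, size=(40, 3))
    y = 0.2 * X[:, 0] - 3000.0 * X[:, 1] + 800.0 * X[:, 2] + rng.normal(0.0, 5.0, size=40)

    # Normalize every parameter to [-1, 1] so coefficient magnitudes are comparable.
    Xn = 2.0 * (X - low) / (high - low) - 1.0

    A = np.column_stack([np.ones(len(Xn)), Xn])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)

    # Main-effect estimate: the change in the objective when a parameter moves from
    # its low to its high value, i.e. twice the normalized coefficient for a linear model.
    for name, c in sorted(zip(names, coef[1:]), key=lambda item: -abs(item[1])):
        print("%-7s effect estimate = %9.1f" % (name, 2.0 * c))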
7.5.5.5 Monte Carlo Simulations
In the case of uncertainty analyses, the Monte Carlo Simulation tab shows the distribution
of values for each objective function with all uncertain parameters sampled from the prior
probability density functions, as illustrated in the following example:

The plot shows a histogram of objective function values, to illustrate the shape of the
probability density function, as well as the cumulative probability. P10, P50, and P90 values
are highlighted.



Data can be viewed in tabular form by right-clicking the plot and then selecting Data. The
Monte Carlo Unconditional Simulation Results dialog box is displayed, as shown in the
following example:

The table shows the distribution data for selected Monte Carlo simulations, sorted by cumulative
probability values. In this data table, you can find the objective function value and the
corresponding parameter values for typical cumulative probabilities, such as P10, P50, and P90.
Click the corresponding buttons to copy the data to the Windows clipboard, or to export it to an Excel
spreadsheet. If you select rows of the table using the CTRL and SHIFT keys, then only those
rows will be copied to the clipboard or spreadsheet.
Similarly, select one or more data rows then click the corresponding button to create new experiments with those
parameter settings.
You can also right-click the plot then select commands to save the image in one of several
formats, or copy the image to the Windows clipboard.
If you right-click the plot and then select All Generated Monte Carlo Simulation Data, you
will open a table that shows the results of all the Monte Carlo simulations. Again, you can
copy the content of the table to the Windows clipboard or to a spreadsheet, as outlined above.
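For orientation, the Python sketch below mimics the idea behind these Monte Carlo runs:
parameter values are drawn from assumed prior distributions, a stand-in for the proxy model is
evaluated for every sample, and P10, P50, and P90 are read from the resulting distribution of
objective-function values. The distributions, parameter names, and stand-in proxy are
assumptions; CMOST samples the priors defined on the Parameters page and evaluates the
proxy built from your experiments.

    import numpy as np

    rng = np.random.default_rng(42)
    n = 20000                                   # number of Monte Carlo samples

    # Assumed prior probability distributions (illustrative only).
    permh = rng.triangular(3000.0, 4500.0, 6000.0, size=n)
    htsorg = rng.uniform(0.02, 0.06, size=n)
    por = rng.normal(0.22, 0.02, size=n)

    def proxy(permh, htsorg, por):
        # Stand-in for the proxy model built from the completed experiments.
        return 1.0e-3 * permh * por - 10.0 * htsorg

    values = proxy(permh, htsorg, por)

    # Percentiles of the objective-function distribution (plain percentile convention).
    p10, p50, p90 = np.percentile(values, [10, 50, 90])
    print("P10 = %.3f   P50 = %.3f   P90 = %.3f" % (p10, p50, p90))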



7.5.5.6 Proxy Settings
Through the Proxy Settings tab, you can select a proxy type then configure the available settings:

After you change your settings, click the Build Proxy button.




8 General and Advanced Operations
8.1 CMM File Editor
8.1.1 Introduction
The CMOST CMM File Editor is a tool for viewing and editing the CMOST master dataset
(.cmm) and related include (.inc) files. This editor can be used to:
Create/Insert/Delete CMOST parameters.
Comment out/Uncomment multiple lines in a file.
Create/Edit/Extract include files.
Quickly navigate through the file being edited.
8.1.1.1 To Start the CMOST CMM Editor
To start the CMM Editor, click the Edit button on the General Properties page or
the Parameters page.
CMOST CMM Editor Toolbar and Status Bar
Functions will show up as you move the pointer over the toolbar buttons.

The toolbar and status bar provide the following functions: Save, Cut, Copy, Paste, Undo,
Redo, Add Comment, Comment Lines, Uncomment Lines, Create CMOST Parameter,
Delete CMOST Parameter, Previous CMOST Parameter, Next CMOST Parameter,
Find & Replace, Go To Line Number, Line Number, Search CMOST Parameter,
CMOST Parameters, Zoom In, Zoom Out, Create Include File, Open Include File,
Extract Include File, Reservoir Sections, Toggle Outlining, and Toggle Syntax Highlighting.




8.1.1.2 CMOST CMM Editor Context Menu
Most functions can be accessed through the context menu. To display the context menu,
right-click in the text editor:

NOTE: Refer to Keyboard Shortcuts for information about shortcuts.
8.1.1.3 To Create/Insert a CMOST Parameter
Select the part that needs to be replaced with a CMOST parameter, and then click the Create
CMOST Parameter button. Enter a parameter name and default value (optional) to create
a new parameter. To insert an existing parameter, select the parameter name in the drop-down
list. If the default value of the selected parameter has been defined, the previous default value
will be used.

Click OK. The selected text will be replaced by a CMOST parameter.




8.1.1.4 To Delete a CMOST Parameter
Move the cursor into a parameter definition part, and then click the Delete CMOST
Parameter button. The CMOST parameter will be deleted. If a default value is defined,
the parameter will be replaced by that default value.
8.1.2 Working with Comments
8.1.2.1 To Add a Comment
Move the cursor to the place where you want to add a comment, and then click the Add Comment
button to add a new comment.
8.1.2.2 To Comment Out Sel ected Lines
Select one or multiple lines then click the Comment Lines button to comment out the
selected lines.
8.1.2.3 To Uncomment
Select one or multiple comment lines, click the Uncomment Lines button to delete the
comment indicator in each selected line.
NOTE: Only the CMG default comment indicator (**) is supported.
8.1.3 Working with Include Files
8.1.3.1 To Create an Include File
Select a portion of the cmm file, and then click the Create Include File button. Enter the
include file name in the Create Include File dialog box.
NOTE: The include file will be saved to the same folder as the master dataset file.




Click OK. The include file will be created, and the selected text will be replaced by an
include line, for example:

In the Create/Extract Include File dialog box, if the Create CMOST parameter check box is
selected, then a CMOST parameter will be created using the created include file as its default value.
8.1.3.2 To Edit an Include File
Move the cursor to an include file line, then click the Open Include File button to open
the include file in a new CMM Editor session.

8.1.3.3 To Extract an Include File
Move the cursor to an include file line, and then click the Extract Include File button.
The Extract Include File dialog box will be displayed. Click OK. The include file line in the
master dataset will be replaced by the content in the include file. If Delete include file is
selected, the include file will be deleted after extraction.




8.1.4 Navigation Tools
8.1.4.1 To Go to a Line
Enter the line number you want to go to in the Line Number box in the toolbar and then
press Enter. At any time, click the Go To Line Number button to go to the line number entered in the box.
8.1.4.2 To Go to a Section
Click the Reservoir Sections button and select any item in the drop-down list to go to
the specified section.

8.1.4.3 To Go to a CMOST Parameter
Click the CMOST Parameters drop-down list in the upper-right of the screen and select any
item in the list to go to the line that contains the parameter. If a CMOST parameter appears
multiple times in the file, click the Search CMOST Parameter button to go to its next
appearance. CMOST parameters in the list are sorted alphanumerically, as shown in the
following example.

8.1.5 Other Functions
8.1.5.1 Toggle Outlining
Click the Toggle Outlining button to collapse or expand multi-line comments and data
blocks.
You can collapse a section of the CMM file by clicking the outlining symbol at its head node,
or expand it by clicking the symbol again. Move the cursor over any collapsed head node to preview part of the collapsed
lines.



8.1.5.2 Toggle Syntax Highlighting
Click the Toggle Syntax Highlighting button to turn on or turn off syntax highlighting.
For large CMM files, turning syntax highlighting off speeds up navigation of the file.
8.1.5.3 Find/Replace Text
Click the Find and Replace button on the toolbar to open the Find and Replace dialog
box. Regular expression searching is supported.

8.1.5.4 Block Selection
Block selection is used to select a rectangular portion of text.
To make a block selection with the mouse, hold the ALT key, click in the text area
and then drag to define the selection.
To make a block selection with the keyboard, hold the SHIFT+ALT keys, and then
press any arrow key.



8.1.5.5 Multiple Views of a File
Select then drag the handle located at the top right corner of the editor to enable multiple
views of a file. A maximum of two vertically split views are supported. Horizontally split
views are not available.




8.1.6 Keyboard Shortcuts
Commands Shortcut Keys
Redo CTRL+Y
Undo CTRL+Z
Cut CTRL+X
Copy CTRL+C
Paste CTRL+V
Find CTRL+F
Save File CTRL+S
Block Selection ALT+Mouse Selection
Create Parameter CTRL+P
Delete Parameter CTRL+SHIFT+P
Open Include File CTRL+G
Create Include File CTRL+E
Extract Include File CTRL+SHIFT+E
Add Comment CTRL+K
Comment Out CTRL+SHIFT+C
Uncomment CTRL+SHIFT+U



8.2 Handling Large Files
When handling large dataset files, if the CMM File Editor seems slow, you can try saving
grid block properties such as POR and PERMI into include files. This can be done using
Builder. Refer to the Organizing Include Files section in the Builder User Guide for more
information. Segments related to these keywords often account for more than 90% of the size
of a large file. The size of the main CMM file can be greatly reduced by using include files.
8.3 Formula Editor
Formulas are equations that perform calculations on values during a CMOST run. Formulas
can appear in CMOST master datasets. In a CMOST master dataset, formulas can appear
anywhere provided that each formula is enclosed within a start tag <cmost> and an end tag
</cmost>. As an advanced feature, CMOST also allows using JScript code to create formulas
(see Using JScript Expressions in CMOST for details).
8.3.1 Parts of a Formula
A formula can contain any or all of the following: functions, variables (parameter or objective
function term names), constants, and operators.
Constants: Numbers or text values entered directly into a formula, such as 0.001 or
"OPEN".
Functions: The POWER(30, 0.25) function returns the result of 30 raised to the power 0.25.
Variables: SteamPress returns the chosen value of parameter SteamPress for the
particular simulation job created by CMOST.
Operators: The + (plus) operator adds, and the * (asterisk) operator multiplies.
8.3.2 Constants in Formulas
A constant is a value that is not calculated. For example, the number 210 and the text
"OPEN" are both constants. Text must be enclosed in double quotation marks. An expression,
or a value resulting from an expression, is not a constant.
8.3.3 Functions in Formulas
Functions are predefined formulas that perform calculations by using specific values, called
arguments, in a particular order, or structure. Functions can be used to perform simple or
complex calculations.
Structure of a function:
POWER(SteamPress * 0.001, 0.2388057)
Structure: The structure of a function begins with the function name, an opening
parenthesis, the arguments for the function separated by commas, and a closing
parenthesis.
Function name: Refer to List of Built-in Functions in CMOST.



Arguments: Arguments can be constants, parameter names, formulas, or other
functions. The argument you designate must produce a valid value for that
argument. For the LOOKUP function, arrays of constants or formulas can be
arguments as well.
In certain cases, you may need to use a function as one of the arguments of another function.
In this case, the function used as an argument is a nested function. When a nested function is
used as an argument, it must return the same type of value that the argument uses. For
example, in the hypothetical formula IF(POWER(SteamPress * 0.001, 0.5) >= 1.5, "OPEN", "CLOSED"),
the POWER function is nested inside IF and must return a number, because the logical test
compares it to a number.
8.3.4 Variables in Formulas
For formulas in the Master Dataset and Parameters page, a parameter name can be used as
a variable name. For formulas in the Objective Functions page, an objective function term
name can be used as a variable name. A parameter name identifies a parameter defined in
your CMOST study file. For a particular job created by CMOST, each parameter has one
chosen value, which will be used by CMOST in evaluating formulas. An objective function
term name identifies an objective function term within the objective function. For a complete
job, CMOST will extract the value from SR2 files and use it in calculating formulas.
For example, the following formula contains a parameter name SteamPress. For a specific
simulation job created by CMOST, the chosen value of parameter SteamPress is equal to
2500. When CMOST evaluates the formula, the result will be 223.78.
179.7989 * POWER(SteamPress * 0.001, 0.2388057)
Another example, the following formula contains two objective function terms: CumOil and
CSOR. CumOil is defined as the Cumulative Oil SC at 2008-12-01 and CSOR is defined as
the cumulative steam oil ratio at 2008-12-01. For a specific finished simulation job, CumOil
is equal to 6.528e5 m3 and CSOR is equal to 3.15. When CMOST evaluates the formula, the
result will be 0.207.
1e-6 * CumOil/CSOR
8.3.5 Operators in Formulas
Operators specify the type of calculation that you want to perform on the elements of a formula.
CMOST includes two different types of calculation operators: arithmetic and comparison.
Arithmetic operators: To perform basic mathematical operations such as addition, subtraction,
or multiplication; combine numbers; and produce numeric results, use the following arithmetic
operators.
Arithmetic operator Meaning (Example)
+ (plus sign) Addition (3+3)
- (minus sign) Subtraction (3-1); Negation (-1)
* (asterisk) Multiplication (3*3)
/ (forward slash) Division (3/3)
Comparison operators: You can compare two values with the following operators. When two
values are compared by using these operators, the result is a logical value either TRUE or FALSE.
Comparison operator Meaning (Example)
==(double equal sign) Equal to (A1==B1)
>(greater than sign) Greater than (A1>B1)
<(less than sign) Less than (A1<B1)
>=(greater than or equal to sign) Greater than or equal to (A1>=B1)
<=(less than or equal to sign) Less than or equal to (A1<=B1)
!=(not equal to sign) Not equal to (A1!=B1)
8.3.6 Formula Calculation Order
Formulas calculate values in a specific order. CMOST calculates the formula from left to
right, according to a specific order for each operator in the formula.
Operator precedence: If you combine several operators in a single formula, CMOST performs
the operations in the order shown in the following table. If a formula contains operators with
the same precedence (for example, both a multiplication and a division operator), CMOST
evaluates the operators from left to right.
Operator Description
- Negation (as in -1)
* and / Multiplication and division
+ and - Addition and subtraction
== != < > <= >= Comparison
Use of parentheses: To change the order of evaluation, enclose in parentheses the part of the
formula to be calculated first. For example, the following formula produces 11 because
CMOST calculates multiplication before addition. The formula multiplies 2 by 3 and then
adds 5 to the result.
=5+2*3
In contrast, if you use parentheses to change the syntax, CMOST adds 5 and 2 together and
then multiplies the result by 3 to produce 21.
=(5+2)*3
8.3.7 List of Built-in Functions in CMOST
NOTE: The list here only includes the built-in functions in CMOST. As an advanced feature,
CMOST allows the use of JScript code to write formulas, so all functions supported by
JScript are supported by CMOST. See Using JScript Expressions in CMOST for more details.
8.3.7.1 IF
Returns one value if a condition you specify evaluates to TRUE and another value if it
evaluates to FALSE. Use IF to conduct conditional tests on values and formulas.
Syntax
IF(logical_test, value_if_true, value_if_false)
logical_test is any value or expression that can be evaluated to TRUE or FALSE. For
example, WellLen>=800 is a logical expression; if the value of WellLen is greater than or
equal to 800, the expression evaluates to TRUE. Otherwise, it evaluates to FALSE. This
argument can use any comparison operator.
value_if_true is the value that is returned if logical_test is TRUE. Value_if_true can be a
constant or a formula.
value_if_false is the value that is returned if logical_test is FALSE. Value_if_false can be a
constant or a formula.
Example 1
Use the following IF functions to open/close certain perforations of a well according to the
length of the well in a CMOST master dataset.
PERF GEO 'Horizontal'
**$ UBA      ff   Status                                      Connection
   9 3 12    1.   OPEN                                        FLOW-TO 'SURFACE'
   9 4 12    1.   OPEN                                        FLOW-TO 1
   9 5 12    1.   OPEN                                        FLOW-TO 2
   9 6 12    1.   OPEN                                        FLOW-TO 3
   9 7 12    1.   <cmost>IF(WL>=500,"OPEN","CLOSED")</cmost>  FLOW-TO 4
   9 8 12    1.   <cmost>IF(WL>=600,"OPEN","CLOSED")</cmost>  FLOW-TO 5
   9 9 12    1.   <cmost>IF(WL>=700,"OPEN","CLOSED")</cmost>  FLOW-TO 6
For a simulation job created by CMOST with WL=600, perforations for well 'Horizontal' in
the dataset will be:
PERF GEO 'Horizontal'
**$ UBA      ff   Status   Connection
   9 3 12    1.   OPEN     FLOW-TO 'SURFACE'
   9 4 12    1.   OPEN     FLOW-TO 1
   9 5 12    1.   OPEN     FLOW-TO 2
   9 6 12    1.   OPEN     FLOW-TO 3
   9 7 12    1.   OPEN     FLOW-TO 4
   9 8 12    1.   OPEN     FLOW-TO 5
   9 9 12    1.   CLOSED   FLOW-TO 6
8.3.7.2 LOOKUP
The LOOKUP function looks in array_constants for the specified value and returns a value from
the same position in array_formulas. Use LOOKUP to define relationships between parameters.
Syntax
LOOKUP(lookup_value, array_constants, array_formulas)
lookup_value is a value that LOOKUP searches for in an array. Lookup_value must be a
parameter name.
array_constants is a group of constants that you want to compare with lookup_value.
array_formulas is a group of formulas, one of which will be the return value of the LOOKUP
function. The number of formulas should be equal to the number of constants in array_constants.
Remarks
array_constants format
Constants are enclosed in braces { } and separated with commas (,).
Array constants can contain numbers and text. Text is case-sensitive.
Numbers in array constants can be in integer, decimal, or scientific format.
Text must be enclosed in double quotation marks, for example, "CLOSED".
Array constants cannot contain formulas or parameter names.
array_formulas format
The format of array_formulas is the same as array_constants except that
array_formulas can contain constants, formulas, and variable names.
Important
The LOOKUP function only returns a value when it finds an exact match for
the lookup_value. If LOOKUP cannot find the lookup_value, the result is
undefined.
Example 2
Use the LOOKUP function to set the mole fraction of solution gas according to gas-oil ratio
in a CMOST master dataset.
NOTE: <cmost> and </cmost> must be on the same line.
**$ Property: Oil Mole Fraction(Soln_Gas)
MFRAC_OIL 'Soln_Gas' CON
<cmost>LOOKUP(ogor, {7, 5, 4, 3}, {0.10, 0.08, 0.06, 0.04})</cmost>
For a simulation job created by CMOST with ogor=4, the mole fraction of solution gas will
be set to 0.06:
**$ Property: Oil Mole Fraction(Soln_Gas)
MFRAC_OIL 'Soln_Gas' CON 0.06
8.3.7.3 ABS
Returns the absolute value of a number. The absolute value of a number is the number
without its sign.
Syntax
ABS(number)
number is the number of which you want the absolute value.
Example
ABS(-12.5) returns 12.5
8.3.7.4 COS
Returns the cosine of the given angle.
Syntax
COS(number)
number is the angle in radians for which you want the cosine.
Remark
If the angle is in degrees, multiply it by 3.14159/180 to convert it to radians.
Example
COS(1.047) returns 0.500171
8.3.7.5 EXP
Returns e raised to the power of number. The constant e equals 2.71828182845904, which is
the base of the natural logarithm.
Syntax
EXP(number)
number is the exponent applied to the base e.
Remarks
To calculate powers of other bases, use the POWER function.
Example
EXP(2) returns 7.389056
8.3.7.6 LN
Returns the natural logarithm of a number. Natural logarithms are based on the constant e
(2.71828182845904).
Syntax
LN(number)
number is the positive real number for which you want the natural logarithm.
Remark
LN is the inverse of the EXP function.
Example
LN(2.7182818) returns 1
8.3.7.7 LOG10
Returns the base-10 logarithm of a number.
Syntax
LOG10(number)
number is the positive real number for which you want the base-10 logarithm.
Example
LOG10(100) returns 2
8.3.7.8 MAX
Returns the largest value in a set of values.
Syntax
MAX(number1, number2, ...)
number1, number2, ... are numbers for which you want to find the maximum value.
Remarks
You can specify arguments that are numbers, variable names, and formulas.
The arguments must contain at least two values.
Example
MAX(var1, 2.54) returns 2.54 if var1=1.06
8.3.7.9 MIN
Returns the smallest number in a set of values.
Syntax
MIN(number1, number2, ...)
number1, number2, ... are values for which you want to find the minimum value.
Remarks
You can specify arguments that are numbers, variable names, and formulas.
The arguments must contain at least two values.
Example
For a SAGD well pair, you want to change the total liquid rate of the producer according to the
steam rate of the injector. You can use the MIN function in your master dataset to achieve this.
INJECTOR MOBWEIGHT EXPLICIT 'WEST-I'
  INCOMP WATER 1.0 0.0 0.0
  TINJW 253.2
  QUAL 0.9
  OPERATE MAX BHP 4200. CONT REPEAT
  OPERATE MAX STW <cmost>Steam</cmost> CONT REPEAT
PRODUCER 'WEST-P'
  OPERATE MIN BHP 2000 CONT REPEAT
  OPERATE MAX STL <cmost>MIN(Steam*1.6, 1000)</cmost> CONT REPEAT
For a job with Steam=500, operating constraints for producer 'WEST-P' will be:
PRODUCER 'WEST-P'
  OPERATE MIN BHP 2000 CONT REPEAT
  OPERATE MAX STL 800 CONT REPEAT
For a job with Steam=700, operating constraints for producer 'WEST-P' will be:
PRODUCER 'WEST-P'
  OPERATE MIN BHP 2000 CONT REPEAT
  OPERATE MAX STL 1000 CONT REPEAT
8.3.7.10 POWER
Returns the result of a number raised to a power.
Syntax
POWER(number, power)
number is the base number. It can be any number.
power is the exponent to which the base number is raised.
Example
POWER(5, 2) returns 25
8.3.7.11 SIN
Returns the sine of the given angle.
Syntax
SIN(number)
number is the angle in radians for which you want the sine.
Remark
If your argument is in degrees, multiply it by 3.14159/180 to convert it to radians.
Example
SIN(30*3.14159/180) returns 0.5
8.3.7.12 SQRT
Returns a positive square root.
Syntax
SQRT(number)
number is the number for which you want the square root.
Remark
The number must be non-negative.
Example
SQRT(2.0) returns 1.4142
8.3.7.13 TAN
Returns the tangent of the given angle.
Syntax
TAN(number)
number is the angle in radians for which you want the tangent.
Remark
If your argument is in degrees, multiply it by 3.14159/180 to convert it to radians.
Example
TAN(0.785) returns 0.99920
8.4 Using JScript Expressions in CMOST
Dynamic JScript code execution is a powerful feature that allows CMOST to be extended
with code that is not compiled into the application. With it, users can customize and extend
CMOST via custom JScript code. For example, users can write their own code for calculating
objective functions. CMOST can then read the code and execute it on the fly.
With dynamic code execution, CMOST allows users to have advanced control over the
CMOST workflow, including simulation dataset generation, simulation output file post-
processing, objective function calculations, and applying optimization constraints. In
CMOST, JScript code can appear in the Parameters, User-Defined Time Series, Objective
Functions, User-Defined Nominal Global Objective Function Candidates, and
Constraints pages to:
Define complex relationships among parameters
Define custom objective functions
Define complex constraints and penalty functions
JScript is the Microsoft dialect of the ECMAScript scripting language specification, with
JavaScript being another dialect. JScript is implemented as a Windows Script engine. It
allows embedding ActiveX objects to reuse the functionality of many Microsoft Windows
applications. For example, users can easily link CMOST to an Excel spreadsheet through
JScript code for calculating objective functions.
A complete reference on the JScript language is available on the Microsoft Developer Network
website: http://msdn.microsoft.com/en-us/library/x85xxsf4.aspx
The website below also provides reference information on some key JScript topics:
http://ns7.webmasters.com/caspdoc/html/jscript_language_reference.htm
The following examples are provided in the CMOST installation directory to help users
understand the use of JScript in CMOST:
RelPermTableHM.cmp: uses JScript code to create a relative permeability table
SAGD_2D_OP.cmp: uses JScript code to define custom objective functions
SAGD_2D_Tuning.cmp: uses JScript code to read simulation output files
The following sections provide more details about the use of JScript code in CMOST.
8.4.1 Transferring Data from CMOST to User JScript Code
Because the JScript code written by users is executed dynamically by CMOST, the data transfer
between CMOST and the users' JScript code occurs in the form of common variables which are
visible in both CMOST and the users' JScript code. The following table summarizes the common
variables that are visible to users' JScript code depending on the location of the JScript code.
JScript Code Location             Common Variables
Parameters                        All parameters except for the parameter which is
                                  defined by the current JScript code
                                  All input and output files of the simulation job
Advanced Objective                All parameters except for the parameter which is
Functions                         defined by the current JScript code
                                  All input and output files of the simulation job
User-Defined Nominal              All objective function terms defined for the current
Global Objective Function         objective function
Candidates                        All input and output files of the simulation job
                                  All parameters
Hard Constraints                  All parameters
                                  All input and output files of the simulation job
Soft Constraints                  All parameters
                                  All objective functions
                                  All input and output files of the simulation job
Soft Constraint Penalties         All parameters
                                  All objective functions
                                  All input and output files of the simulation job
To use the common variables in your JScript code, use the corresponding names directly. Do not
use the same common variable names to define new variables. For example, if InjectionPress is
the name of a parameter, you can use InjectionPress as a variable in your JScript code directly.
However, you should not try to define a new variable with its name equal to InjectionPress.
Also, you should not attempt to modify the value of any common variable declared by CMOST
because CMOST will replace the variable name by its actual value before the JScript is executed.
The available variables are predefined in the formula; refer to the Formula Editor for more
information.
8.4.2 Accessing Simulation Job Input and Output Files
To access the input and output files of a simulation job, use the following common variables
defined by CMOST:
Variable Name Value
ProjectDirectory Full path of the project directory
StudyDirectory Full path of the study directory
ExperimentName Name of the simulation job (which is the file name without extension)
ExperimentDatFilePath Full path of the dataset of the simulation job
ExperimentLogFilePath Full path of the log file of the simulation job
ExperimentOutFilePath Full path of the out file of the simulation job
ExperimentIrfFilePath Full path of the irf file of the simulation job
ExperimentMrfFilePath Full path of the mrf file of the simulation job
For example, the following JScript code opens the log file of a simulation job and reads the
material balance error for the job at the last time step.
var fso = new ActiveXObject("Scripting.FileSystemObject");
var ts = fso.OpenTextFile(ExperimentLogFilePath);
var line : String;
var matErrorStr : String;
var matError : double;
while (!ts.AtEndOfStream)
{
    line = ts.ReadLine();
    if (line.length >= 90)
    {
        matErrorStr = line.substring(84, 89);
        try
        {
            matError = double.Parse(matErrorStr);
        }
        catch(e)
        {
            // Do nothing
        }
    }
}
matError;
8.4.3 Transferring Data from JScript Code to CMOST
When users' JScript code is executed inside CMOST, the final result that is transferred
from the JScript code to CMOST is the evaluation result of the last line of the JScript
code. For example, the final value obtained by CMOST for the following JScript code is 10:
var a, b, c, d;
a=4;
b=3;
c=2;
d=1;
a+b+c+d;
8.4.4 Starting a New Line in the Dataset
The keyword data input system of simulators requires that certain data entry must start with a
new line. To start a new line in J Script code, string \r\n should be used. For example:
var kwLines : String;
kwLines  = "\r\n" + "** Group 1 Heaters ";
kwLines += "\r\n" + "HEATR IJK " + UBAI + " 1 " + UBAK + " " + QH;
kwLines += "\r\n" + "UHTR IJK " + UBAI + " 1 " + UBAK + " 16351";
kwLines += "\r\n" + "TMPSET IJK " + UBAI + " 1 " + UBAK + " 1129";
kwLines += "\r\n" + "AUTOHEATER ON " + UBAI + " 1 " + UBAK;
9 Configuring Launcher and CMOST
to Work Together
9.1 Introduction
CMOST relies on either Launcher or the CMG Job Service for running jobs. So before
CMOST can be used, you will need to configure Launcher and the CMG Job Service to work
properly. CMOST must also be configured to connect with Launcher or the CMG Job Service
in order for jobs to be queued and run properly.
9.2 Configuring Launcher
9.2.1 Launcher
Launcher is a Windows GUI application. It can be started at any time using the shortcut on the
user's desktop or by running the executable (CMG.exe) directly from its installation folder
(CMG_HOME\Launcher\xxxx.xx\Win32\EXE\).
9.2.2 CMG Job Service
CMG Job Service is a Windows Service application. The executable (CMG.JobService.exe)
is installed in the CMG_HOME\CMGJobService directory. The service (CMG Job Service) is
automatically created and registered with Windows when CMG software is installed. You
may access the CMG Job Service via Control Panel | Administrative Tools | Services.
NOTE: Administrator rights are required in order to register, start, or stop a Windows Service.
If for any reason CMG Job Service is not available in the list of Windows Services, it can be
registered by a user with Administrator rights using the following command:
sc create CMGJobService binPath= "<path to CMG.JobService.exe>"
Note that a space is required between the equal sign and the path to CMG.JobService.exe. If
the path to CMG.JobService.exe contains spaces, it must be enclosed in quotes.
Because CMG Job Service is a Windows Service application, starting or stopping the service
must be performed in Control Panel | Administrative Tools | Services. If the startup type is
set to Automatic, CMG Job Service will be started automatically when the computer is
rebooted. If the startup type is Manual, the user has to start the service manually by pressing
the Start button (Administrator rights are required for starting the service manually).

9.2.3 Use Launcher Embedded Mode for Submitting Jobs
If your organization uses smart card technology for user credentials, you may use Launcher in
Embedded Mode for submitting jobs. In Embedded Mode, the CMG Job Service is not used
and the service may be stopped. In Embedded Mode, Launcher must always be open and Use
job service to submit any jobs must be cleared in Launcher | Configuration | Configure
Local Job Server in the Configure Local Job Server dialog box.

9.2.4 Use CMG Job Service for Submitting Jobs
If you use a user name and password for Windows login, you may use the CMG Job Service
for submitting jobs. In this mode, CMG Job Service must be running. Refer to CMG Job
Service on how to start and stop CMG Job Service. If jobs are submitted by CMOST to CMG
Job Service, Launcher can be open or closed. However, if you want to see the list of jobs in
Launcher, Launcher must be open and Use job service to submit any jobs must be selected
in Launcher | Configuration | Configure Local Job Server.
9.2.5 Submitting Jobs to a Remote Computer
J obs can be submitted to a remote computer through a Remote Scheduler. Launcher supports
several types of remote schedulers:
CMG Scheduler
IBM Platform LSF
Microsoft HPC
Oracle Grid Engine
Portable Batch System (PBS/TORQUE)
Here we show the configurations required for submitting jobs to a remote computer using
CMG Scheduler. For other types of schedulers, refer to the Launcher Users Guide.
Install CMG Software on the remote computer. During the installation process,
select Use CMG Job Service to submit jobs when prompted by the installation
program.
On the remote computer, go to Control Panel | Administrative Tools | Services
and open CMG Job Service. Make sure the startup type is set to Automatic so that
the service is started automatically when the computer is rebooted. If the service
has not been started, press the Start button to start the service (Administrator
rights are required to start the service manually).
On the remote computer, open Launcher and go to Configuration | Configure
Local Job Server. Make sure Use job service to submit any jobs and This
computer is a CMG Cluster Compute Node are selected. Launcher can be
closed after the setting is complete.

On the user's local computer, open Launcher and go to Configuration | Configure
Remote Schedulers. Follow the wizard to add a remote CMG Scheduler by
providing the remote computer name.
On the user's local computer, add your password to Launcher through
Configuration | Password | Add/Modify (this step can be skipped if Launcher
already has your password).
Now the user can submit jobs to the remote computer from Launcher on the user's local
computer.
NOTE: For jobs submitted to a remote computer, the remote computer needs to access the
working directory of job input and output files using a UNC path. If the working directory is
a non-UNC path, Launcher will try to convert it to a UNC path. If the conversion is not
successful, you will not be able to submit the job to a remote computer.
10 Troubleshooting
10.1 Introduction
This section provides information to help the user understand and resolve problems that may
arise during use of the CMOST application.
10.2 Failed and Abnormal Termination Jobs
NOTE: Refer to Number of Perturbation Experiments for Each Abnormal Experiment for
information about creating perturbation experiments to help obtain normal termination jobs.
1. Failed/incomplete jobs are jobs that cannot run to completion as a result of
hardware or software issues. If a job is failed/incomplete, it should be able to run
to completion using the same dataset after the hardware or software issue is
resolved. To determine why a job has failed:
In Launcher, find the job and check whether there are any error messages
in the Message column. If the Message column is not shown, click
Add/Remove Job Columns to add the Message column.
Open the .log or .out file of the job in Notepad and scroll down to the end of
the file to check whether the simulator has written any termination messages
there.
Find the .dat file of the job and submit it to the scheduler directly in
Launcher and see whether the job can run to completion.
Some possible causes for failed/incomplete jobs are:
Missing or wrong password in Launcher. This can be verified by
submitting a job to the scheduler directly in Launcher. If this is the cause,
supply the correct password using
Launcher | Configuration | Password | Add/Modify.
Out of disk space. Check available disk space in the simulator working
directory. If the remote schedulers have been configured to write
simulation output files to a temporary folder on the execution computer
and copy all files back to the simulator working directory when the job is
done, it is possible that the location (usually the C: drive) for temporary
files has been filled. If this is the case, you may remove unwanted
temporary simulation output files from the REMOTE EXECUTION
COMPUTERS. In Windows Server 2008, the temporary files are in
C:\ProgramData\CMG\CopyLocalJobs. In Windows Server 2003, the
temporary files are in C:\Documents and Settings\All Users\Application
Data\cmg\CopyLocalJobs. In Windows 7, the temporary files are stored
in C:\ProgramData\cmg\CopyLocalJobs.
Intermittent file I/O issue. This is often very difficult to diagnose. To
reduce the chance of running into intermittent file I/O issues, it is
strongly recommended that the simulation output files (.irf, .mrf, .rst,
.out) are kept as small as possible by properly configuring the
Input/Output Control section of the .cmm file. For details, refer to the
WRST, OUTSRF, WSRF, OUTPRN, and WPRN keywords in the
simulator's manual. If you are running more than five concurrent remote
jobs, it is also recommended to use a Windows 2003 or 2008 File Server
to store CMOST input/output files.
The remote scheduler does not accept remote jobs. If a job is submitted
to a remote CMG Scheduler and the remote computer is configured to
run local jobs only, the job will fail. To verify this, open Launcher in the
remote computer and check Launcher | Configuration | Configure
Local Job Server. To allow remote jobs, Use job service to submit any
jobs and This computer is a CMG Cluster Compute Node must be
checked in the remote computer. In addition, CMG Job Service must be
running in the remote computer.
The required simulator executable doesn't exist in the remote computer.
Simulator licensing problem.
2. Abnormal termination jobs are jobs terminated by the simulator before reaching
the stop time. If a job terminates abnormally, it will usually stop at the same point
if the same dataset is re-run with the same simulator release. Usually, certain
modifications of the dataset, such as numerical tuning, are required to make the job
run to completion. To determine the cause of abnormal termination, examine the
.log and/or .out file of the job.
Some possible causes for abnormal termination jobs are:
Syntax error in the dataset.
Simulator runs into numerical problems such as failure to converge, too
many time step cuts, etc.
Killed by CMOST because maximum job run time is exceeded.
10.3 Exception Reports
When CMOST encounters a problem that it is not able to resolve, it will display a CMOST
Unhandled Exception dialog box containing information that can assist CMG development
and support staff in investigating the problem.
If you encounter an Unhandled Exception message:
1. Click Copy Exception.
2. Open an email message and paste (CTRL+V) the exception report into the body.
3. Add any other information you feel could assist CMG in resolving the problem.
4. In the subject line, enter CMOST Unhandled Exception.
5. Send the email to support@cmgl.ca.
6. In the exception dialog box, if you click Continue, CMOST will
ignore the message and attempt to continue. If you click Quit, CMOST will close
immediately.
11 Theoretical Background
11.1 Probability Distribution Functions
11.1.1 Uniform Distribution
Uniform distribution assumes that all values in the defined range are equally probable. Its
probability density function is:
$$f(x) = \begin{cases} 0 & x < a \\ \dfrac{1}{b-a} & a \le x \le b \\ 0 & x > b \end{cases}$$
where a and b are the lower and upper limit of the variable.
11.1.2 Triangle Distribution
The probability density function for the triangle distribution is:
$$f(x) = \begin{cases} 0 & x < a \\ \dfrac{2(x-a)}{(b-a)(c-a)} & a \le x \le c \\ \dfrac{2(b-x)}{(b-a)(b-c)} & c < x \le b \\ 0 & x > b \end{cases}$$
where a and b are the lower and upper limit of the variable, and c is the peak (mode).
11.1.3 Truncated Normal Distribution
The probability density function for the Gaussian normal distribution is:
$$f(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$$
where μ and σ are the mean and standard deviation of the variable.
In CMOST, the normal distribution is truncated by user-defined minimum and maximum
values:

$$\mathrm{Min} \le x \le \mathrm{Max}$$
The default Min and Max values are -1E+308 and 1E+308, respectively.
11.1.4 Truncated Log Normal Distribution
The probability density function for the log normal distribution is:
$$f(x) = \frac{1}{x\,\sigma\sqrt{2\pi}} \exp\!\left(-\frac{(\ln x-\mu)^2}{2\sigma^2}\right)$$
where μ and σ are the mean and standard deviation of the variable's natural logarithm. By
definition, in a log normal distribution, the variable's logarithm is normally distributed.
In contrast, the mean and standard deviation of the non-logarithm values are denoted m and s.
To calculate m and s from μ and σ:
$$m = e^{\mu + 0.5\sigma^2}, \qquad s = \sqrt{\left(e^{\sigma^2}-1\right)e^{2\mu+\sigma^2}}$$
To calculate μ and σ from m and s:
$$\mu = \ln\!\left(\frac{m^2}{\sqrt{s^2+m^2}}\right), \qquad \sigma = \sqrt{\ln\!\left(1+\frac{s^2}{m^2}\right)}$$
In CMOST, the log normal distribution is truncated by user-defined minimum and maximum
values:
$$\mathrm{Min} \le x \le \mathrm{Max}$$
The default Min and Max values are 1E-308 and 1E+308, respectively.
11.1.5 Deterministic Distributions
Unlike probability distributions, where the uncertainty of an input parameter is described by a
distribution, deterministic distributions treat the input parameter as a constant. A fixed value
is defined for the parameter, since there is no uncertainty about its value.
11.1.6 Custom Distribution
CMOST provides predefined discrete and continuous distributions for input parameters;
however, if none of these distributions is appropriate for the uncertainty of an input
parameter, users can create a custom distribution.
The custom distribution is given as a table of intervals and the corresponding probability
values. For example, the following table provides the points that define a custom distribution
which indicates the probability is 60% for values between 0.2 and 0.3:
Left Bound Right Bound Probability
0.0 0.1 0.05
0.1 0.2 0.15
0.2 0.3 0.60
0.3 0.4 0.15
0.4 0.5 0.05
NOTE: If the probability for all defined intervals does not sum to 1, CMOST will normalize
the probability values to ensure that the total cumulative probability is equal to 1.
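As a minimal sketch of this normalization, using the interval probabilities of the table above (illustrative JScript only, not CMOST's internal code):
// Illustrative normalization of custom-distribution interval probabilities.
var probs = [0.05, 0.15, 0.60, 0.15, 0.05];   // interval probabilities from the table above
var total = 0, i;
for (i = 0; i < probs.length; i++)
    total += probs[i];                        // total cumulative probability
for (i = 0; i < probs.length; i++)
    probs[i] /= total;                        // scale each probability so they sum to 1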
11.1.7 Discrete Probability Distribution
The discrete distribution is given as a table of x-values and the corresponding probability
values. For example, for the discrete distribution defined by the following table, only three
values (100, 200, and 300) will be used in Monte Carlo simulation. The probability of using
100, 200, and 300 is 25%, 50%, and 25% respectively. If the sum of all probability values is
not equal to 1, CMOST will normalize the probability values so that it does equal 1.
X Probability
100 0.25
200 0.50
300 0.25
11.2 Objective Functions
Two types of objective functions are described in this section:
History Match Error
Net Present Value
11.2.1 History Match Error
The History Match Error measures the relative difference between the simulation results and
measured production data for each objective function. If a field has multiple wells and each
well has multiple types of production data to match, CMG recommends you define an
objective function for each well. The objective function of each well contains multiple
objective function terms, each of which corresponds to a production data type. In practice, it
is also common for the quality and importance of measured data to be different for different
production data types. In a manual history matching task, these variations are usually taken
into account by the reservoir engineer intuitively and qualitatively. In computer-assisted
history matching, a quantitative approach should be used to account for the data quality and
importance. Therefore, different absolute measurement errors and weights need to be
assigned to different production data types of different wells in calculating objective
functions.
In CMOST, the following equation is used to calculate the history match error for well i:
$$Q_i = \frac{1}{\sum_{j=1}^{N(i)} tw_{i,j}} \sum_{j=1}^{N(i)} tw_{i,j}\,\frac{\sqrt{\dfrac{1}{NT(i,j)}\sum_{t=1}^{NT(i,j)}\left(Y^{s}_{i,j,t}-Y^{m}_{i,j,t}\right)^{2}}}{Scale_{i,j}} \times 100\%$$
where:
i, j, t       Subscripts representing well, production data type, and time respectively
N(i)          Total number of production data types for well i
NT(i,j)       Total number of measured data points
Y^s_{i,j,t}   Simulated results
Y^m_{i,j,t}   Measured results
tw_{i,j}      Term weight
Scale_{i,j}   Normalization scale
The normalization scale is calculated using one of the following four methods.
Method #1 applies when the number of measured data points is greater than 5 and the
normalization method is set to AUTO. In this method, the normalization scale is the
maximum of the following three quantities:
$$\Delta Y^{m}_{i,j} + 4\,Merr_{i,j}$$
$$0.5\,\min\!\left(\left|\max\left(Y^{m}_{i,j,t}\right)\right|,\ \left|\min\left(Y^{m}_{i,j,t}\right)\right|\right) + 4\,Merr_{i,j}$$
$$0.25\,\min\!\left(\left|\max\left(Y^{m}_{i,j,t}\right)\right|,\ \left|\min\left(Y^{m}_{i,j,t}\right)\right|\right) + 4\,Merr_{i,j}$$
where:
ΔY^m_{i,j}    Measured maximum change for well i and production data type j
Merr_{i,j}    Measurement error
The value of measurement error (ME) means that if the simulated result is between (historical
value - ME) and (historical value + ME), the match is considered to be satisfactory (or
perfect, because it is within the range of measurement accuracy). So ME is the absolute
error range.
Method #2 applies when the number of measured data points is small (≤ 5) and the
normalization method is set to AUTO, in which case the normalization scale is obtained by:
$$Scale_{i,j} = \max\!\left(\left|\max\left(Y^{m}_{i,j,t}\right)\right|,\ \left|\min\left(Y^{m}_{i,j,t}\right)\right|\right) + 4\,Merr_{i,j}$$
Method #3 applies when the normalization method is set to OFF, in which case:
$$Scale_{i,j} = 1$$
Method #4 applies when the normalization method is set to MeasurementErrorOnly, in
which case:
$$Scale_{i,j} = Merr_{i,j}$$
As can be seen from the above equations, the calculated history match error is a
dimensionless percentage relative error for methods #1, #2, and #4. If the simulation results
are exactly the same as measured data, the calculated history match error is 0% which
indicates a perfect match. Our experience indicates that if the history match error is less than
5%, the match is usually acceptable.
The global history match error is calculated using the weighted average method:
$$Q_{global} = \frac{1}{\sum_{i=1}^{NW} w_i} \sum_{i=1}^{NW} w_i\,Q_i$$
where:
Q_global    Global objective function
Q_i         Objective function for well i
NW          Total number of wells
w_i         Weight of Q_i in the calculation of Q_global
11.2.2 Net Present Value
In finance and economics, discounting is the process of finding the present value of an
amount of cash at some future date. The net present value of a cash flow is determined by
reducing its value by the appropriate discount rate for each unit of time between the time
when the cash flow is to be valued and the time of the cash flow. The time when the cash
flow is to be valued is called the NPV Present Date in CMOST.
To calculate the present value of a single cash flow, it is divided by one plus the interest rate
for each period of time that will pass:
$$PV = \frac{R_t}{(1+I)^t}$$
where:
t      Time of the cash flow
I      Discount rate (interest rate)
R_t    Net cash flow (positive for inflow, and negative for outflow) at time t
Net present value (NPV) is defined as the total present value (PV) of a time series of cash
flows. It is a standard method for using the time value of money to appraise long-term
projects. Each cash inflow/outflow is discounted back to its present value (PV). Then they are
summed. Therefore NPV is the sum of all cash inflows/outflows:

$$NPV = \sum_{t=1}^{T} \frac{R_t}{(1+I)^t}$$

The method for calculating cash flow depends on the property. If the selected property is a daily
property, such as Oil Rate SC-Daily, then cash flow is calculated daily. If the selected property is
a monthly property, such as Oil Rate SC-Monthly, cash flow is calculated monthly. If a property
does not specify the frequency, such as Oil Rate SC, then the cash flow is calculated daily.
Do not select cumulative properties unless the cash flow is to be calculated for one day only:
$$NPV = \sum_{j=1}^{NJ} \sum_{t=N1}^{N2} \frac{Quantity \times UnitValue \times ConversionFactor}{(1+DailyInterestRate)^{t}}$$

where:
t Time of the cash flow in days (the number of days elapsed from the NPV
Present Date to the date when the Property Value is read).
N1 Number of days from the NPV Present Date to the Start Date Time.
N2 Number of days from the NPV Present Date to the End Date Time.
Quantity Value read from the SR2 files using the user-specified origin name and
property name
Unit Value User-specified cash flow value per Quantity (positive for inflow, and
negative for outflow)
j Represents each objective function term
NJ Number of objective function terms for the Net Present Value objective
function.
The yearly interest rate is input by the user. Monthly, quarterly, and daily interest rates are converted
from the yearly rate; for example, the yearly interest (discount) rate is converted to the daily
interest rate using the following formula:
$$I_{daily} = e^{\frac{\ln\left(1+I_{yearly}\right)}{365}} - 1$$
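A small JScript sketch of this conversion and of discounting a single cash flow, using illustrative numbers only, is shown below:
// Illustrative daily discounting (hypothetical numbers, not taken from the guide).
var yearlyRate = 0.10;                                           // 10% yearly discount rate
var dailyRate  = Math.exp(Math.log(1 + yearlyRate) / 365) - 1;   // about 2.611e-4
var cashFlow   = 5000;                                           // net cash flow received on day t
var t          = 365;                                            // days after the NPV Present Date
var pv         = cashFlow / Math.pow(1 + dailyRate, t);          // about 5000 / 1.10 = 4545.45
pv;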
CMOST uses the CMOST unit system defined on the General Properties page to read SR2
files and a proper Unit Value for each objective function term must be entered according to
the chosen unit system. For example, if the CMOST unit system is Field, the unit for oil rate
will be bbl/day and the unit value should be dollar per barrel. If the unit system is SI, the unit
for oil rate will be m³/day and the unit value should be dollar per m³.
11.3 Sampling Methods
Parameter space sampling is the most important step in sensitivity analysis and uncertainty
assessment. The outcome of parameter space sampling is a design for laying out a detailed
simulation plan in advance of performing simulations. A well-chosen design maximizes the
amount of information that can be obtained for a given amount of simulation effort. Below
is an introduction to some basic terminology used in CMOST.
Parameters (variables, factors): Simulation inputs which a researcher manipulates
to cause changes in simulation outputs. A parameter can have two or more sample
values.
Sample values (levels): The different values of a parameter.
Objective functions (responses): The outputs of a simulation.
Experiment: An experiment represents the combination of one particular sample
value for each parameter in the simulation model.
Parameter space (search space): The number of all possible experiments for a
given set of parameters and sample values.
Sampling: Process of selecting a set of experiments from all possible experiments.
Design: A set of experiments generated by the sampling process. A good design
with desirable characteristics allows you to fit an accurate proxy model and draw
reliable conclusions regarding parameter effects.
Effect: How changing the value of a parameter changes the objective function. The
effect of a single parameter, as opposed to the effect of an interaction, is also called
a main effect.
Interaction: Occurs when the effect of one parameter on an objective function
depends on the level of another parameter.
For a given set of parameters and sample values, the parameter space is usually extremely large.
For example, the number of all possible experiments for 15 parameters with three sample
values for each parameter is 3¹⁵, or 14,348,907. If we want to select a set of 600 experiments
from the total of 14,348,907 experiments, there is an exceptionally large number of ways to do
the selection. According to statistical experimental design theory, to efficiently explore the
parameter space, the design (the set of experiments) selected should possess two desirable
characteristics:
Approximate orthogonality of the input parameters.
Space-filling, that is, the sampling points (experiments) should be evenly
distributed in the parameter space. In other words, the collection of experiments
should be a representative subset of all possible experiments. This is indicated by
the minimum sampling distance, the larger the better.
The orthogonality of two columns in a design matrix is measured by the correlation between
two column vectors v = (v_1, v_2, ..., v_n) and w = (w_1, w_2, ..., w_n):
$$\rho = \frac{\left|\sum_{i=1}^{n}(v_i-\bar{v})(w_i-\bar{w})\right|}{\sqrt{\sum_{i=1}^{n}(v_i-\bar{v})^{2}\,\sum_{i=1}^{n}(w_i-\bar{w})^{2}}}$$
If two columns have zero correlation, they are orthogonal. If all of the columns in the design
matrix are orthogonal, the design is an orthogonal design. An orthogonal design is desirable
since it ensures independence among the coefficient estimates in a regression model.
In CMOST, the orthogonality of a design is measured by the maximum pair-wise correlation
of the columns of a design matrix. The maximum pair-wise correlation is found by
calculating the absolute value of the correlation coefficient for all pairs of column vectors in
the design matrix, and then selecting the maximum of these values. A value of 0 is best
(indicating orthogonality), and a value of 1 is worst (indicating that at least one column in the
design matrix is a linear combination of the remaining columns). Generally, to ensure the
accuracy of sensitivity analysis and uncertainty assessment results, the maximum pair-wise
correlation of the design should be less than 0.2.
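As an illustration of how such a check can be performed, the following JScript sketch computes the maximum pair-wise correlation of a small hypothetical design matrix (it is not CMOST's implementation):
// Illustrative sketch: maximum pair-wise correlation of a design matrix (rows = experiments).
var design = [[1, 3, 2], [2, 1, 3], [3, 2, 1], [1, 2, 3]];   // hypothetical design matrix

function columnCorrelation(m, a, b)        // absolute correlation of columns a and b
{
    var n = m.length, i, va = 0, vb = 0, cov = 0, sa = 0, sb = 0;
    for (i = 0; i < n; i++) { va += m[i][a] / n; vb += m[i][b] / n; }   // column means
    for (i = 0; i < n; i++)
    {
        cov += (m[i][a] - va) * (m[i][b] - vb);
        sa  += (m[i][a] - va) * (m[i][a] - va);
        sb  += (m[i][b] - vb) * (m[i][b] - vb);
    }
    return Math.abs(cov) / Math.sqrt(sa * sb);
}

var maxCorr = 0, j, k;
for (j = 0; j < design[0].length; j++)      // every pair of columns
    for (k = j + 1; k < design[0].length; k++)
        maxCorr = Math.max(maxCorr, columnCorrelation(design, j, k));
maxCorr;                                    // should be well below 0.2 for a usable design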
Another desirable feature for a design is its ability to evenly spread points in the parameter
space. For many interpolation methods used to generate proxies for the outputs of
simulations, such as kriging, the errors get larger as the interpolated point moves away from
an observation point (in many cases the errors are zero at the interpolation points). Having the
observation points evenly distributed would then guarantee a uniform accuracy for the
approximation throughout the parameter space. Designs with these characteristics are called
space-filling designs. Another benefit of space-filling designs is the avoidance of
undesirable artificial correlations between the parameters.
In CMOST, the space-filling of a design is assessed by the Euclidean minimum distance
which is the minimum Euclidean distance of all design points (experiments). A large value of
Euclidean minimum distance means that no two points are close to each other. Between two
designs, the one with the greater minimum distance between any two points (experiments) is
considered to be the better design.
In CMOST, the following sampling methods are available:
One parameter at a time (OPAAT)
Two-level classical experimental designs: Fractional factorial and Plackett-Burman
designs
Three-level classical experimental designs: Box-Behnken and Central Composite
designs
Latin hypercube design
11.3.1 One-Parameter-at-a-Time Sampling
One-parameter-at-a-time sampling is a traditional method for sensitivity analysis. In this
method, the researcher seeks to gain information about the effect of a parameter by varying
only one parameter at a time. This procedure is repeated in turn for all parameters to be studied.
For example, let us assume we want to perform a sensitivity analysis for the following five
parameters.

The candidate values for these parameters are:
Parameter Candidate Values
POR 0.22, 0.29, 0.36
PERMH 3000, 4500, 6000
PERMV 2000, 2400, 2800
HTSORW 0.16, 0.21, 0.26
HTSORG 0.02, 0.04, 0.06
Using One-Parameter-at-a-Time sampling, CMOST will generate the following 11 experiments:
[Table of the 11 generated experiments, showing the parameter values for Experiment IDs 0 through 10]
In the above example, the parameter default values are shown in Experiment ID 0 (base case).
It can be seen from the table that for experiments 1, 2, and 3, all parameters except for POR use
their middle candidate value. Therefore, we can determine the conditional main effect of POR by
comparing the simulation results for experiments 1, 2, and 3. Similarly, we can determine the
effect of PERMH by comparing the simulation results for experiments 3, 4, and 5.
The use of one-parameter-at-a-time sampling is generally discouraged by researchers, for the
following reasons:
More runs are required for the same precision in effect estimation
Interactions between parameters cannot be captured
Conclusions from the analysis are not general (i.e., only conditional main effects are
revealed)
11.3.2 Latin Hypercube Design
11.3.2.1 Evolution of Latin Hypercube
To explain the fundamentals of Latin hypercube design, this section traces the line of
literature from random designs to Latin hypercube sampling to Latin hypercube to orthogonal
Latin hypercube (Cioppa, 2002).
Random design was proposed by Satterthwaite (1959). In a random design, a random
sampling process with replacement is used to choose all or some of the elements of each
variable in the design matrix. The principal criticisms of random designs are that the
interpretation of the results cannot be justified due to random confounding and the estimators
of the coefficients could be biased.
To improve random design, McKay et al. (1979) proposed Latin hypercube sampling. In Latin
hypercube sampling, the input variables are considered to be random variables with known
distribution functions. For each input variable, all portions of its distribution are represented by
input values, obtained by dividing its range into n strata of equal probability and sampling once
from each stratum. For each input variable, the n sampled input values are assigned at random to n cases.
As an example, let us assume there are four input variables, each having a uniform [0, 1]
distribution, and 10 simulation runs are to be made. For all four variables, one design value is
independently chosen at random from within each of the 10 equally probable intervals [0.0,
0.1), [0.1, 0.2), [0.2, 0.3), [0.3, 0.4), [0.4, 0.5), [0.5, 0.6), [0.6, 0.7), [0.7, 0.8), [0.8, 0.9),
the design matrix is randomly determined. The following table shows a design matrix
obtained by this procedure. It is noted that, as shown in this example, design matrices
generated in this way will likely have correlations between columns.
Run X1 X2 X3 X4
1 0.32 0.17 0.91 0.71
2 0.53 0.58 0.30 0.93
3 0.92 0.84 0.48 0.12
4 0.17 0.90 0.05 0.22
5 0.29 0.02 0.16 0.30
6 0.45 0.41 0.83 0.87
7 0.63 0.68 0.74 0.04
8 0.75 0.24 0.66 0.61
9 0.87 0.79 0.52 0.48
10 0.01 0.36 0.22 0.53
A common variant of the design generated by Latin hypercube sampling is called a Latin
hypercube (Tang, 1993). In a Latin hypercube, the input values for every variable are
predetermined and there is no sampling within strata. An n × k Latin hypercube consists of k
permutations of the vector {1, 2, ..., n}^T. Each element of the vector represents a sample value
(level). Each of the k columns of the design matrix contains the levels 1, 2, ..., n, randomly
permuted, with each possible permutation being equally likely to appear in the design matrix.
To enhance the capability of Latin hypercube designs for regression analysis, Ye (1998)
constructed orthogonal Latin hypercubes. An orthogonal Latin hypercube is defined as a Latin
hypercube for which every pair of columns has zero correlation. Furthermore, in Ye's
orthogonal Latin hypercube construction, the element-wise square of each column has zero
correlation with all other columns, and the element-wise product of every two columns has
zero correlation with all other columns. These properties ensure the independence of the
estimates of linear effects of each variable, and the estimates of the quadratic effects and
interaction effects are uncorrelated with the estimates of the linear effects.
11.3.2.2 Latin Hypercube Design in CMOST
The Latin hypercube designs generated by CMOST use a more general variant of the above.
Specifically, each of the parameters can have any number of sample values. The sample values
can be evenly distributed (uniform distribution) or not-evenly distributed as they are entered by
the user. To combine the sample values to create design points (job patterns) in the design,
draws without replacement are done; i.e., for the first point a value for each parameter is
selected randomly from the set of possible values, for the second point the random selection is
done excluding the points already selected and so on. As an example, the following algorithm
describes the steps to generate a basic Latin hypercube design for five parameters.

The sample values for these parameters are:
Parameter Sample Values
POR 0.22, 0.24, 0.26 (3 values)
PERMH 3000, 3500, 4000, 4500, 5000 (5 values)
PERMV 2000, 2500 (2 values)
HTSORG 0.16, 0.18, 0.20, 0.22 (4 values)
HTSORW 0.02, 0.04, 0.06 (3 values)
1. The number of points (job patterns) should be a common multiple of all the numbers
of sample values. So the available numbers of job patterns are 60, 120, 180, and 240.
Let's assume we want to generate a design with 120 points (job patterns).
2. For each parameter, generate a vector with 120 sample values. For example, for
parameter POR, the vector should have 40 values each of 0.22, 0.24, and 0.26.
The 120 sample values for each parameter are ordered randomly.
3. Assemble all the vectors of all parameters to form a basic Latin hypercube design
with 120 design points (job patterns); a JScript sketch of this construction follows.
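A minimal JScript sketch of steps 1 to 3, using the POR and PERMH sample values above (the helper names shuffle and buildColumn are hypothetical, not CMOST functions), could look like this:
// Illustrative sketch of the basic Latin hypercube construction described above.
function shuffle(a)                          // Fisher-Yates shuffle: random order of a column
{
    for (var i = a.length - 1; i > 0; i--)
    {
        var j = Math.floor(Math.random() * (i + 1));
        var tmp = a[i]; a[i] = a[j]; a[j] = tmp;
    }
    return a;
}

function buildColumn(sampleValues, nPoints)  // step 2: repeat each sample value equally, then shuffle
{
    var col = [], i;
    for (i = 0; i < nPoints; i++)
        col.push(sampleValues[i % sampleValues.length]);
    return shuffle(col);
}

// Step 3: one shuffled column per parameter forms the 120-point design.
var porColumn   = buildColumn([0.22, 0.24, 0.26], 120);             // 40 of each value
var permhColumn = buildColumn([3000, 3500, 4000, 4500, 5000], 120); // 24 of each value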
The basic Latin hypercube design generated in this way does not guarantee that the points
will be evenly distributed and uncorrelated; two valid Latin hypercube designs can, for
example, have points that are totally correlated. Such designs are not suitable for proxy
generation and sensitivity analysis.
To avoid such undesirable artifacts, an iteration (optimization) process is adopted in CMOST
to generate Latin hypercube designs with two desirable characteristics.
Approximate orthogonality of the input parameters.
Space-filling, that is, the sampling points (experiments) should be evenly
distributed in the parameter space.
The iteration (optimization) process is described as follows:
1. Start with an initial basic Latin hypercube design (this is the initial best design).
2. Generate a new basic Latin hypercube design.
3. Calculate the maximum pair-wise correlation of the new design.
4. Calculate Euclidean minimum distance of the new design.
5. Compare the new design with the best design. If the new design outperforms the
best design in terms of maximum pair-wise correlation and Euclidean minimum
distance, replace the best design with the new design.
6. Repeat steps 2 to 5 until the number of iterations is reached or an orthogonal design
is found.
Note that the above iteration (optimization) process does not aim at finding the optimum
Latin hypercube design, but only an improvement over the initial Latin hypercube design,
while keeping the time needed to generate the designs within a reasonable range.
11.3.2.3 References
Cioppa, T.M., Efficient Nearly Orthogonal and Space-Filling Experimental Designs for High-
Dimensional Complex Models, Naval Postgraduate School PhD Dissertation, September 2002.
McKay, M.D., Beckman, R.J., and Conover, W.J., A comparison of three methods for
selecting values of input variables in the analysis of output from a computer code,
Technometrics, Vol. 21, No. 2, May 1979.
Satterthwaite, F.E., Random balance experimentation, Technometrics, Vol. 1, No. 2, May 1959.
Tang, B., Orthogonal array-based Latin hypercubes, Journal of the American Statistical
Association: Theory and Methods, Vol. 88, No. 424, December 1993.
Ye, K.Q., Orthogonal column Latin hypercubes and their application in computer
experiments, Journal of the American Statistical Association: Theory and Methods, Vol. 93,
No. 444, December 1998.
11.3.3 Classical Experimental Design
11.3.3.1 Two-Level Classical Experimental Designs
Two-level designs are typically used in sensitivity analysis to identify main (linear) effects.
They are ideal for a quick screening study. They are simple and economical. They also give
most of the information required to go to a next-step multilevel response surface experimental
design if one is needed.
The standard layout for a two-level design uses "-" and "+" notation to denote the low level and
the high level, respectively, for each parameter. For example, the matrix below describes an
experiment in which 4 runs were conducted with each parameter set to high or low during a
run according to whether the matrix had a "+" or "-" set for the parameter during that run. If the
experiment had more than 2 parameters, there would be an additional column in the matrix
for each additional parameter.
Run Parameter (X1) Parameter (X2)
1 - -
2 + -
3 - +
4 + +
The following types of two-level experimental designs are available in CMOST:
Fractional factorial designs
Plackett-Burman designs
11.3.3.2 Three-Level Classical Experimental Designs
In uncertainty assessment, three-level experimental designs can be used. In a three-level design,
each parameter effect on the response is evaluated at three levels (low, median, and high).
Three-level designs are also called response surface designs because in addition to main effects
(linear term), both two-term interactions and quadratic terms can be examined. The standard
layout for a three-level design uses "-", "0", and "+" notation to denote the low level, median
level, and high level, respectively, for each parameter. CMOST provides the following
types of response surface designs:
Box-Behnken designs
Central Composite designs (Uniform Precision)
11.3.4 Parameter Correlation
The Parameter Correlation table is used to incorporate relationships that may exist between
green uncertain parameters when performing uncertainty assessment studies.
Normally, it is assumed that uncertain parameters are independent, especially when Monte
Carlo simulation is used to drive the uncertainty assessment; however, the assumption of
independent parameters may not be reasonable for all simulation studies.
The technique used by CMOST to account for parameter correlation was introduced by Iman
and Conover (1982) (refer to the reference below for more information).
Correlation between desired parameters is used to measure the degree to which parameters
are related. The most common measure of linear dependence is the Pearson Product Moment
Correlation, or the Pearson Correlation for short. However, since the Pearson Correlation
cannot capture non-linear trends and will be misleading in cases where there are outliers,
CMOST uses Spearman Rank Correlation to measure parameter correlation.
11.3.4.1 Positive Definite Correlation Matrix
CMOST calculates algorithmically (Iman and Conover, 1982) the realized Spearman's rank
correlation matrix if the parameter correlation (desired Spearman's rank correlation matrix) is
positive definite, as defined below:
An n × n real symmetric matrix P is positive definite if x^T P x > 0 for all non-zero vectors x
with real entries, where x^T denotes the transpose of x.
NOTE: If the desired Spearman's rank correlation matrix is not positive definite, CMOST
will try to find the nearest matrix which meets this condition.
11.3.4.2 Spearman's Rank Coefficient
Spearman's rank correlation coefficient is a measure of the statistical dependence of two
variables in a monotonic function. Instead of using the actual values directly, the Spearman's
rank correlation coefficient ranks the values and then correlates the ranks. It produces a value
between -1 and +1. If the correlation is +1, there is a perfect positive relation
between the ranks of both variables, and if the correlation is -1, there is a perfect
negative relation between the ranks of both variables.
To calculate the Spearman's rank correlation coefficient, the data should first be ranked from
minimum to maximum, or vice versa, and then entered into the following equation:

$$\rho = \frac{\sum_{i=1}^{n}(x_i-\bar{x})(y_i-\bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i-\bar{x})^{2}\,\sum_{i=1}^{n}(y_i-\bar{y})^{2}}}$$
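For illustration only, the following JScript sketch ranks two hypothetical samples and applies the equation above to the ranks (it assumes no tied values and is not CMOST's implementation):
// Illustrative sketch of Spearman's rank correlation (no tied values assumed).
function ranks(values)                      // rank of each value, 1 = smallest
{
    var r = [], i, j, count;
    for (i = 0; i < values.length; i++)
    {
        count = 1;
        for (j = 0; j < values.length; j++)
            if (values[j] < values[i]) count++;
        r.push(count);
    }
    return r;
}

function correlation(x, y)                  // correlation formula applied to the rank arrays
{
    var n = x.length, i, mx = 0, my = 0, cov = 0, sx = 0, sy = 0;
    for (i = 0; i < n; i++) { mx += x[i] / n; my += y[i] / n; }
    for (i = 0; i < n; i++)
    {
        cov += (x[i] - mx) * (y[i] - my);
        sx  += (x[i] - mx) * (x[i] - mx);
        sy  += (y[i] - my) * (y[i] - my);
    }
    return cov / Math.sqrt(sx * sy);
}

var rho = correlation(ranks([10, 30, 20, 50]), ranks([1.2, 2.5, 1.9, 4.0]));   // returns 1 (same ordering)
rho;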
11.3.4.3 References
Iman, R. and Conover, W., A Distribution-Free Approach to Inducing Rank Correlation
Among Input Variables, Communications in Statistics - Simulation and Computation, 1982.
11.4 Proxy Modeling
11.4.1 Response Surface Methodology
Response surface methodology (RSM) explores the relationships between input variables
(parameters) and responses (objective functions). The main idea of RSM is to use a set of
designed experiments to build a proxy (approximation) model to represent the original
complicated reservoir simulation model. The most common proxy models take either a linear
form or quadratic form of a polynomial function. After a proxy model is built, tornado plots
displaying a sequence of parameter estimates can be used to assess the sensitivity of parameters.
11.4.2 Types of Response Surface Models
11.4.2.1 Linear Model
The linear proxy model is:

    y = a_0 + a_1 x_1 + a_2 x_2 + \cdots + a_k x_k

where:
y is the response (objective function)
a_1, a_2, ..., a_k are the coefficients of the proxy model (in some statistics references,
referred to as the parameter estimates or unknown parameters)
x_1, x_2, ..., x_k are the input variables (parameters)
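For illustration only (a sketch of the least-squares fit, not the CMOST implementation), a linear proxy of this form can be fitted once the training experiments have been collected:

    import numpy as np

    # Training data: each row of X is one experiment (k parameters), y is the response.
    rng = np.random.default_rng(0)
    X = rng.uniform(-1.0, 1.0, size=(20, 3))            # 20 experiments, 3 parameters
    y = 5.0 + 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0, 0.05, 20)

    # Design matrix with an intercept column; least squares gives a_0, a_1, ..., a_k.
    A = np.column_stack([np.ones(len(X)), X])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    print(coeffs)        # approximately [5.0, 2.0, -1.5, 0.3]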
11.4.2.2 Simple Quadratic Model
The simple second-degree (quadratic) polynomial model is:

    y = a_0 + \sum_{j=1}^{k} a_j x_j + \sum_{j=1}^{k} a_{jj} x_j^2

where:
a_0 is the intercept
a_1, a_2, ..., a_k are the coefficients of the linear terms
a_{jj} are the coefficients of the quadratic terms
11.4.2.3 Quadratic Model
The second-degree (quadratic) polynomial model is:

    y = a_0 + \sum_{j=1}^{k} a_j x_j + \sum_{j=1}^{k} a_{jj} x_j^2 + \sum_{i<j}^{k} a_{ij} x_i x_j

where:
a_0 is the intercept
a_1, a_2, ..., a_k are the coefficients of the linear terms
a_{jj} are the coefficients of the quadratic terms
a_{ij} are the coefficients of the cross (interaction) terms
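As a sketch (not CMOST's internal code), the design matrix for the full quadratic model can be assembled by appending squared and cross-product columns before the same least-squares fit used for the linear proxy:

    import numpy as np

    def quadratic_design_matrix(X):
        """Columns: intercept, linear x_j, quadratic x_j^2, cross x_i*x_j (i < j)."""
        n, k = X.shape
        cols = [np.ones(n)]
        cols += [X[:, j] for j in range(k)]                      # linear terms
        cols += [X[:, j] ** 2 for j in range(k)]                 # quadratic terms
        cols += [X[:, i] * X[:, j]
                 for i in range(k) for j in range(i + 1, k)]     # cross terms
        return np.column_stack(cols)

    X = np.array([[0.5, -1.0, 0.2],
                  [1.0,  0.3, -0.7]])
    print(quadratic_design_matrix(X).shape)   # (2, 10): 1 + 3 + 3 + 3 columns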
11.4.2.4 Reduced Linear Model
For a polynomial proxy model, the statistical significance of each term is characterized by its
corresponding Prob > |t| value. If a term has a large Prob > |t| value, the term is not
statistically significant and can be removed from the proxy model to simplify and improve the
model (i.e., to maximize the adjusted and predicted R² values; for further information, refer to
the Summary of Fit Table). The significance probability (alpha) determines whether a response
surface term should be included in the reduced response surface model. If the Prob > |t| of a
term is less than or equal to alpha, the term will be included. The default alpha value is 0.1.
In CMOST, the reduced linear model is built using the following three-step process:
1. Build the linear model.
2. Remove statistically insignificant terms.
3. Build the reduced linear model using the remaining (statistically significant) terms.
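A hedged sketch of this three-step pruning process is shown below. It uses the statsmodels OLS p-values as the Prob > |t| values and a significance level alpha of 0.1; it is illustrative only and is not the CMOST engine.

    import numpy as np
    import statsmodels.api as sm

    def reduced_linear_model(X, y, alpha=0.1):
        """Fit a linear model, drop terms with Prob > |t| above alpha, then refit."""
        A = sm.add_constant(X)                     # intercept plus linear terms
        full = sm.OLS(y, A).fit()                  # step 1: full linear model
        keep = full.pvalues <= alpha               # step 2: keep significant terms
        keep[0] = True                             # always keep the intercept
        reduced = sm.OLS(y, A[:, keep]).fit()      # step 3: refit with remaining terms
        return reduced, keep

    rng = np.random.default_rng(1)
    X = rng.uniform(-1, 1, size=(40, 4))
    y = 1.0 + 3.0 * X[:, 0] - 2.0 * X[:, 2] + rng.normal(0, 0.1, 40)
    model, kept = reduced_linear_model(X, y)
    print(kept)              # only the intercept and the influential parameters survive
    print(model.params)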
11.4.2.5 Reduced Quadratic Model
Similar to the Reduced Linear Model, the reduced quadratic model is built using the following
three-step process:
1. Build the quadratic model.
2. Remove statistically insignificant terms.
3. Build the reduced quadratic model using the remaining (statistically significant)
terms.
11.4.3 Normalized Parameters (Variables)
The coefficients of a proxy model are highly dependent on the scale of the input variables.
For example, if an input variable is converted from millimeter to meter, the coefficient
changes by a factor of a thousand. If the same change is applied to a squared (quadratic) term,
the coefficient changes by a factor of a million. Since we are interested in the effect size
indicated by the coefficients, we need to examine the coefficients in a more scale-invariant
fashion. This means converting from an arbitrary scale to a meaningful one so that the
magnitudes of the coefficients can be related to the size of the effects on the response. In
CMOST, all input variables (parameters) are scaled to have a mean of zero and a range
from -1 to 1. This corresponds to the scaling used in traditional experiment design. For a
linear term, the coefficient is half the predicted response change as the input variable travels
over its full range, from -1 to 1.
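A minimal sketch of this scaling, assuming a simple linear map from each parameter's (Low, High) range onto [-1, +1]:

    def normalize(value, low, high):
        """Map a parameter value from [low, high] onto [-1, +1]."""
        return 2.0 * (value - low) / (high - low) - 1.0

    print(normalize(1000, 1000, 2000))   # -1.0 at the low end
    print(normalize(1500, 1000, 2000))   #  0.0 at the midpoint
    print(normalize(2000, 1000, 2000))   # +1.0 at the high end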
11.4.4 Response Surface Model Verification Plot
The model verification plot shows how the data points fit the model by plotting the actual
response versus the predicted response for each training and verification job. The distance
from each point to the 45 degree line is the error, or residual, for that point. The points that
fall on the 45 degree line are those that are perfectly predicted.
To visually show whether the model is statistically significant, the lower and upper 95%
confidence curves are superimposed on the actual (simulated) vs. proxy predicted plot. The
lower and upper 95% confidence curves are determined using equations given in the paper
Leverage Plots for General Linear Hypotheses by John Sall, published in The American
Statistician, November 1990, Vol. 44, No. 4. If the 95% confidence curves cross the
horizontal reference line defined by the Mean of Response, then the model is significant. If
the curves do not cross, then the model is not significant (at the 5% level).
11.4.5 Summary of Fit Table
The Summary of Fit table shows the following numeric summaries of the response surface
model:
11.4.5.1 R-Squared (R²)
The coefficient of multiple determination R² is defined as:

    R^2 = \frac{\text{Sum of Squares (Model)}}{\text{Sum of Squares (Total)}}

R² is a measure of the amount of reduction in the variability of the response obtained by using
the regressor variables in the model. An R² of 1 occurs when there is a perfect fit (the errors
are all zero). An R² of 0 means that the model predicts the response no better than the overall
response mean. It should be noted that a large value of R² does not necessarily imply that the
regression model is a good one. Adding a variable to the model will always increase R²,
regardless of whether the additional variable is statistically significant or not. Thus, it is
possible for models that have large values of R² to yield poor prediction of new observations.
11.4.5.2 R-Square Adjusted
R² can be adjusted to make it comparable over models with different numbers of regressors
by using the degrees of freedom in its computation. The adjusted R² is defined as:

    R^2_{adjusted} = 1 - \frac{(n - 1)}{(n - p)} (1 - R^2)

Here n is the number of observations (training experiments) and p is the number of terms in
the response model (including the intercept).
In general, the adjusted R² will not always increase as variables are added to the model. In
fact, if unnecessary terms are added, the value of the adjusted R² will often decrease. When
R² and the adjusted R² differ dramatically, there is a good chance that non-significant terms
have been included in the model.
11.4.5.3 R-Square Prediction
The predicted R-square, R²_prediction, is defined as:

    R^2_{prediction} = 1 - \frac{PRESS}{\text{Sum of Squares (Total)}}

where PRESS is the prediction error sum of squares. To calculate PRESS, select an
observation i. Fit the regression model to the remaining n - 1 observations and use this
equation to predict the withheld observation y_i. Denoting this predicted value by \hat{y}_{(i)},
we can find the prediction error for point i as e_{(i)} = y_i - \hat{y}_{(i)}. The prediction error
is often called the i-th PRESS residual. This procedure is repeated for each observation
i = 1, 2, ..., n, producing a set of n PRESS residuals e_{(1)}, e_{(2)}, ..., e_{(n)}. The PRESS
statistic is then defined as the sum of squares of the n PRESS residuals:

    PRESS = \sum_{i=1}^{n} e_{(i)}^2 = \sum_{i=1}^{n} \left[ y_i - \hat{y}_{(i)} \right]^2

R²_prediction provides an indication of the predictive capability of the regression model. For
example, we could expect a model with an R²_prediction of 0.95 to explain about 95% of the
variability in predicting new observations.
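For illustration (a sketch using the standard least-squares definitions above, not CMOST output), the three R-square statistics can be computed as follows; PRESS is evaluated here with an explicit leave-one-out loop.

    import numpy as np

    def fit(A, y):
        """Ordinary least squares; returns the coefficient vector."""
        coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
        return coeffs

    def r_square_statistics(A, y):
        n, p = A.shape                       # n observations, p terms (incl. intercept)
        beta = fit(A, y)
        resid = y - A @ beta
        ss_total = np.sum((y - y.mean()) ** 2)
        ss_error = np.sum(resid ** 2)
        r2 = 1.0 - ss_error / ss_total
        r2_adj = 1.0 - (n - 1) / (n - p) * (1.0 - r2)
        # PRESS: refit with each observation withheld, then predict that observation.
        press = 0.0
        for i in range(n):
            mask = np.arange(n) != i
            beta_i = fit(A[mask], y[mask])
            press += (y[i] - A[i] @ beta_i) ** 2
        r2_pred = 1.0 - press / ss_total
        return r2, r2_adj, r2_pred

    rng = np.random.default_rng(2)
    X = rng.uniform(-1, 1, size=(30, 2))
    y = 4.0 + 2.0 * X[:, 0] - 3.0 * X[:, 1] + rng.normal(0, 0.2, 30)
    A = np.column_stack([np.ones(len(X)), X])
    print(r_square_statistics(A, y))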
11.4.5.4 Mean of Response
Mean of Response is the overall mean of the response values. It is important as a base model
for prediction because all other models are compared to it.
11.4.5.5 Standard Error
Standard Error estimates the standard deviation of the random error. It is the square root of
the Error Mean Square in the corresponding Analysis of Variance table. Standard Error
is commonly denoted as s.
11.4.6 Analysis of Variance Table
11.4.6.1 Source
Source lists the three sources of variation: Model, Error, and Total.
11.4.6.2 Degrees of Freedom (DF)
Total DF is used for the simple mean model. Only one degree of freedom is used (the
estimate of the mean parameter) in the calculation of variation, so the degrees of freedom for
Total is always one less than the number of observations.
Model DF is the number of terms (except for the intercept) used to fit the model.
Error DF is the difference between Total DF and Model DF.
11.4.6.3 Sum of Squares
The Sum of Squares (SS) column accounts for the variability measured in the response. It is
the sum of squares of the differences between the fitted response and the actual response.
Total SS is the sum of squared distances of each response from the sample mean which is the
base model (or simple mean model) used for comparison with all other models.
Error SS is the sum of squared differences between the fitted values and the actual values.
This sum of squares corresponds to the unexplained Error (residual) after fitting the
regression model.
Total SS less Error SS gives the sum of squares attributed to the Model.
One common set of notations for these is SSR, SSE, and SST for sum of squares due to
Regression (model), Error, and Total, respectively.
11.4.6.4 Mean Square
Mean Square is a sum of squares divided by its associated degrees of freedom. This
computation converts the sum of squares to an average (mean square).
Error Mean Square estimates the variance of the error term. It is often denoted as s².
11.4.6.5 F Ratio
F Ratio is the Model Mean Square divided by the Error Mean Square. It tests the
hypothesis that all of the regression parameters (except the intercept) are zero. Under this
whole-model hypothesis, the two mean squares have the same expectation. If the random
errors are normal, then under this hypothesis, the values reported in the Sum of Squares
column are two independent chi-squares. The ratio of these two chi-squares divided by their
respective degrees of freedom (reported in the Degrees of Freedom column) has an
F-distribution. If there is a significant effect in the model, the F Ratio is higher than expected
by chance alone.
11.4.6.6 Prob > F
Prob > F is the probability of obtaining a greater F-value by chance alone if the specified
model fits no better than the overall response mean. Significance probabilities of 0.05 or less
are often considered evidence that there is at least one significant regression factor in the
model. This significance is also shown graphically in Simulated vs. Proxy Predicted plots, as
described in Response Model Verification Plot.
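A short hedged sketch of the whole-model F test described above, using sums of squares and degrees of freedom as they appear in the Analysis of Variance table (the numeric values here are hypothetical and purely illustrative):

    from scipy.stats import f

    # Example ANOVA quantities (hypothetical values for illustration).
    ss_model, df_model = 820.0, 5        # Model sum of squares and degrees of freedom
    ss_error, df_error = 140.0, 24       # Error sum of squares and degrees of freedom

    ms_model = ss_model / df_model       # Model Mean Square
    ms_error = ss_error / df_error       # Error Mean Square
    f_ratio = ms_model / ms_error
    prob_gt_f = f.sf(f_ratio, df_model, df_error)   # upper-tail probability, Prob > F

    print(f_ratio, prob_gt_f)            # a small Prob > F indicates a significant model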
11.4.7 Effect Screening Using Normalized Parameters
11.4.7.1 Term
This column names the estimated terms. The first term is always the intercept. All parameters
are normalized from (Low, High) to (-1, +1).
11.4.7.2 Coefficient
This column lists the parameter estimates for each term. These are the coefficients of the
response surface model found by least squares.
11.4.7.3 Standard Error
Standard Error is the estimate of the standard deviation of the distribution of the parameter
estimate (coefficient). It is used to construct t-tests.
11.4.7.4 t Ratio
t Ratio is a statistic that tests whether the true parameter (coefficient) is zero. It is the ratio of
the coefficient to its standard error and has a Students t-distribution under the hypothesis,
given the normal assumptions about the model.
11.4.7.5 Prob > |t|
Prob > |t| is the probability of getting an even greater t-statistic (in absolute value), given the
hypothesis that the parameter (coefficient) is zero. This is the two-tailed test against the
alternatives in each direction. Probabilities less than 0.05 are often considered as significant
evidence that the parameter (coefficient) is not zero.
11.4.7.6 VIF
This column shows the variance inflation factor, which is a useful measure of the multi-
collinearity problem. Multi-collinearity refers to one or more near-linear dependencies among
the regressor variables due to poor sampling of the design space. Multi-collinearity can have
serious effects on the estimates of the model coefficients and on the general applicability of
the final model.
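As a sketch (assuming the standard definition VIF_j = 1/(1 - R_j²), where R_j² is obtained by regressing column j of the design matrix on the remaining columns; this is illustrative, not CMOST's implementation), variance inflation factors can be computed as follows:

    import numpy as np

    def variance_inflation_factors(X):
        """VIF for each column of X (regressor columns, without the intercept)."""
        n, k = X.shape
        vifs = []
        for j in range(k):
            others = np.delete(X, j, axis=1)
            A = np.column_stack([np.ones(n), others])   # regress x_j on the other regressors
            beta, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
            resid = X[:, j] - A @ beta
            ss_total = np.sum((X[:, j] - X[:, j].mean()) ** 2)
            r2_j = 1.0 - np.sum(resid ** 2) / ss_total
            vifs.append(1.0 / (1.0 - r2_j))
        return np.array(vifs)

    rng = np.random.default_rng(3)
    X = rng.uniform(-1, 1, size=(50, 3))
    X = np.column_stack([X, X[:, 0] + 0.05 * rng.normal(size=50)])  # near-duplicate column
    print(variance_inflation_factors(X))    # the correlated pair shows inflated VIFs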
The larger the variance inflation factor, the more severe the multi-collinearity. Variance
inflation factors should not exceed 4 or 5. If the design matrix is perfectly orthogonal, the
variance inflation factor for all terms will be equal to 1.
11.4.8 Linear Model Effect Estimates
The effect estimate indicates how changing the setting of a parameter changes the response
(objective function). The effect estimate of a single parameter is also called a main effect or
linear effect. To determine the linear (main) effect estimates, the simulation results are fit
using a linear proxy model:

    y = a_0 + a_1 x_1 + a_2 x_2 + \cdots + a_k x_k

In effect screening, the coefficients a_1, a_2, ..., a_k are called parameter estimates or effect
estimates. In the above equation, a large coefficient suggests that the parameter is important
because it means that increasing or decreasing the parameter value leads to a significant
change in the objective function (response). On the other hand, a small coefficient would
imply that the parameter is not important.
Note that parameter estimates are highly dependent on the scale of the parameter. For
example, if you convert a parameter from grams to kilograms, the parameter estimates change
by a multiple of a thousand. Therefore, the effect estimates should be determined in a scale-
invariant fashion. There are many approaches to doing this. In CMOST, all parameters are
scaled to have a mean of zero and a range of two; i.e., all parameters are scaled to have a
range from -1 to 1. For a simple linear proxy model, the scaled estimate is half the predicted
response change as the parameter travels its full range (i.e., from -1 to 1).
To avoid ambiguous interpretation of tornado plots, CMOST reports the actual predicted
response change as the parameter travels from the smallest sample value to the largest sample
value. As an example, consider the following tornado plot of linear effect estimates:

[Tornado plot of linear effect estimates, shown together with the Minimum, Maximum, and Target values]
The above tornado plot shows that the linear effect estimate for PERMV(1000, 2000) is
207.7. This means that if you increase PERMV from 1000 to 2000, the expected increase of
cumulative oil production is 207.7. Here the word expected is used because a linear proxy
model is an approximation of the real reservoir simulation model. The actual increase of the
objective function due to the change of PERMV from 1000 to 2000 varies for different
combinations of sample values of the other parameters.
To demonstrate the relative importance of different parameters, all of the effect estimates are
plotted on the same scale together with Maximum, Minimum, and Target values. The
Maximum is the maximum objective function value of all simulation runs in the design and the
Minimum is the minimum objective function value of all simulation runs in the design. The
Target is the value in the target field history file, if one is specified. For example, the above plot
shows that the Maximum is less than the Target. This indicates that it is not possible to match
the historical value using the given set of parameters and the defined ranges. Based on the effect
estimates of PERMV and PERMH, we may need to adjust the ranges to PERMV(2000, 3000)
and PERMH(3000, 5000) to match the historical value (target) of cumulative oil.
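To make the tornado ranking concrete, the sketch below doubles the scaled coefficients of a linear proxy to obtain the full-range effect estimates (assuming the samples span the full parameter range) and sorts them by absolute magnitude, which is how the bars of a tornado plot are ordered. The coefficient values are hypothetical and only echo the magnitudes discussed above; this is an illustration, not the CMOST algorithm.

    import numpy as np

    # Coefficients of a linear proxy fitted on parameters normalized to [-1, +1]
    # (hypothetical values; parameter names follow the example above).
    coefficients = {"PERMV": 103.85, "PERMH": 96.0, "POR": -28.9, "HTSORG": -22.3}

    # The scaled coefficient is half the response change over the full range,
    # so the full-range effect estimate is twice the coefficient.
    effects = {name: 2.0 * a for name, a in coefficients.items()}

    # Order by absolute effect, largest first: this is the bar order of a tornado plot.
    for name, effect in sorted(effects.items(), key=lambda kv: abs(kv[1]), reverse=True):
        print(f"{name:8s} {effect:+8.1f}")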
11.4.9 Quadratic Model Effect Estimates
For second-degree (quadratic) polynomial models, parameter interaction effects (cross terms
x_i x_j) and quadratic effects (x_j^2) can be extracted in addition to linear effects (x_j):

    y = a_0 + \sum_{j=1}^{k} a_j x_j + \sum_{j=1}^{k} a_{jj} x_j^2 + \sum_{i<j}^{k} a_{ij} x_i x_j
Similar to linear model effect estimates, quadratic model effect estimates are determined in a
scale-invariant fashion. More specifically, all parameters are scaled to have a mean of zero
and a range from -1 to 1. For this reason, for the linear and cross terms in the quadratic
model, the scaled estimate is half the predicted response change as the parameter travels
through its range (from -1 to 1). To avoid ambiguous interpretation of tornado plots, CMOST
reports the actual predicted response change as the parameter (or the cross and quadratic
terms) travels from the smallest sample value to the largest sample value. A sample tornado
of polynomial effect estimates is shown below:

[Tornado plot of quadratic (polynomial) model effect estimates]
The following table explains the interpretation of the above tornado plot:

Term                 Scaled Term                             Effect Estimate (see note)
PERMH(3000, 6000)    2(PERMH - 3000)/(6000 - 3000) - 1       273
PERMV(2000, 2800)    2(PERMV - 2000)/(2800 - 2000) - 1       223.2
POR*PERMH            POR_scaled * PERMH_scaled               -62.86
POR(0.22, 0.36)      2(POR - 0.22)/(0.36 - 0.22) - 1         -57.77
POR*PERMV            POR_scaled * PERMV_scaled               -57.35
POR*POR              POR_scaled * POR_scaled                 48.52
HTSORG(0.02, 0.06)   2(HTSORG - 0.02)/(0.06 - 0.02) - 1      -44.63
PERMH*PERMV          PERMH_scaled * PERMV_scaled             44.62

NOTE: Effect Estimate is the expected change of the objective function when the scaled
term travels from -1 to +1.
Analysis of this particular tornado plot suggests the following conclusions regarding the
sensitivities of the parameters on cumulative oil production:
1. The two most important effects are the main (linear) effects of PERMH and
PERMV.
2. The linear effects of POR and HTSORG are relatively important.
3. There are interaction effects between POR and PERMH, POR and PERMV, and
PERMH and PERMV.
4. The non-linear (quadratic) effect of POR*POR is also relatively important.
11.4.10 Reduced Model Effect Estimates
It is common that some model terms of a linear or quadratic model are not statistically
significant. Consider the quadratic model with the following Effect Screening table, where
all terms, including those that are statistically insignificant, are included:

[Effect Screening table showing all terms, including the statistically insignificant ones]

Through the Proxy Settings tab, if you set Exclude Statistically Insignificant Terms to True,
then the proxy model will be built using only those terms that are significant; i.e., those whose
Prob > |t| values are less than or equal to the value you set for Significant Probability Alpha.
A simple quadratic model can then be built using only significant terms. The Effect Screening
table and its corresponding tornado plot for the simple quadratic model are shown below:

[Effect Screening table and tornado plot for the model built from the significant terms only]
11.5 Optimizers
11.5.1 CMG DECE
The CMOST DECE (Designed Exploration and Controlled Evolution) optimizer implements
CMG's proprietary optimization method. The DECE optimization method is based on the
process which reservoir engineers commonly use to solve history matching or optimization
problems. For simplicity, DECE optimization can be described as an iterative optimization
process that first applies a designed exploration stage and then a controlled evolution stage. In
the designed exploration stage, the goal is to explore the search space in a designed random
manner such that maximum information about the solution space can be obtained. In this
stage, experimental design and Tabu search techniques are applied to select parameter values
and create representative simulation datasets. In the controlled evolution stage, statistical
analyses are performed for the simulation results obtained in the designed exploration stage.
Based on the analyses, the DECE algorithm scrutinizes every candidate value of each
parameter to determine if there is a better chance to improve solution quality if certain
candidate values are rejected (banned) from being picked again. These rejected candidate
values are remembered by the algorithm and they will not be used in the next controlled
exploration stage. To minimize the possibility of being trapped in local minima, the DECE
algorithm checks rejected candidate values from time to time to make sure previous rejection
decisions are still valid. If the algorithm determines that certain rejection decisions are not
valid, the rejection decisions are recalled and corresponding candidate values are used again.
The DECE optimization method has been successfully applied in a number of real-world
reservoir simulation studies, including:
History matching for a highly heterogeneous black oil model
History matching of cold heavy oil production with aquifer
History matching of cyclic steam stimulation process
NPV optimization for a post-primary SAGD model with aquifer
NPV optimization for a 6 well pair SAGD model
The results demonstrate that the DECE optimization method is reliable and efficient.
Therefore, it is one of the recommended optimization methods in CMOST.
11.5.2 Latin Hypercube plus Proxy Optimization
Use of this optimization algorithm involves the following four steps:
1. Latin Hypercube Design: The purpose of Latin hypercube design is to construct
combinations of the input parameter values so that the maximum information can be
obtained from the minimum number of simulation runs. Latin hypercube design is
chosen here because it can handle any number of input parameters with mixed levels.
See Latin Hypercube Design for further information.
2. Proxy Modeling: In this step, an empirical proxy model is built using training data
obtained from Latin hypercube design runs. The proxy model options available are
polynomial regression model and ordinary kriging. Polynomial regression models
have been widely used for the analysis of physical and computer experiments due to
their ease of understanding, flexibility, and computational efficiency. The cost of the
ordinary kriging interpolation estimate is normally significantly higher than the cost
of the polynomial regression estimate; however, it is still orders of magnitude faster
than actual simulation and it often provides more accurate prediction than
polynomial models. Refer to Proxy Modeling for further information.
3. Proxy-based Optimization: Due to the intrinsic limitations of a proxy model, it is
generally recognized that they usually cannot give accurate predictions for highly
nonlinear multidimensional problems. Therefore, the optimal solution obtained
based on the proxy model may not be the true optimal for the actual reservoir
model. This means that certain suboptimal solutions of the proxy model may
become the true optimal solution for the actual reservoir model. To counteract
false optimum predictions, a pre-defined number of possible optimum solutions
(i.e., suboptimal solutions of the proxy model) are generated to increase the chance
of finding the global optimum solution.
4. Validation and Iteration: For each possible optimum solution found through proxy
optimization, a reservoir simulation needs to be conducted to obtain the true
objective function value. To further improve the prediction accuracy of the proxy
model, the validated solutions can be added to the initial training data set. The
updated training data set can then be used to build a new proxy model. With the
new proxy model, a new set of possible optimum solutions can be obtained. This
iterative procedure can be continued for a given number of iterations or until a
satisfactory optimal solution is found.
The following figure illustrates the workflow of the Latin hypercube plus proxy optimization
algorithm:

[Workflow figure: Generate initial Latin hypercube design -> Run simulations using the design ->
Get initial set of training data -> Build a proxy model (polynomial or ordinary kriging) using the
training data -> Find possible optimum solutions using the proxy -> Run simulations using these
possible solutions -> If the stop criteria are not satisfied, add the validated solutions to the
training data and repeat; otherwise stop]
One unique characteristic of Latin Hypercube plus Proxy optimization is a jump in the
solution quality after the Latin hypercube design is finished and proxy optimization starts.
For example, as shown in the following figure, after the initial 60 Latin Hypercube design
runs, the global optimum solution is quickly found within two iterations of proxy
optimization (there are 10 experiments in each iteration).

[Figure: objective function value vs. experiment number, with the Latin hypercube design runs
followed by the proxy optimization iterations]
11.5.3 Particle Swarm Optimization
Particle swarm optimization (PSO) is a population-based stochastic optimization technique
developed by James Kennedy and Russell C. Eberhart in 1995, inspired by the social behavior of
bird flocking and fish schooling.
Social influence and social learning enable a person to maintain cognitive consistency. People
solve problems by talking with other people about them and, as they interact, their beliefs,
attitudes, and behaviors change. The changes can be depicted as the individuals moving
toward one another in a sociocognitive space.
Particle swarm simulates this kind of social optimization. The system is initialized with a
population of random solutions and searches for optima by updating generations. The
individuals iteratively evaluate their candidate solutions and remember the location of their
best success so far, making this information available to their neighbors. They are also able to
see where their neighbors have had success. Movements through the search space are guided
by these successes, with the population usually converging towards good solutions.
For information about configuring a CMOST particle swarm optimization, refer to Particle
Swarm Optimization (PSO).
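A minimal particle swarm sketch is shown below (illustrative only; CMOST's PSO update rules and settings are configured as described in Particle Swarm Optimization (PSO)). Each particle is pulled toward its own best position and toward the best position found by the swarm.

    import numpy as np

    def pso(objective, bounds, n_particles=20, n_iterations=100,
            w=0.7, c1=1.5, c2=1.5, seed=0):
        """Minimal particle swarm optimizer (minimization) over box bounds."""
        rng = np.random.default_rng(seed)
        low, high = bounds
        dim = len(low)
        x = rng.uniform(low, high, size=(n_particles, dim))        # positions
        v = np.zeros((n_particles, dim))                           # velocities
        pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
        gbest = pbest[np.argmin(pbest_val)].copy()                 # best solution so far
        for _ in range(n_iterations):
            r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # cognitive + social pull
            x = np.clip(x + v, low, high)
            values = np.array([objective(p) for p in x])
            improved = values < pbest_val
            pbest[improved], pbest_val[improved] = x[improved], values[improved]
            gbest = pbest[np.argmin(pbest_val)].copy()
        return gbest, pbest_val.min()

    # Toy usage: minimize a simple quadratic bowl.
    best_x, best_val = pso(lambda p: np.sum((p - 0.5) ** 2),
                           (np.array([-1.0, -1.0]), np.array([1.0, 1.0])))
    print(best_x, best_val)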
11.5.4 Differential Evolution
Differential Evolution (DE) is a powerful global optimization algorithm that was introduced
by Storn and Price (1995)¹, based on their solution to the Chebychev polynomial fitting
problem.
DE uses a fixed number, Np, of vectors (solutions) in each population (or generation), with
each vector representing a combination of parameter values. To create new solutions, DE
evolves the population by arithmetically operating on these vectors (solutions). The process
involves four steps: initialization, mutation, crossover, and selection.
The system is initialized with a population of random solutions or predefined solutions. It
then searches for optima by updating the populations/generations. The mutation process
involves adding a scaled difference of two solutions, using factor F, to the best solution in
each generation, to evolve the population/generation. The crossover operation uses factor Cr
to increase the population diversity. Finally, a selection operator is applied to preserve the
optimal solutions for the next generation.
For information about configuring a CMOST differential evolution, refer to Differential
Evolution (DE).
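A minimal differential evolution sketch showing the four steps (initialization, mutation, crossover, selection) is given below. The mutation strategy here perturbs the current best vector with a scaled difference of two population members, loosely matching the description above, but it is an illustration rather than the CMOST implementation.

    import numpy as np

    def differential_evolution(objective, bounds, np_size=20, f_weight=0.8, cr=0.9,
                               generations=100, seed=0):
        """Minimal DE (minimization) with a best/1/bin-style strategy over box bounds."""
        rng = np.random.default_rng(seed)
        low, high = bounds
        dim = len(low)
        pop = rng.uniform(low, high, size=(np_size, dim))              # initialization
        values = np.array([objective(x) for x in pop])
        for _ in range(generations):
            best = pop[np.argmin(values)]
            for i in range(np_size):
                r1, r2 = rng.choice(np_size, size=2, replace=False)
                mutant = best + f_weight * (pop[r1] - pop[r2])          # mutation (factor F)
                mutant = np.clip(mutant, low, high)
                cross = rng.random(dim) < cr                            # crossover (factor Cr)
                cross[rng.integers(dim)] = True                         # keep at least one mutant gene
                trial = np.where(cross, mutant, pop[i])
                trial_value = objective(trial)
                if trial_value <= values[i]:                            # selection
                    pop[i], values[i] = trial, trial_value
        return pop[np.argmin(values)], values.min()

    best_x, best_val = differential_evolution(lambda x: np.sum(x ** 2),
                                              (np.array([-5.0, -5.0]), np.array([5.0, 5.0])))
    print(best_x, best_val)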
11.5.5 Random Brute Force Search
The brute force search method is a straightforward optimization method that evaluates all
possible solutions and decides afterwards which one is the best. It is feasible only for small
problems in terms of the dimensionality of the search space, since CMOST requires that the
search space (the number of all possible parameter value combinations) be less than 65536.
To address the limitations of the brute force search, CMOST has implemented random search
methods for optimization, based on exploring the domain in a random manner to find optimum
solutions. These are the simplest methods of stochastic optimization and can be quite effective
in some problems (small search space and fast-running simulation jobs). There are many
different algorithms for random search such as blind random search, localized random search,
and enhanced localized random search. The algorithm implemented in CMOST is blind random
search. This is the simplest random search method, where the current sampling does not take
into account the previous samples. That is, this blind search approach does not adapt the current
sampling strategy to information that has been garnered in the search process. One advantage of
blind random search is that it is guaranteed to converge to the optimum solution as the number
of function evaluations (simulations) gets large. Realistically, however, this convergence
feature may have limited use in practice since the algorithm may take a prohibitively large
number of function evaluations (simulations) to reach the optimum solution.
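A blind random search reduces to a few lines. As a hedged sketch (the objective below is a toy stand-in for a simulation run), each sample is drawn without reference to earlier samples and only the best result is retained:

    import numpy as np

    def blind_random_search(objective, low, high, n_evaluations=1000, seed=0):
        """Blind random search: sample uniformly, keep the best; no adaptation between samples."""
        rng = np.random.default_rng(seed)
        best_x, best_val = None, np.inf
        for _ in range(n_evaluations):
            x = rng.uniform(low, high)                    # each sample ignores previous samples
            value = objective(x)
            if value < best_val:
                best_x, best_val = x, value
        return best_x, best_val

    # Toy usage with a simple objective standing in for a simulation run.
    print(blind_random_search(lambda x: np.sum((x - 1.0) ** 2),
                              low=np.array([-5.0, -5.0]), high=np.array([5.0, 5.0])))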
For information about random brute force search configuration settings, refer to Random
Brute Force Search.

¹ Storn, R. and Price, K., Differential Evolution - A Simple Efficient Adaptive Scheme for Global
Optimization over Continuous Spaces, Technical Report 95-012, Int. Comp. Sci. Inst., Berkeley, CA, 1995.
12 Glossary
The following terms are:
CMOST terms, or terms that have specific meaning within the CMOST context.
Terms needed to describe the application and use of CMOST.
Term Definition
Base 3tp File File created by Results 3D using the base IRF. The base 3tp file is
used by CMOST as the basis for displaying plots in Results Graph.
Base Dataset A valid dataset for any CMG simulator that is used as the basis for a
CMOST study. The master dataset is derived from the base dataset.
Base IRF Simulation results file which uses default parameter values.
Base Session File File created by Results Graph using the base IRF. The base
session file is used by CMOST as the basis for displaying plots in
Results Graph.
Box-Behnken Design In Uncertainty Assessment, a set of experiments designed to have
more runs at the middle values of the input parameters.
Brute Force Search History Matching and Optimization method in which all
combinations of parameter values are tested.
Candidate Values List List of values that will be substituted for a discrete type parameter
in a Master Dataset.
Central Composite
Design
In Uncertainty Assessment, a set of experiments with runs which
are evenly distributed at low, middle, and high values of the
input parameters.
Characteristic Date
Time
Dates used in the calculation of an objective function.
Characteristic date times include:
Built-in fixed date times, the simulation start and end date
times derived from the SR2 files.
Fixed date times, entered by users.
Dynamic date times from original time series, such as the date
the value of an original time series exceeds a certain quantity.
Dynamic date times from user-defined time series.
CMG DECE See DECE.
CMM Editor Tool for viewing, navigating, and editing the CMOST Master
Dataset (.cmm) and related include files (.inc) files. For further
information refer to CMM File Editor.
CMM File See Master Dataset.
CMM File Editor See CMM Editor.
CMOST CMG's sensitivity assessment (SA), history matching (HM),
optimization (OP), and uncertainty assessment (UA) tool.
CMR File Results file from earlier version of CMOST. Refer to Converting
old CMOST Files to new CMOST Files for information about
converting CMR files to new CMOST project and study files.
CMT File Task file from earlier version of CMOST. Refer to Converting
old CMOST Files to new CMOST Files for information about
converting CMT files to new CMOST project and study files.
Constraint In History Matching and Optimization, used to prevent
unnecessary simulation runs and to allow users to
change Objective Function values when constraints are violated.
For further information, refer to Hard Constraint and Soft
Constraint.
Cross Plot XY plots which are used to identify trends and relationships. The
axes can, for example, be parameters or objective functions. For
further information, see Parameter Cross Plots and Objective
Function Cross Plots.
Date Time Refer to Characteristic Date Time for information about date
times used in CMOST.
DE (Differential
Evolution)
History matching and optimization method in which the run is
initialized with a population of random solutions or pre-defined
known ones. DE attempts to find parameter values in an
intelligent manner to get optimal solutions. Refer to Differential
Evolution (DE) for more information.
DECE (Designed
Exploration Controlled
Evolution)
CMG-proprietary History Matching and Optimization method.
For further information, see CMG DECE.
Dictionary File Text file that contains a repository for descriptions (name,
dimensions, data range, and so on) of simulation data items in the
SR2 files.
Dynamic Date Time A date time from an original or a user-defined time series, on
which a condition is met for the first or the last time; for example,
the first date time at which a property reaches a critical value.
Experiment A CMOST experiment is defined by a unique set of
input parameters and objective functions.
Experimental Design Definition of a set of experiments, optimally selected to obtain
information about a response.
FHF (Field History
File)
A text file containing reservoir production or injection data for
one or more wells. Field History Files are required only
for History Matching.
Fixed Date Simulation
Results Observer
A Results Observer that collects data at one point in time for each
simulation.
Fixed Date Times User-defined fixed date, for example, YearEnd2011, used in the
determination of an objective function, such as the cumulative oil
produced by the end of 2011. Refer to Characteristic Date Times
for information about specifying fixed data times.
Fluid Contact Depth
Series
If the SR2 files contain fluid saturation data, CMOST can
calculate gas-oil, water-oil, and water-gas contact depths at well
locations. These depths are calculated for each time step that fluid
saturation data is available. These depths can then be used as time
series data for history matching.
Formula Equation entered in a Master Dataset to perform calculations on
values during a CMOST run.
Fractional Factorial
Design
In classic experiment design, a sampling method in which a
subset of the samples determined from a full factorial design is
chosen to determine information about the important aspects of a
study.
Full Factorial Design Study with experiments that take on all possible combinations of
parameter values at two or three selected levels.
Fundamental Data Data (time, time series, distance vs. depth, and fluid contacts) that
is obtained or calculated directly from SR2 files, and used for
history matching and optimization.
Hard Constraint If a hard constraint is violated, the simulation run will not take
place as these constraints are checked by the CMOST engine
prior to the start of the run. See Hard Constraints for information
about specifying hard constraints.
Histogram A graph with values or ranges of values on the x-axis and bars,
the height of which represents occurrence, in the y direction.
HM (History Match) CMOST analysis to match simulation results to production
history.
History Matched
Model
See Matched Model.
History Match Error Percentage relative error between simulation results and production
history obtained from, for example, a Field History File.
Include File A data file (an array of porosity data, for example) that is
included in a Master Dataset by reference.
Intermediate
Parameter
A parameter that is entered in the Parameters table to help define
the relationships between two other parameters. For an example,
refer to To add an intermediate parameter.
IRF (Indexed Results
File)
Text file in the SR2 file system describing the data in the MRF
(Main Results File) and how to obtain this data.
Kriging Estimating the values of geostatistical variables at locations
where samples have not been taken, using weighted values of
neighboring samples.
LHD (Latin
Hypercube Design)
Technique for constructing combinations of input parameter
values so that the maximum information can be obtained from the
minimum number of simulation runs.
Latin Hypercube Plus
Proxy Optimization
Latin hypercube design is used to construct experiments then an
empirical proxy model is built using the training data obtained
from the Latin hypercube design runs. The proxy model is then
used to determine the optimal solution. See Latin Hypercube plus
Proxy for further information.
Local Objective
Function
A function that the user wants to minimize (history match error,
for example) or maximize (net present value, for example). Refer
to Objective Function for additional information.
Master Dataset
(CMM)
Version of the Base Dataset that has been modified to tell
CMOST where to enter different parameter values, thereby
creating new datasets for each experiment.
Match Quality In history matching, a measure of the match between the results
of a CMOST study and a field history file. Refer to History
Match Quality for further information.
Matched Model Model produced by minimizing the history match error. Also
referred to as a history matched model.
Monte Carlo
Simulation
Simulations that involve repeated generation of outputs using
randomly generated inputs which follow defined probability
distributions.
Monte Carlo
Simulation Using
Proxy
Uncertainty Assessment method. Using Monte Carlo simulation,
inputs are randomly generated from probability distributions to
simulate the process of sampling from an actual population.
These inputs are then fed into the response surface (proxy) model,
which is used to generate outputs and determine the uncertainty in
the reservoir model. See Monte Carlo Simulation Using Proxy for
further information.
Monte Carlo
Simulation Using
Reservoir Simulation
Uncertainty Assessment method. In this case, the inputs selected
from the Monte Carlo simulation are run through the simulator to
generate outputs and determine the uncertainty in the reservoir
model. See Monte Carlo Simulation Using Simulator for further
information.
MRF (Main Results
File)
Main Results File, a binary file in the SR2 file system containing
simulation data.
NPV (Net Present
Value)
Stream of future cash flows discounted to a given date (present
date or base date) to reflect the time value of money and other
factors, such as investment risk.
Objective Function An expression or quantity that the user wants to minimize or
maximize. In the case of History Matching, for example, the user
wants to minimize the error between field data and simulation
results. In the case of Optimization, the user may want to
maximize net present value.
Observer See Results Observer.
OPAAT (One
Parameter At A Time
Sampling)
Traditional method for performing Sensitivity Analysis studies, in
which information about the effect of a parameter is determined
by varying only that parameter. The procedure is repeated, in
turn, for all parameters to be studied. Refer to One-Parameter-at-
a-Time Sampling for more information.
OP (Optimization) Identification of an optimal field development plan, and operating
conditions that will produce either a maximum or minimum value
for objective functions that the user has specified, in particular
the global objective function (OF, for example, the net present
value, or NPV) and subsidiary OFs, which reflect the influence
of selected operating parameters.
Optimal Model Model determined from an optimization study; i.e., the parameter
values which, for example, maximize net present value, or NPV.
Optimizer Algorithm used to find the optimal solution for history matching
or optimization studies. In the case of CMOST, these algorithms
include CMG DECE Optimizer, Latin Hypercube Plus Proxy
Optimization, Particle Swarm Optimization, and Random Brute
Force Search.
Ordinary Kriging Kriging that relies on the spatial correlation structure of the data
to determine the weighting values that should be applied for a
particular location; for example, the further the data point is from
the location, the lower the weighting factor.
Origin Source of simulation data, for example, a well or a field.
Original Time Series Time series data obtained directly from the simulator SR2 files.
Orthogonality In CMOST, the orthogonality of an experiment design is
measured by the maximum pair-wise correlation of the columns
of a design matrix. Refer to the information about Orthogonality.
.out File A text file that echoes the contents of the .dat file, and also
includes simulation results. Users are able to read this file.
Parameter Depending on the experimental design, values are substituted
for parameters in the Master Dataset, either from a Candidate
Values List or a formula.
PSO (Particle Swarm
Optimization)
History Matching and Optimization method in which the run is
initialized with a population of random solutions. Navigation
through the search space is guided by the best success so far,
which usually results in a convergence towards the best solution.
Refer to Particle Swarm Optimization for more information.
Plackett-Burman
Design
In Sensitivity Analysis, a screening method in which the resulting
number of experiments is a multiple of four. This method can be
used when you have a large number of potential factors and you
want to quickly determine those that will most affect the objective
function.
Pre-simulation
Commands
Commands that are used to modify the experiment dataset before
it is submitted to a simulator; for example, users may want to
adjust variogram parameters in History Matching.
Prior Probability
Distribution Function
In Uncertainty Assessment, the probability distribution of the
input parameter values. The information is used to formulate the
parameter values used in uncertainty tests, so that their
distribution reflects reality.
Project A collection of studies defined for the purpose of characterizing
the performance of, for example, a field, sector, group, or even a
single well. CMOST projects consist of one or more studies, each
of which consists of one or more experiments.
Property vs. Distance
Series
Type of data series, such as saturation vs. distance along the well,
which is retrieved from the SR2 files for one instant in time. Used
for history matching, property vs. distance series data can be
compared with data obtained from one or more well log files.
Proxy Dashboard Through the Proxy Dashboard, you can immediately start to
inspect and assess the effects of varying parameter values on time
series while the study is running. Refer to Proxy Dashboard for
further information.
Proxy Model An empirical model built using data obtained from simulation
runs. The proxy model will typically run several orders of
magnitude faster than actual simulations. Refer to Proxy
Modeling for more information.
Proxy-based
Optimization
Optimization method, in which a predefined number of possible
optimal solutions, obtained from the proxy model, is run through
the simulator to obtain the true optimal solution. See Proxy-based
Optimization.
Random Brute Force History Matching and Optimization method in which all
combinations of parameter values are tested, with the starting
point and path through the parameter values different for each
run. See Random Brute Force Search for further information.
RSM (Response
Surface Methodology)
For Sensitivity Analysis and Uncertainty Assessment using
classical experimental design or Latin hypercube design, a
response surface methodology is applied. Response surface
methodology (RSM) explores the relationships between input
variables (parameters) and responses (objective functions). A set
of designed experiments is used to build a proxy model
(approximation) of the reservoir objective function. The most
common proxy models take either a linear or quadratic form.
After a proxy model is built, Tornado plots displaying a sequence
of parameter estimates are used to assess parameter sensitivity.
Refer to Response Surface Methodology for further information.
Restart (.rst) File This file contains the information that allows a simulation to
continue from a previously halted run.
Results File (CMR) With CMOST 2012 and earlier, results are saved to a CMR file.
Results Observers Simulation outputs that CMOST caches in its results file. During
CMOST runs, results observers display results specified by the
user. As the run progresses, more and more curves or plots will
appear on the plots, with the optimal runs highlighted. The user
can also highlight the results of specific experiments.
R-Square (R²) Indicates how well a proxy model fits observed data. An R² of 1
occurs when there is a perfect fit (the errors are all zero). An R² of
0 means that the proxy model predicts the response no better than
the overall response mean.
R-Square Adjusted Modification of R² that adjusts for the number of explanatory
terms in a model. Unlike R², the adjusted R² increases only if the
new term improves the proxy model more than would be
expected by chance. The adjusted R² can be negative, and it will
always be less than or equal to R².
R-Square Predicted Indicates how well a proxy model predicts responses for new
observations. Ranging between 0 and 1, larger values suggest
models of greater predictive ability.
Run Configuration Specification of the machines to which CMOST will submit jobs;
for example, to the user's local machine or to a cluster of
machines accessible through the network.
Sampling Method Method by which the parameter space is sampled when
performing a Sensitivity Analysis or Uncertainty Assessment. For
further information, refer to Sampling Methods.
SA (Sensitivity
Analysis)
Analysis carried out to determine which parameters have the
greatest effect on simulation results. This information is then
useful in suggesting parameters that can be eliminated from
consideration in subsequent studies.
Soft Constraint Allows the user to override objective function values if they
violate the constraint. Checking for this violation takes place
while the simulation is being run. A penalty for constraint
violation can also be defined. See Soft Constraints for
information about specifying soft constraints.
Special Dictionary File Dictionary file required to process SR2 files produced by a
special simulator, such as the STARS-ME simulator.
SR2 Files Group of files containing the results of a simulation run: an IRF
(Indexed Results File) and an MRF (Main Results File). Results
Graph and 3D use the SR2 files for post-processing of simulation
output.
SR2 Processing Stack
Size
Stack size (MB) used by the SR2 reader to read SR2 files. The
default stack size is 40 MB.
Study A set of parameters is varied in a defined way to assess the
sensitivity of parameters on objective functions (SA, Sensitivity
Analysis), to match simulator outputs with a history file
(HM, History Match), to optimize the value of objective functions
by varying operating conditions (OP, Optimization), or to assess
the variation of an objective function due to uncertainty in the
value of a reservoir parameter (UA, Uncertainty Assessment).
Study File (.cms) A file that contains all of the configuration data needed to run a
CMOST study.
Study Folder (.cmsd) Folder that contains all of the study .dat, SR2, .log, and .vdr files.
The retention of .dat, .log and SR2 files is as specified by the user
in the Job Record and File Management area of the Simulation
Settings page.
Time Series
Simulation Results
Observer
Results observer which collects and plots data that changes with
time, such as rate and pressure for all times during the simulation
runs.
Tornado Plots A tornado plot is produced for each objective function.
Parameters are ordered vertically, from those that have the
greatest effect on the objective function (longest bar) to those that
have the least effect (shortest bar). The result is a graph that looks
like a tornado.
Training Data Data used to build a proxy model by analyzing the relationship
between input parameters and output objective functions.
Three-level Classical
Experimental Design
Experiments take on all possible combinations of three values or
levels for each input parameter; i.e., a low, median, and high
level.
Two-level Classical
Experimental Design
Experiments take on all possible combinations of two values or
levels for each input parameter; i.e., a low and high level.
UA (Uncertainty
Assessment)
Analysis carried out to determine the likely variation in simulation
results due to uncertainty, in particular, of reservoir variables.
User-defined Time
Series
Time series that is not directly available from the SR2 files, but
which can be derived from available SR2 data. Refer to User-
Defined Time Series for further information.
Variogram Used in kriging, the variogram describes the variance of the
difference between the field values at two locations (x and y)
across realizations of the field (Cressie, N., 1993, Statistics for
Spatial Data, Wiley Interscience).
VDR (Vector Data
Repository) Files
Files containing compressed simulation data from CMOST runs,
which are used to calculate objective functions. The files are
compressed to reduce disk space and runtime.
13 Index
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
A
Advanced objective functions, 100
Advanced settings, 61
B
Base dataset, 21
Base files, 20
Base IRF, 21
Base session file, 21
Base SR2 files, 21
Base SR2 Info area, 60
Basic simulation result, 90
Best practices (for using CMOST), 32
C
Characteristic date times, 88
Classical experimental design, 216
CMG DEC optimization, 229
engine settings, 117
CMM File Editor, 173
block selection, 178
comments, 175
context menu, 174
creating/inserting CMOST parameters,
174
deleting parameters, 175
enable/disable syntax, 178
find/replace text, 178
include files, 175
keyboard shortcuts, 180
multiple views, 179
navigation tools, 177
starting, 173
syntax enabling and disabling, 178
toggle outlining, 177
CMOST
base files, 20
best practices, 32
closing, 57
components, 20
concepts, 20
configuring to work with Launcher, 193
file system, 21
formulas, 27
master dataset, 24
names, 52
navigating, 35
opening, 35
overview, 17
project components, 20
required fields, 52
running and controlling, 111
tab display, 53
tables, 54
user interface, 30
CMOST Formula Editor, 181
built-in functions, 183
constants (in formulas), 181
formula calculation order, 183
functions (in formulas), 181
operators (in formulas), 182
parts (of formulas), 181
variables (in formulas), 182
CMOST formulas, 27
Constraints
hard constraints, 80
soft constraints, 107
Control Centre, 111
Converting files to new CMOST, 11
Creating and editing input data, 59
Curves, 50
D
Data points, 50
Default Field Values, 53
Differential evolution, 233
engine settings, 121
E
Engine settings, 114
CMG DECE optimization, 117
differential evolution, 121
external engine, 123
Latin hypercube plus proxy optimization,
118
Monte Carlo simulation using proxy, 118
Monte Carlo simulation using simulator,
120
one-parameter-at-a-time (OPAAT), 120
particle swarm optimization (PSO), 121
random brute force search, 121
response surface methodology, 122
Experiment
creating, 139
highlighting, 50
quality, checking, 146
Experiments Table, 131
checking experiment quality, 146
configuring, 143
creating experiments, 139
exporting to Excel, 147
navigating, 132
reprocessing experiments, 147
viewing simulation log, 147
Exporting time series data, 161
External engine
external engine and user-defined
executable, 123
F
Field data info area, 60
Field Default Values, 53
Field history file, 21
File Editor (see CMM File Editor)
File system, 21
Fluid contact depth series, 70
Formula Editor (see CMOST Formula
Editor)
Formulas
CMOST, 27
examples, 27
Fundamental data, 61
G
General information area, 59
General properties, 59
Getting started, 35
Global objective function candidates, 105
Glossary, 235
H
Handling large files, 181
Hard constraints, 80
Head nodes, 30
Help
obtaining, 16
Highlighting (an experiment), 50
History match (HM)
overview, 18
History match quality, 91
I
Include files, 28
Intermediate parameter, 75
J
JScript
using in CMOST, 189
L
Large files, handling, 181
Latin hypercube design, 212
Latin hypercube plus proxy optimization,
230
engine settings, 118
Launcher
configuring, 15
configuring to work with CMOST, 193
Licenses, 15
M
Manual
about, 15
Master dataset, 24
editing parameters, 77
referenced files, 29
syntax, 26
Monte Carlo simulation using proxy
engine settings, 118
Monte Carlo simulation using simulator
engine settings, 120
Multiple studies, managing, 41
N
Net present value, NPV, 96
O
Objective functions, 88, 205
advanced, 100
global, 105
history match quality, 91
net present value, NPV, 96
Observer plots
property vs. distance series, 162
time series, 160
One-parameter-at-a-time (OPAAT)
engine settings, 120
Optimization (OP)
overview, 18
Optimizers
CMG DECE, 229
Latin hypercube plus proxy optimization,
230
particle swarm optimization, 232
random brute force search, 233
Original time series, 62
Orthogonality, 210
P
Parameter correlation, 78, 217
Parameterization, 72
Parameters
adding, 72
copying, 77
deleting, 76
editing in master dataset, 77
importing from master dataset, 78
intermediate, 75
moving in table, 77
prior probability distribution functions, 75
Particle swarm optimization (PSO), 232
engine settings, 121
Plots
copying image, 49
data points and curves, 50
highlighting, 50
saving image, 49
zooming in and out, 50
Pre-simulation commands, 82
Prior probability distribution functions, 75
Probability distribution functions, 203
Production history files, 21
Project
creating, 39
folder, 21
Property vs. distance series
configuring, 67
observer plot, 162
Proxy dashboard, 147
adding experiments, 151
assessing predictions, 151
building proxy model, 149
changing proxy role, 152
opening, 140
using, 151
Proxy modeling, 218
R
Random brute force search, 233
engine settings, 121
Requirements
files, 20
computers, 15
licenses, 15
Reprocessing experiments, 147
Response surface methodology
engine settings, 122
proxy modeling, 218
types of response surface models, 218
verification plot, 220
Reuse pending, 133
Running and controlling CMOST, 111
S
Sampling methods, 209
classical experimental design, 216
Latin hypercube design, 212
one-parameter-at-a-time (OPAAT)
sampling, 211
Screen operations and conventions, 49
Sensitivity analysis (SA)
overview, 17
Simulation
files, 20
settings, 126
Simulation jobs, 152
Simulation settings, 126
job record and file management, 131
schedulers, 127
simulator settings, 129
Soft constraints, 107
Study
adding existing, 46
changing display name, 46
copying, 49
creating, 41
engines, 23
excluding, 47
importing data from, 48
loading, 46
types, 23
unloading, 46
workflow, 23
Study manager, 41
Study process
generalized CMOST, 19
T
Tab display, 53
Tables, 54
columns, 55
entering cell data, 54
headings, 55
inserting, deleting and repeating rows, 54
organizing rows and columns, 55
Three-level classical experimental design,
216
Time series
exporting time series data, 161
observer plots, 160
original, 62
user-defined, 64
Troubleshooting, 199
Two-level classical experimental design, 216
U
Uncertainty assessment (UA)
overview, 18
User-defined time series, 64
User guide, about, 15
V
Validating input data
Validation tab, 57
Validation Error Summary, 111
VDR files, 21
Viewing and analyzing results, 155
displaying multiple plots, 155
objective function plots, 163
parameter plots, 157
property vs. distance plots, 162
screen operations, 156
time series plots, 160
Z
Zooming in and out (of plots), 50