
FIELD TRAINING MANUAL

Version 3.0 October 1996

CONTENTS

SECTION 1.  DATALOG SENSORS
SECTION 2.  QLOG - REALTIME DATA ACQUISITION
SECTION 3.  MISCELLANEOUS APPLICATIONS
SECTION 4.  CHROMATOGRAPHY
SECTION 5.  BASIC QNX COMMANDS
SECTION 6.  QLOG / QNX APPLICATIONS
SECTION 7.  ADVANCED QLOG / QNX
SECTION 8.  TROUBLE SHOOTING
SECTION 9.  DATAUNIT PROCEDURES

SECTION 1 - DATALOG SENSORS

1.1  Introduction

1.2  Digital Sensors
     a. Crown Depth
     b. Depth Wheel
     c. Proximity Sensor (Strokes and RPM)

1.3  Pit Level Sensors
     a. Ultrasonic Sensor
     b. Float Sensor

1.4  Mud Sensors
     a. Mud Density
     b. Mud Temperature
     c. Mud pH
     d. Mud Conductivity

1.5  Pressure Sensors
     a. Pump and Casing Pressure
     b. Hookload
     c. Hydraulic Torque

1.6  Electric Torque

1.7  Mud Flow Paddle

1.8  Gas Detectors
     a. Ambient H2S
     b. Ambient Combustibles

1.1 Introduction

The majority of sensors are simple 2 wire sensors: analog sensors providing a 4 to 20mA output, and digital sensors providing a voltage output (4-20mA in the case of a non-intrinsic system). The hook up of the sensors is very straightforward, with standard cables used and wiring instructions detailed with each sensor.

The two wires are:

red    - carrying the power, +24V (+)
white  - signal return (-)

Certain sensors also require a shield, a bare wire. Other sensors are 3 wire sensors, requiring a ground (black). The only 4 wire sensor is the depth sensor (crown).

These variations, together with variations due to whether an intrinsic (elcon) or non-intrinsic DAU is used, will be illustrated with each sensor.

The sensors are hooked up directly to terminals in the main junction boxes. The standard junction box contains terminals for 14 channels; each terminal is configured as shown below. Each terminal is then connected to one multiple cable (the junction boxes are pre-wired for this) which carries the signals from each sensor back to the DAU. The DAU provides the power supply to the sensors.

For the intrinsic system, each channel in the junction box has three terminals: +, - and shield (sh). On the sensor side, the red wire connects to +, the white wire to -, and the black/shield wire to sh. On the other side, the terminals are pre-wired to the multi-core cable which runs back to the DAU.
The non-intrinsic system is slightly different, in that for each channel there are terminals for +v, signal and shield. Every third channel contains a ground terminal.

1.2 Digital Sensors

1.2a Crown Depth Sensor

The crown depth sensor operates on a system of 2 proximity sensors detecting metallic targets that are placed on the fast sheave wheel at the crown.

Measures the movement and direction of the travelling blocks
Voltage output 0-5V (intrinsic); 4-20mA output (non-intrinsic)
Distance from target required < 15mm
LED function on the tubes indicates trigger action for easy & fast hookup
Resolution is set by the software

When placing the targets on the sheave wheel, use as many as possible to give the best depth resolution without affecting the accuracy of the sensor; ie the targets have to be big enough to accommodate both prox sticks at the same time, but there also has to be enough space between the targets to allow for both prox sticks. This positioning should never be marginal, otherwise the performance of the sensor will be affected. The metallic targets are attached to the sheave with silicon glue. The prox sticks have to be positioned at right angles to these targets. The position of the two sensors should be offset in relation to the targets, so that one is activated before the other - this is made possible by the rotating bracket assembly housing the sensors. As the target rotates past the two prox sticks, a sequence of signals is produced allowing the direction to be determined. This sequence is shown below:-

Sensor activation as the target rotates past positions A to E:

Position   S1    S2
A          OFF   OFF
B          ON    OFF
C          ON    ON
D          OFF   ON
E          OFF   OFF
There is therefore a definite sequence activated by the sensors; if the order is reversed, the software determines that the direction has changed.

When installing the sensor, there is a difference between the intrinsic and non-intrinsic types:

A. For the non-intrinsic, the two prox sticks are connected directly to a circuit board, from there to the junction box and on to the DAU (the prox sticks' +, Sig and Gnd wires land on numbered terminals of the circuit board; the junction box side carries the Direction, Pulse, +V and shield connections through to the DAU).

B. For the intrinsic, the two prox sticks are connected to a terminal bar, from there to the junction box, and then connected to the circuit board which is housed at the DAU, before passing through the Elcon module.

(Each prox stick's +, Sig and shield wires go to the terminal bar, and from there to its own +, numbered signal and shield terminals in the junction box.)

Care in installation: Ensure targets are well stuck to the wheel, and that the faces and edges are clean/smooth. Ensure that all nuts are tight and that the proximity sensor and brackets are secure. If there is any free movement, they may become misaligned or damage the targets - this is the most common cause of any problems. Ensure that both prox sticks are activated correctly with the blocks moving in both directions.

Calibration: The blocks should be moved a known distance in both directions. Record the number of pulses counted in each direction by viewing Test Mode. If they are not the same, then probably one sensor is not registering one target in one direction. Repeat this 3 times if possible; if the pulses are consistent, then you have your number for calibration. This number has to be converted to ticks per 100m - this is the calibration figure which is stored in the equipment table.
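For illustration (the figures here are assumed for the example, not taken from any particular rig): if the blocks are moved 10m and Test Mode shows 65 pulses going up and 65 coming down, then

    ticks per 100m = 65 x (100 / 10) = 650

and 650 is the calibration figure entered in the equipment table.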

1.2b Depth Wheel

Only used with non-intrinsic systems
0-10VDC output voltage
3 wire sensor; +v, signal, ground. The shield should also be connected.
Hooks up directly to the geolograph cable
Resolution set by software; 500 ticks/100m in the equipment table
MUST BE accompanied by an ON/OFF BOTTOM SENSOR. This is linked to the driller's air line, which disengages the geolograph when the driller picks up off bottom.

Operational problems may be experienced if the cable does not run smoothly between the wheels. Ensure that the line feeds properly with free movement. If there is a consistent problem with the geolograph line, the calibration may need to be altered from the default 500 ticks/100m (based on the wheel diameter of 0.2m). If the geolograph line becomes slack (this is the rig's responsibility), the tracking of the line will not be good.

1.2c Proximity Sensor

To measure Pump Strokes and RPM (the same sensor is used to make up the crown depth sensor). Outwardly the same for both intrinsic and non-intrinsic systems, although there is a difference due to the different types of barrier used.

0-5V DC output (intrinsic)
4-20mA output (non-intrinsic)
2 wire sensor; +v and signal; the shield should be connected
Activated by metallic target; required distance <15mm
A rotating bracket assembly allows for easy positioning and installation; the correct activation of the sensor can be easily determined by way of an LED facility in the tube.

Care should be taken to ensure the sensors are accurately located and are firmly anchored, with C-clamps and nuts fully tightened. The C-clamps should be greased to prevent seizure.

1.3 Pit Level Sensors

1.3a Ultrasonic Sensor

Compatible with both the intrinsic and non-intrinsic systems
2 wire sensor providing 4-20mA output
Operating range 0.25 to 5.0m
LED display allowing easy calibration

Installation
Ensure that there is a clear path from the sensor to the bottom of the tank, ie that the path is not blocked by any internal pipes in the tank. Avoid positioning towards the edge of the tank if its bottom is angled. Try to position the sensor away from any agitators or flow line entries - this will avoid exposing the sensor to unnecessary agitation on the pit surface. The sensor can be sited directly on to pit grating if necessary, since there is a blanking facility which tells the sensor to ignore any signals from within a certain distance (min 0.25m). This obviously restricts the maximum pit level that can be monitored, so preferably the sensor should be mounted above the grating if at all possible. Ensure that the sensor is firmly anchored to avoid unnecessary vibration from the tanks.

Calibration
The calibration is initially done at the sensor by inputting the 4 and 20mA distance settings:

4mA  - maximum height, ie sensor to tank bottom when the tank is empty
20mA - minimum height, ie sensor to mud surface when the tank is full

This is done by pressing the 4 and 20mA keys on the sensor at the same time, which takes you through the set up menu:

4mA setting
20mA setting
Blanking distance
Averaging

Example:-

The sensor is mounted 0.30m above the grating, the maximum mud level is 0.25m below the grating, and the tank is 2.50m deep below the maximum level:

4mA setting  = 0.30 + 0.25 + 2.50 = 3.05m (tank empty)
20mA setting = 0.30 + 0.25 = 0.55m (tank at maximum level)

The 4mA setting will give you 800 counts in test mode. To determine the counts for the 20mA setting, position the sensor towards a blank surface at a distance of 0.55m. You can then read the number of counts from test mode.

These two count readings are therefore your minimum and maximum calibration values. 800 counts is obviously equal to zero pit volume. The maximum value has to be taken from the rig's pit volumes or your own measurements.
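As an illustration of how these two points are then used (the count and volume figures here are assumed for the example):

    800 counts  = 0 m3 (tank empty, the 4mA setting)
    3600 counts = 55 m3 (tank at maximum level, volume taken from the rig's pit data)

A reading of 2200 counts would then correspond to (2200 - 800) / (3600 - 800) x 55 = 27.5 m3.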

NOTE OF CAUTION - the rig pit volumes are often determined to the level of the grating. This is not necessarily the maximum mud level. You should be sure of this before determining the 20mA setting.

Blanking distance - can be set anywhere between 0.30 and 0.55m in this example. You are simply eliminating any responses from the grating.

Setting the Average
There are 3 settings:
1 - high smoothing
2 - low smoothing
3 - takes every reading

The 3rd setting is never used, because this will use bad as well as good readings (bad readings may be caused by an agitated surface sending a signal back at an angle, rebounding off walls etc, and are therefore not true, accurate readings). Settings 1 and 2 will not accept these bad signals - the sensor will wait for the next good signal (ie vertical) before updating its reading. Normally, setting 1 is used. However, for the trip tank, a more rapid response is required because of the greater resolution, ie a lower volume per height change - setting 2 should therefore be used.

1.3b Float Sensor


Versions for intrinsic and non-intrinsic systems
4-20mA output (2-wire) non-intrinsic version
0-10VDC output (3-wire) intrinsic version
1/2 resolution on the float probe

The float sensor works by a series of magnetic switches, housed inside the tubing, being activated/de-activated as the float passes by them.

Installation: position the sensor so that the float has uninhibited movement to the bottom of the tank. Locate it away from agitators and flow inlets to avoid exposure to a turbulent mud surface. Make sure that the probe is firmly anchored at top and bottom.

Calibration: as with the ultrasonic, maximum and minimum heights, and the equivalent pit volumes, need to be determined. Note that the diameter of the float has to be considered here; also that there is a stopping clip preventing the float reaching the bottom of the probe, so the minimum height is at the stop clip rather than the tank bottom; also, the density of the mud will determine how far the float sinks in the mud (it will ride lower in a low density mud than in a high density mud).

During operation, you should regularly clean the probe and float in order to prevent the float from sticking. This is especially important if using water based mud that will tend to dry, or cake, on the assembly. If the mud level is static for a period of time, it is quite likely that the float will stick at that position - simply check the floats regularly and prod with a bar to ensure that there is free movement.

1.4 Mud Sensors

1.4a Mud Density

For use with both intrinsic and non-intrinsic systems
2 wire sensor giving 4-20mA output

The sensor works by recording the differential pressure between 2 diaphragms set a known vertical distance apart. This can then be converted to a density.
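The conversion follows from the hydrostatic relationship; as a rough worked example (the figures here are illustrative, not the actual diaphragm spacing of the sensor):

    density (kg/m3) = differential pressure (Pa) / (9.81 x diaphragm separation (m))

eg a separation of 0.3m and a differential pressure of 3.5 kPa gives 3500 / (9.81 x 0.3), or roughly 1190 kg/m3.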

Calibration:- this is preset:

4mA  - 500 kg/m3
20mA - 2500 kg/m3

These values can be entered into the calibration file and then checked against the value determined by the mud engineer.

Installation/Maintenance:- ensure that the height is set so that both diaphragms are completely immersed in the mud. Avoid positioning the sensor where there could be mud flow directly into the top of the protective casing. For the density out sensor, regularly check the sensor for cuttings building up inside the casing and covering the diaphragms. Clean the sensor regularly, but do not use a high pressure hose directly onto the diaphragms - this will damage them.

1.4b Mud Temperature

Intrinsic system: 2 wire, giving 4 to 20mA output

Non-intrinsic system: 3 wire, including a ground. The sensor works on a resistance range (100 to 138.5 ohms), which is converted by the 1072T Elcon module into a 4 to 20mA output.

Calibration:- this is preset at the range 0 to 100°C. This can be entered into the calibration file and then checked against actual measurements.

Installation/Maintenance:- much the same as for mud density. Make sure that the sensor is kept clean, and ensure that the temperature out sensor does not become buried in cuttings.
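Since the 100 to 138.5 ohm span corresponds to 0 to 100°C, a quick check of the conversion (treating the response as linear, which is close enough for a field check) is:

    temperature ≈ (R - 100) / 38.5 x 100

eg a resistance of 119.25 ohms gives (19.25 / 38.5) x 100 = 50°C.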

1.4c Mud pH

For use with both intrinsic and non-intrinsic systems
2 wire sensor giving a 4 to 20mA output
Operating range 7 to 14 pH

The sensor is temperature compensated by a thermistor. This means that our reading may differ from that recorded by the derrickman or mud engineer - theirs will not be compensated.

Calibration
This should be done quite regularly, once a week if possible, and requires the use of buffer solutions and a loop calibrator.

Disconnect the cable from the sensor and connect a 4-20mA loop calibrator.
Ensure that the sensor is clean and place it in the pH 7.0 buffer solution.
Allow 30 seconds for the reading to stabilize and adjust the calibrator to read 4mA.
Rinse the sensor in distilled water before placing it in the pH 10.0 buffer solution.
After stabilization, adjust the calibrator to read 10.86mA.
Repeat this process, maybe 3 times, until no further adjustment is necessary.
Reconnect the sensor to its cable then, in turn for each buffer solution, record the number of counts displayed in Test Mode.
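The 10.86mA figure follows from the linear spread of the 4-20mA output over the 7 to 14 pH operating range:

    mA = 4 + 16 x (pH - 7) / (14 - 7)

so for the pH 10.0 buffer, 4 + 16 x 3/7 = 10.86mA.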

Care and Maintenance
The electrode must always remain wet; therefore during transportation or periods of downtime at wellsite, place the cap (it should contain pH 4.0 solution) over the electrode. The electrode is very fragile; never clean it with a hose, but wash with a damp cloth; make sure the protective guard is always in place, even during calibration. As with density, you will have to clean the sensor regularly to avoid mud caking on the electrode, and to prevent cuttings building up inside the protective guard. The sensor is prone to grounding from metallic contact. You may have to cover the probe, C-clamp and bracket (or wherever there is a chance of contact with metal) with insulating tape. NOTE - the presence of H2S will produce a drop in pH.

1.4d Mud Conductivity

For use with both intrinsic and non-intrinsic systems
2 wire sensor giving a 4 to 20mA output
Operating range 0 to 100mS

Calibration:- preset at the above range; these are your minimum and maximum values.

Installation/Maintenance:- as with the other mud sensors.

Notes on Operation
The operating range is small. A saltwater mud will be outside of this range. Oil based mud is obviously non-conductive, so you will get a zero reading. Changes in conductivity will be produced by changes in the salinity of the mud; influxes of formation fluid into the well bore can therefore be detected. Whether that change is an increase or a decrease will depend on the initial salinity of the mud, and the salinity of the formation fluid.

1.5 Pressure Sensors

These sensors are used to measure pump pressure, casing pressure, hookload and hydraulic torque. There are different types of transducer used (Rosemount and Druck) at several different pressure ratings. Not all will be illustrated here.

1.5a Pump or Casing Pressure - 0 to 5000 psi

TYPE A - Druck
Can be used in intrinsic & non-intrinsic systems
0 to 10,000 psi span also available
2 wire sensor, providing 4-20mA output
Installed in a NEMA type 12 enclosure for protection against dirt, dust, oil & water
Zero & span trim pots for fine adjustments

TYPE B - Rosemount
Can be used in intrinsic & non-intrinsic systems
Measures 0-6000psi (calibrated to 0-5000psi)
Adjustable from 0-1000psi to 0-6000psi
2 wire sensor, 4-20mA output

Installation:- simple male/female hydraulic connectors, connected to a knock on head on the standpipe manifold or choke manifold (depending on whether pump or casing pressure is being measured).

Calibration:- the transducers are pre-calibrated; whatever the upper range is should be entered as the maximum calibration (20mA). This can be compared, and adjusted if necessary, against the rig's own measurement.

Operation:- A leak in the hydraulic system will obviously show up as falling pressure. Should air or mud get into the hydraulic system, the compression characteristics of the hydraulic fluid will be affected - this will show up as slow responses to pressure changes and lower values recorded. If a leak is detected, it will have to be determined which part of the hydraulic system is leaking: if our hydraulic hose is teed into the rig's, either system may be responsible for the leak; the leak may be at the connectors. The knock on head may also have to be investigated - there may be a leak in the diaphragm. The driller will usually have to participate in this investigation.

Similarly, if air or mud has got into the hydraulic system, the fault will most usually stem from the diaphragm in the knock on head. But whatever the source, the hydraulic fluid in the hose will have to be replaced. Firstly, when there is no pressure on the manifold, disconnect the sensors hydraulic hose. Test for air by applying pressure to the nipple on the male fitting; if there is air in the system, the fluid/air will be released under pressure (take an umbrella!). If there is no contamination, the hydraulic fluid will just flow out gently.

If there is contamination, the hydraulic system will have to be purged and reprimed. Keep pressure on the nipple until all the hydraulic fluid has escaped. Using a priming pump, refill the system until you cannot push the pump any further. Gently press the nipple to ensure that there is no air in the hose; prime with the pump again; repeat if necessary.

1.5b Hookload Sensor

Can be used with intrinsic and non-intrinsic systems
Span of 0-4000psi, although calibrated 0-1000psi
Range adjustable from 0-400psi to 0-4000psi, to accommodate rigs of different rating
0-1000 psi span also available
2 wire sensor, providing 4-20mA output

Installation:- simple male/female hydraulic couplings; connected (often teed into the rig's own sensor) to the load cell attached at the dead line anchor.

Calibration:- the sensor is measuring pressure, but obviously the reading we want is the stringweight or hookload. The calibration will have to be determined from Test Mode. The minimum number of counts will be when only the blocks are suspended by the drill line. The maximum number of counts will be when the string is lifted out of the slips. The larger the string weight, the more accurate the calibration will be. Therefore, if we were setting up at the start of a well, the hookload will have to be repeatedly recalibrated as the string weight increases with drilled depth.

Operation:- Leaks or contamination will obviously have the same effect as already described. Whilst drilling, this will show up as a continually increasing WOB (since the pressure on the hookload is decreasing); at connections etc, when the string is lifted, you will notice a drop in the hookload.
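As an illustration of the two-point calibration (all figures assumed for the example):

    blocks only suspended:          1000 counts = 40 klbs  (low calibration)
    string lifted out of the slips: 2600 counts = 120 klbs (high calibration)

A later reading of 2200 counts then equates to 40 + (2200 - 1000) / (2600 - 1000) x (120 - 40) = 100 klbs.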

1.5c Hydraulic Torque

Can be used in intrinsic & non-intrinsic systems
Measures 0-1000psi (non-adjustable)
2 wire sensor, providing 4-20mA output
Installed in a NEMA type 12 enclosure for protection against dirt, dust, oil & water

Installation:- direct to the rig's hydraulic supply.

Calibration:- zero torque for 800 counts; the maximum calibration will have to be determined from the number of counts in test mode given by a particular torque, which will have to be taken from the rig's recorded value. The greater the span we have for the calibration, the more accurate the calibration will be.

1.6 Electric Torque

Can be used with intrinsic & non-intrinsic systems
Measures 0-1000A DC
4-20mA output, 3 wire sensor requiring a ground

Installation/Operation:- the clamp should be placed around the main power cable to the rotary table. It is therefore measuring the induced magnetic field, from which the current is automatically determined. It must be perpendicular to the cable and placed the right way around in relation to the current flow - this should be indicated on the clamp itself.

Calibration:- can only really be calibrated by reading the counts in Test Mode produced by a given current - this should be determined from the rig's measurement. The minimum, 800 counts, is obviously 0 Amps. If you are required to give measurements in ft-lbs or Nm, you will have to obtain a conversion table or graph from the rig and enter the conversion factor in the equipment table. For increasing current, the conversion is non-linear and different for each rig, depending on the make up of the equipment.

1.7 Mud Flow Paddle

Although outwardly the same, there are different models for the intrinsic and non-intrinsic systems.

Non-intrinsic: 2 wire sensor giving 4-20mA output
Intrinsic: 3 wire sensor requiring a ground wire. The sensor operates on a 0-2K potentiometer span producing a 0-10V DC output. This is converted by the 1072P Elcon module to give a 4-20mA output.

Installation/Operation
The paddle should be placed in the flowline and the length of the paddle adjusted to accommodate the depth of the flowline; ie while the paddle is in its undeflected position, the tip of the paddle should just barely be touching the bottom of the flowline - this ensures that minimal flow can be registered. When there is mud flow to be recorded, you may have to adjust the positioning of the counterbalance to give acceptable readings. For example, if you were getting a very erratic reading, you would move the counterbalance further out on its arm to give more resistance to paddle movement. Different density muds may also require adjustment of the counterbalance; for example, the counterbalance may be set so that it gives too much resistance for a lightweight mud to deflect it sufficiently - it may have to be removed altogether. For a heavy mud, there may be too much deflection, so that the counterweight has to be moved outwards to give more resistance and force the paddle to sit in the mud. Keep a regular check on the paddle for cuttings build up in the flowline - this may bury the paddle, preventing movement.

Calibration
When the paddle is at the point of no deflection, record the number of counts in Test Mode - this will be your low calibration, representing zero flow. When you have flow being recorded, again record the number of counts and see what the calculated mud flow in is - this will be your high calibration. Because of the non-linearity of the flowline and paddle with regard to flow, any significant change in flow (for example, when the hole size changes, or when there are lower flow rates during coring) will require the high calibration to be reset.
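As an illustration of the two calibration points (counts and flow rate assumed for the example):

    paddle at no deflection:                         950 counts = 0 m3/min (low calibration)
    circulating, calculated flow in of 1.6 m3/min:  2100 counts = 1.6 m3/min (high calibration)

Mud flow out is then scaled between these two points until the flow rate changes significantly and the high calibration is redone.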

1.8 Gas Detection Sensors

1.8a Ambient H2S Sensors

Non-intrinsic version
2 wire sensor, giving 4-20mA output
Minimum 2 years sensor life
Installed in weather proof enclosure for added protection
Stainless steel dust cover for protection of the sensor against dust, oil, or wind

Intrinsic version
2 wire sensor, giving 4-20mA output

Installation
The sensors should be located at critical areas where the mud returns to surface, ie at the bell nipple, in the cellar, at the shakers and/or flowline etc. Since H2S is heavier than air, the sensors should be located close to ground level and away from any areas where there is a possibility of water etc coming into direct contact with the sensor.

Calibration
The sensors are pre-calibrated for an operating range of 0 to 100 ppm. These should be your minimum and maximum calibration values. Check this by exposing the sensor to your calibration gas (normally 50ppm). Ensure that the gas is properly reaching the sensor and allow a long enough exposure for the sensor to fully react. Checking that the sensor is responding is normally sufficient; the calibration should not need to be altered. If you do not get the reading expected, it is not necessarily the calibration that is at fault. H2S calibration gas has a limited life span in retaining its original concentration - this may well have dropped rather than the calibration being inaccurate. Be certain that the gas concentration is accurate before changing the calibration - ie check with a different gas sample, and check the concentration against a different H2S meter if possible (mud engineer, safety representative).

1.8b Ambient Combustible Sensor

Intrinsic and non-intrinsic versions
Measures 0-5% combustible gas
Both versions are 3 wire sensors, providing a 4-20mA output
Catalytic (platinum bead) sensor head

Installation
As with the H2S sensors, these sensors should be placed at critical areas where the mud returns to surface. The sensors are detecting combustible gases, the most important of which is methane. Methane is roughly half as dense as air, so the sensor should be placed high. Preventing exposure to water etc is again an important consideration.

Operation and Calibration
As mentioned, the sensor is detecting combustible gases, principally methane. Methane has a Lower Explosion Limit (L.E.L.) of 5% by volume. This means that if the concentration is below 5%, it cannot be ignited. What is important to the rig is when the concentration exceeds the L.E.L., so that there is then a chance of the ambient air igniting. The sensor is therefore spanned from 0 to 5% by volume. These can be used as your calibration limits (4 to 20mA), so that you would be measuring the actual concentration by volume. Alternatively, you can calibrate the sensor so that you are measuring in terms of the L.E.L. In this case, your minimum calibration would be 0 and your maximum would be 100%, ie 100% of the L.E.L. If your reading should reach 100%, you know that your actual concentration of methane in air is 5%.
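To make the two calibration conventions concrete (a worked example using the 5% L.E.L. figure above):

    calibrated 0 to 5% by volume: a reading of 2.5% means 2.5% methane in the ambient air
    calibrated 0 to 100% L.E.L.:  the same gas reads 2.5 / 5 x 100 = 50% of the L.E.L.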

SECTION 2 - QLOG - REALTIME DATA ACQUISITION

Rev C - October 1996

CONTENTS

2.1  INTRODUCTION TO THE QLOG SYSTEM

2.2  ADMINISTRATOR PROGRAMS

2.3  INTRODUCTION TO THE QLOG MENU

2.4  SETUP MENU
     a. Configuring Channel Numbers
     b. Unit and Decimal Place Selection
     c. Calibration of Sensors
     d. Creating Displays
     e. Printer Controls
     f. Invert Binary Sensors
     g. Override Sensors
     h. Pit Setups

2.5  REALTIME MENU
     a. Displays
     b. Setting Alarms
     c. Controls
        i.    Depth Adjustments
        ii.   Equipment Table
        iii.  Ream Mode
        iv.   Profiles (casing, hole and pipe dimensions)
        v.    Pump Data
        vi.   Realtime Zeros
        vii.  Slip Thresholds
        viii. Test Mode - external sensor signals
        ix.   Set Line Wear
        x.    Set WOB
        xi.   Trip Mode
        xii.  Chromatograph

2.6  DATABASE MENU
     a. Depth and Time Databases
     b. Database Cell References
     c. Lithology Editor
     d. Accessory Symbols
     e. Bit Database
     f. Survey Database
     g. Well Data

2.7  REPORTS MENU
     a. X-Y-Z Plots
     b. Plot Configuration
     c. Plotter Setup
     d. Starting and Stopping Plots
     e. Defining User Ratios
     f. Schematic Hole Profile

2.8  ENGINEERING MENU
     i.    Drillstring Design
     ii.   Hydraulics Optimization
     iii.  Drilling Optimization
     iv.   Pump Output
     v.    Kick Kill
     vi.   Stuck Pipe
     vii.  Directional Analysis
     viii. Casing Design
     ix.   Maximum ROP
     x.    Leak Off Test
     xi.   Surge Swab
     xii.  Pressure Test
     xiii. Rheogram

2.9  GEOLOGY MENU
     a. Ratio Analysis - Pixler gas ratio plots
     b. Coal Bed Methane analysis program
     c. Overburden Gradient
     d. Formation Pressure and Fracture Gradient
     e. Calcimeter

2.10 OTHER MENU
     a. Communications
     b. Spreadsheet
     c. Use of Penpal, the Word Processor
     d. Utilities - various system information programs
     e. Unit Converter
     f. Help Files
     g. Use of the Editor

2.1 INTRODUCTION TO THE QLOG SYSTEM


QLOG is the realtime data acquisition system developed by Datalog. QLOG is designed to be user friendly and easy to understand. It uses a menuing system which can be operated from either normal consoles or from a windows interface. From this menu system, the entire realtime system can be accessed and operated with no command line input necessary.

A number of 'virtual consoles' can be mounted, allowing the user to quickly change between full size screens (usually around 5, but the number can be changed). The first screen is real and the others are virtual consoles, accessed by use of the ctrl, alt and 1, 2 or 3 or enter keys held down together. Any program can be operated from any console. This means that several programs and/or displays can be operated simultaneously, with immediate access by the user, allowing for an ideal logging/monitoring environment. The windows interface also allows for several pages of realtime information to be viewed at one time, with the added benefit that realtime screen plots can be monitored at the same time. The mouse provided allows for 'point and click' access to the menu system.

An integral part of the QLOG system is a comprehensive catalogue of help files. These files allow new users to quickly understand the system, with instructions and explanations on how all the different programs and functions are operated. These individual help files can be accessed simply by pressing F1 while within a QLOG program. The complete catalogue of help files can be accessed by selecting the Help Files option in the Other menu.

When the computer system is first switched on, you will be asked to login and enter a password. The purpose of logins and passwords is system security and allowing different levels of access to the system for different users. User accounts need to be set up by users (superusers) already on the system. Once logged in, you can type qlog at the command prompt to enter the QLOG menu. In QNX, the computer operating system, the prompt will be either a $, # or %, depending on the security access given to you when your user account is first set up. A $ sign signifies a superuser, a user that has complete access to the system; all field mudloggers should have superuser access. To get a login prompt from a completely blank screen (ie no prompt), press ctrl-z.

The QLOG system is designed so that a number of users can use the system at any time and carry out operations completely independently of any other user on the system. Individual users are able to select their own preferred units with which they want to work, and they are able to select their own alarm set points, with the alarms only being activated on consoles that that particular user is logged on to. The system is therefore ideal for a wellsite network where the loggers, engineer and geologist etc can all operate the system to their own specifications without affecting each other.

All programs required to run the QLOG Data Acquisition System in the field are found through the main menu. All similar programs are grouped together under 7 main headings:

Realtime   Reports   Database   Engineering   Geology   Other   Setup

Each of these headings has submenus. To move around the menu, simply use the arrow keys to move horizontally or vertically. A particular program is then accessed by pressing 'enter' with the cursor over the heading. To escape from a particular menu or sub-menu, use the horizontal arrows to take you to the adjacent menu header, or use the esc key to take you back up the menu structure you are already in. When you are positioned on a menu header, pressing esc will allow you to exit QLOG and put you at a QNX command prompt. The '...' following some menu items indicates that a further submenu exists.

Some menu items are displayed in red when viewed from a normal text console. This means that the particular program contains graphics and has to be run from the windows interface. To start this interface, type windows at a command prompt. The QLOG menu will normally appear automatically, but if not, press the right hand button on your mouse and select Programs and then QLOG from the menus that appear on screen. The QLOG menu will then be created at the top of the screen. The operation of windows will be dealt with more fully later in this section.

Certain F keys have standard uses throughout the programs in the QLOG system:

F1 - help
F4 - exit without saving changes
F7 - to save, proceed or calculate
F8 - to create plots of program data

There are some exceptions with some programs requiring an H for help. The Word processor (penpal) and Spreadsheet packages have their own on-line help.

CHARTS AND LOGS

Logs can be depth based or time based, using either of the databases mentioned above. This enables both realtime and historical data to be plotted by depth or time increments. Logs can be plotted at 1:240, 1:500, 1:600 and 1:1000 scales. All logs are completely configurable in the data they contain, track width, position, scales and whether black and white or colour. This therefore allows clients to design their own logs/charts if they so require, or for the loggers to tailor logs to the exact specifications of the client.

2.2 ADMINISTRATOR PROGRAMS

For the entire QLOG system to be operated correctly, a number of administrative tasks need to be running. This enables different parts of the system to operate and enables interaction between the different components of the system. These tasks need to be started from a command line when the system is first booted up and the user logged in. The tasks are run by putting an ampersand (&) after the task name, which means that they are run as 'background tasks' without output to the screen (see Basic QNX Commands). This frees the consoles, enabling other programs to be run.

dau_admin &        administers the realtime data collection and also starts the share
                   administrator, which handles data calculation

dbadmin &          database administrator
or dbadmin d=4:/datalog/dbms &
                   the first command will create both time and depth databases in
                   3:/datalog/dbms, whereas the second command will create the depth
                   database in 4:/datalog/dbms on a partitioned hard drive. This is
                   the normal situation and command to use to start the administrator.

m200admin &        administers the chromatograph

plot_admin &       administers the plotting software

ESSENTIAL PROGRAMS

dbdepth &          talks to the realtime system (dau_admin) and saves depth data
                   through dbadmin
                   NOTE that all 3 of these administrative tasks have to be running
                   in order for data to be stored

dbtime &           as above, but for time based data

convert &          conversions, allowing different users to use different units

upd_prof &         updates hole and pipe profiles realtime as depth increases,
                   allowing the correct depth and profiles to be saved if the system
                   should crash or need to be rebooted

flowalarm &        sounds an alarm should the suction on the gas line become reduced
                   or blocked

hotback [2]3:/ &   copies all depth and time database data to a second node so that
                   a continuous backup is kept
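For reference, a typical start-up sequence typed at the command prompt after booting and logging in might therefore look like this (a sketch only - the dbadmin form and the hotback node/drive depend on the particular installation, as described above):

dau_admin &
dbadmin d=4:/datalog/dbms &
m200admin &
plot_admin &
dbdepth &
dbtime &
convert &
upd_prof &
flowalarm &
hotback [2]3:/ &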

The command dau_kill typed on the command line will produce a list of the current tasks running. The same dau_kill command, followed by the task name, is used in order to stop a task, but the user should note that different names have to be used when killing these administrators, ie:

dau_admin    ->  DAUadmin
dbadmin      ->  DBadmin
plot_admin   ->  PLTadmin
dbdepth      ->  DBdepth
dbtime       ->  DBtime
convert      ->  converts
hotback      ->  Hotback

The m200admin name remains the same.

upd_prof and flowalarm also remain the same, but do not need the dau_kill command to stop them. They are stopped by using the slay command:

slay upd_prof        for example
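For example, to stop and restart the depth database saver (a sketch using the names described above):

dau_kill             list the current tasks
dau_kill DBdepth     stop dbdepth, using its kill name
dbdepth &            restart it as a background task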

2.3 INTRODUCTION TO THE QLOG MENU

Any menu items that are listed here in italics will appear red in the QLOG menu, meaning that they can only be run from the windows interface. Here, each menu will be briefly described to illustrate its main usages. The operation of individual programs will then be described in more detail, from the viewpoint of the user arriving at wellsite, having to set the system up from scratch and then operating it in a realtime environment.

REALTIME
The realtime menu holds the most often used QLOG programs from a mudlogger's point of view. This menu contains the majority of programs required to run the system on a realtime basis. It is where, for example, displays are activated; parameter alarms set; depth adjustments made; realtime constants stored; annular profiles stored; the chromatograph calibrated and controlled; and the trip monitor started.

Displays...        Text Display
                   Windows Display
                   Historical
                   Historical and Real
Alarms...          Personal
                   Horn
Controls...        Depth Adjustments
                   Equipment
                   Ream Mode...        on
                                       off
                   Profiles...         Hole
                                       Pipe
                                       Casing
                   Pump Data
                   Realtime Zeros
                   Slip Thresholds
                   Test Mode
                   Set Line Wear
                   Set WOB
                   Trip Mode...        Trip Mode
                                       Cancel Tripmode
                   Chromatograph...    Setup
                                       Calibrate
                                       Tweak Calibrations
                                       Configuration Sheet
                                       Example Chromatogram

REPORTS
This is where logs and realtime plots can be designed, started and stopped; report style plots activated; and calculated ratios defined by the user.

X-Y-Z Plots
Plotter...         Configure
                   Plotter Setup
                   Start Plotter
                   Plot Info
Define Ratios
Hole Profile

DATABASE
Simply, this is where the user gains access to all the different databases stored on the system; where the user accesses the windows lithology editor; and where accessory symbols can be modified or created.

Edit...            Databases
                   Lithology
                   Accessory Symbols
                   Bits
                   Surveys
                   Well Data

ENGINEERING
This is where all manner of engineering and hydraulic programs are run from. Some are completely offline, others will access the realtime system.

Drill String Design...       Maximum WOB/Neutral Point
                             Maximum Torque
                             Drill Pipe Collapse
                             Critical RPM
Hydraulic Optimization...    Current Profiles
                             New Profiles
Drilling Optimization...     Bit Parameters
                             Drill Off Test
                             5 Point Test
Pump Output
Kick/Kill
Stuck Pipe...                Determine Depth Stuck
                             Determine Sticking Mechanism
Directional Analysis
Casing Design
Maximum ROP
Leak Off Test
Surge Swab
Pressure Test
Rheogram

GEOLOGY
This is where specific logging or geological analysis/calculations are performed, including gas ratio analysis; formation pressure analysis; calcimetry; and coal bed methane calculations.

Ratio Analysis
Wireline Analysis...         Induction
                             Neutron Density
                             GR Sonic
                             Hard Rock
                             Thermal Neutron Decay
                             Dipmeter
                             Water Saturation
                             Mineral Analyzer
Coal Bed Methane
Pressure Analysis...         Overburden
                             Overpressure
Calcimeter

OTHER
Here, the user has access to a number of miscellaneous programs such as communication programs; spreadsheet, word processor and editor; QLOG help files and a unit conversion program.

Communications...  APB
                   Beep
                   Chat
                   Mail
                   Who is Online
                   Qterm/Modem
Spreadsheet
Word Processor
Utilities...       System Activity
                   Drive Usage
                   Task Display
Unit Converter
Help Files
Editor

SETUP
This is where the main components of the system are set up, primarily at the start of a job. It includes such things as sensor configuration and calibration; parameter units and decimal place settings; the design of display screens; defining peripheral printers; and defining pit system totals.

User Unit Preferences
User Decimals
System Unit Preferences
System Decimals
Analog Calibration
Create Display
Printer Controls
Sensor Configuration
Invert Binary Sensors
Override Sensors
Configuration Sheet
Pit Setups

2.4 SETUP MENU


When a user first arrives at wellsite at the start of a new job, sensors will have to be installed and calibrated, and the system itself will have to be configured ready for use. This means that channels have to be configured for each sensor so that the signals can be read and processed; the required units of measurement and decimal places for each parameter have to be selected; display screens need to be created or adjusted to suit the particular job; printers have to be defined so that the computer can communicate with them; and individual pit totals need to be defined. This is all done with programs accessed in the Setup menu.

2.4a Configuring Channel Numbers

This, as a rule, will be done by technicians before a system leaves the workshop. However, the mudlogger needs to be fully familiar with this operation, because they will often be referring to it at wellsite:

when rigging up the system
when installing any extra sensors requested
when troubleshooting any lost signals

The purpose of this process is to assign a particular channel number on the computer to each individual sensor. This number will relate to a particular channel number on the DAU or ELCON board, which, in turn, will relate to a particular channel number in the junction box/boxes that the sensors are connected to. The numbers at each of the 3 stages are not the same, but the correlation is always the same, ie there is a standard configuration. This is determined by the engineer by way of a configuration sheet. This will either be already completed by a technician prior to rig up, or can be completed by the engineer as each sensor is wired into the junction box. With the channel in the junction box then known, the engineer simply needs to follow along the line to determine the channel number to assign to the computer.

For a full data unit, there are presently 3 standard configurations (in terms of number correlation, rather than the particular sensor that may be connected to that channel) in use. Any variations on this will be fully detailed by a technician before a system leaves the workshop. The 3 standards apply to the following different DAU types:

1. An Elcon barrier system, where intrinsically safe barriers are utilized.
2. Standard DAU system, where circuit cards are used for each sensor. There is an older and a newer configuration, bringing the total to 3.

If CONFIGURATION SHEET (program name cfgsheet) is accessed from the Setup menu, the standard sheet that is illustrated is the older DAU configuration. Unfortunately, the main configurations now being used will be one of the other two, depending on where in the world you are working - ie depends on safety requirements.

Each configuration is therefore included at the end of this section, for the user's reference, and will be fully explained at that point. Here, part of a typical configuration for an Elcon Barrier system is shown to illustrate the determination of the channel number.

Digital     Analog      Card   Card    Junction Box      Signal Name
Channel     Channel     Slot   Type    No:   Chan
---------------------------------------------------------------------------
1 (IRQ1)                1      1842    1     1            Depth Pulse
9 (BIN1)                1      1842    1     2            Depth Direction
2 (IRQ2)                2      1842    1     3            Pump 1
3 (IRQ3)                2      1842    1     4            Pump 2
4 (IRQ4)                3      1842    1     5            Pump 3
5 (IRQ5)                3      1842    1     6            RPM
                        4      1882                       System Power (spare)
                        4      1882                       System Power (torque)
            4 (20)      5      1012    1     7            Analog (spare 3 wire)
            3 (19)      5      1012    1     8            Torque (electric)
            1 (17)      6      1022    1     9            Hookload
            2 (18)      6      1022    1     14           Analog (spare)

Digital Channel - CPU channel number for a digital sensor
Analog Channel  - the first number relates to the channel number on the Elcon circuit board terminals; the number in brackets is the CPU channel number for an analog sensor
Card Slot       - the number of the Elcon barrier (note that each barrier carries two channels)

In this example, Pump 3 is wired into terminal 5 of junction box 1. This correlates with the first channel of the 3rd Elcon barrier (card slot), which in turn correlates with Digital Channel 4 on the CPU. Likewise, Hookload is wired into the 9th terminal of junction box 1, correlating with the first channel of the 6th Elcon barrier, in turn correlating with Analog Channel 17 on the CPU.

Pump 3 therefore has a channel number 4, and hookload is channel number 17. It is these numbers that should then be entered into the SENSOR CONFIGURATION program in the Setup menu (program name aconfig).

On entering the program, the user is placed into the menu for analog sensors. The sensor is selected by moving the cursor with the arrow keys and pressing F7. The following information needs to be entered for each sensor:

Board   - the number of the DAU or Elcon board. Typically 1; it will only change if more than one is used. A zero would disable that particular sensor if it is not being used.
Channel - the number as determined above for each sensor

AvgSize - this is a dampening effect applied to allow for different time durations over which a change in sensor value will stabilize. Each sensor is sampled by QLOG 11 times a second. Applying a factor, or averaging, of 100 means that the final signal will be the average of the last 100 samples. Thus, if a particular sensor value changed from 10 to 20, it would take 100/11 or 9.1 seconds for that signal to change or stabilize. Likewise:

Avg of 50   - 4.5 seconds for the signal to stabilize
Avg of 20   - 1.8 seconds
Avg of 500  - 45 seconds

An erratic signal caused by MFO, for example, may require a higher averaging, whereas Triptank, requiring a more rapid response, should have a lower value.

Sensor Type - all are current, except for the CC/TCD detectors and Block Temperature, which are voltage.

The F2 key is used to toggle between the analog and digital sensors. For the digital sensors, only the board number (typically 1) and channel number (as described) need to be input. State refers to the binary sensors (depth direction and flow switch) which are either on or off, 1 or 0.

2.4b Unit and Decimal Place Selection (for each parameter)

Parameters are stored internally, by default, in metric units by the QLOG system. This cannot be changed. What can be changed are the system units (the units that are used by system programs)* and individual users' units. These are the units that will be displayed over the entire system when a particular user is logged into the system.

* system units can be changed, but it is not necessary and not recommended.

SYSTEM UNIT PREFERENCES (program name userprefs)
There is no need to change these settings, because they have no bearing on user operation. They are the units that system programs use, and because of the convert program it makes no difference what actual units are being used at wellsite. The units are stored in a configuration file, 3:/datalog/config/units.cfg.

USER UNIT PREFERENCES (program name as for system preferences)
The QLOG system allows different users to have different unit selections and to work on the system at the same time, independently (note that the convert administrator task needs to be running). These units will be the ones displayed in displays, database, plots and logs etc whenever that user is logged on to the system. Every parameter in the database, whether directly measured or calculated, can be selected from the program menu. The parameter is selected by moving the cursor with the arrow keys and using F2 or F3 to go back and forth through the pages. Press F7 to bring up the unit options available, select the unit with the arrow keys and press F7 again to save.

When a user makes a change as described above, the units file will automatically be saved in their home directory (eg 3:/user/fred/units.cfg). This file will then be read automatically every time that user logs in to the system.

If principally metric or imperial units are going to be selected, it may be more convenient to use one of the default files stored on the system, rather than having to make many changes. The files are stored as:

3:/datalog/defaults/units.cfg.met
3:/datalog/defaults/units.cfg.imp

The desired file simply has to be copied to units.cfg in the user's home directory. Note that permissions may have to be changed for the user to use this file - see the attributes section in Section 7 of this manual.
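For example, to give the user fred (the example user above) the metric defaults, the copy would look like this:

cp 3:/datalog/defaults/units.cfg.met 3:/user/fred/units.cfg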

SYSTEM DECIMALS (program name ranges)
These settings have to be correct in order for the correct values to be saved in the database. This setting is the degree of accuracy to which a particular parameter is recorded. There are particular values for different types of parameter, and if the setting is not correct, an incorrect value will be stored. For example, the correct setting for ROP is 3 decimal places in order for the correct value to be recorded. If this setting was changed to 2, the ROP value stored would be a factor of 10 greater. If it was changed to 1, the value would be a factor of 100 greater. The correct settings are stored as defaults on the system (3:/datalog/config/dp.cfg) and under no circumstances should they be altered. Because of this, the user cannot gain access to this program from the QLOG menu.

USER DECIMALS (uranges)
These decimal settings are particular to each user and can be changed if required. These values affect the degree of accuracy to which parameter values are displayed, and do not affect the values that are stored by the system. Settings may need to be changed so that values displayed are sensible. For example, if C1 is being measured in %, the decimal setting should be set to 4 so that the value displayed is accurate to 1ppm (ie 0.0000%). However, if C1 is being measured in ppm, a setting of 4 decimal places would obviously be meaningless. In this case, the setting should be set to 0, so that only the whole number is displayed. Note that the user decimal value cannot be greater than the default system decimal value.

Again, if changes are made to this file, it will automatically be saved in the user's home directory as user_dp.cfg and accessed every time the user logs in to the system. If the default metric/imperial user unit files were to be used as described above, then recommended user decimal settings for each are also stored on the system and could be copied to the user's user_dp.cfg in the same manner as described above. The files are called:

3:/datalog/defaults/user_dp.cfg.met
3:/datalog/defaults/user_dp.cfg.imp

The program menu is operated in the same way as the user unit program, with the exception that the user has to type in the desired decimal setting.

2.4c Calibration of Sensors

When the sensor channel numbers have been configured and the desired units selected, the user is ready to calibrate the sensors. This is done through ANALOG CALIBRATION (calib) in the Setup menu. Here, the general method is illustrated; specific calibration techniques are discussed at the end of this QLOG section.

Only the analog sensors need to be calibrated; the digital sensors just detect pulses. The analogs operate on a 4 to 20mA range - this is the minimum and maximum signal. QLOG converts the milliamperage into a number of counts for calibration purposes. Typically:

on a normal DAU system, 4 to 20mA equates to 800 to 4000 counts
on an Elcon system, 4 to 20mA equates to 800 to 4095 counts

This minimum and maximum range may be used for several sensors, representing the low and high calibration settings. Otherwise, a high calibration setting may be taken from a current signal being read. For example, if Mud Flow Out was showing 2200 counts and the flow rate was 1.8 m3/min, this would be the high calibration setting. Note that current readings can be viewed in Test Mode under the Realtime-Controls menu.

The particular sensor is selected by moving the cursor with the arrow keys and pressing F7.

This will put you into the low calibration:

'Now' represents the current counts being read; 'Old' represents the counts used in a previous calibration.

Press enter to put yourself into the Now column showing the current signal (number of counts).
Press any key to hold that current signal.
Press F7 to accept the current counts, or change it to your desired count value and press F7.
Type in the parameter value that the number of counts equates to, and press F7.

The new calibration range will now be displayed and you should type in Y or N to accept the change. You will be returned to the main menu and need to repeat the process to change the high calibration setting.

The calibration range displayed is the actual high and low values of the particular parameter, for example the number of M3 if a pit volume is being calibrated. The range given is the number of units, ie M3, that equate to 0 and 20mA (not 4 to 20mA, the operating range). The calibration settings are stored in 3:/datalog/config/calibs.cfg
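As an illustration, using the Mud Flow Out example above, and taking 800 counts = 0 m3/min for the low calibration, 2200 counts = 1.8 m3/min for the high, with the 0 to 4000 count span of a standard DAU assumed:

    value at 20mA (4000 counts) = (4000 - 800) x 1.8 / (2200 - 800) =  4.11 m3/min
    value at 0mA  (0 counts)    =        -800 x 1.8 / (2200 - 800)  = -1.03 m3/min

so the calibration range displayed would be approximately -1.03 to 4.11 m3/min.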

2.4d CREATE DISPLAY (disply_set)

This program is used to configure or change the 10 text displays selected under the Realtime-Displays menu option, and the two text displays selected from the windows interface. Note that these are system settings and not particular to individual users.

A particular screen display is selected using the arrow keys and pressing F7.
To change the title of a screen, press F2, type the title and press F7.
To select a parameter, move the cursor to where you want it positioned, press F6, select the parameter required using the arrow keys, and press F7 twice.
To delete a parameter, move the cursor onto the one in question, press F2, then Y or N to confirm.
To move a parameter, position the cursor on the one in question, press F3, move to the desired position and press F7.

Finally, all these changes need to be confirmed by pressing F7. F4 would exit the program without saving the changes.

These configurations are stored, 2 files for each screen, in 3:/datalog/displays, eg:

screen01.des - the actual screen design
screen01.hdr - the title of the screen

2.4e PRINTER CONTROLS (prt_ctl)

In order for the computer to be able to communicate with a printer/plotter, the port (normally parallel) has to be defined in this configuration file for each printer name. It is this name that will appear in menus when selecting the output device in such programs as Plotter Setup and X-Y-Z Plots. Other programs, such as the Bit and Survey Databases, will give you the option of report or local printer when you select the print option, so these names should be defined in printer controls.

On entering the program, you should select the printer name using the arrow keys, press enter, type in the name of the printer/plotter port (typically node number and $lpt or $lpt2) and press F7 to save. If you should want to add or change a printer name, it has to be done by entering the program from the following command line:

prt_ctl -l

There are going to be occasions when you are going to be printing or plotting to a file rather than to hard copy on a printer. In this situation, the printer name could be selected as mudlog for example, and then the name of the file given in the port/filename option (eg [1]3:/tmp/mudlog). Note that the temporary directory is used for this purpose and that the full path should be given.

2.4f INVERT BINARY SENSORS (switch)

This program will toggle the on/off state of the digital binary sensors. There are two binary sensors in question, depth direction and flow in (gas flow line). The program just requires you to enter the channel number (9 or 10 respectively) and press F7 - the signal will then be inverted. This can be done quickly from a command line, eg:

switch 10

A typical example of this is with the depth direction when using a crown sheave sensor. Once you have hooked the sensor in place, connected it to the junction box and configured the channel, you will then check the test mode to ensure you are getting a signal. On doing this, you see that as the blocks are moving up or down, the computer's depth is going in the wrong direction (this is just dependent on which of the 2 proximity sensors is activated first). To correct this, you simply invert the signal as described. Note that if the system was to crash, upon reboot the signal will revert to its original state and the signal will have to be inverted again. Therefore, the long term fix would be to switch round the position of the two proximity sensors so that the other one is activated first.

2.4g OVERRIDE SENSORS (overrides)

This is used to simulate a signal in case of sensor failure, but we do not advocate this for general use - the problem should be rectified or the sensor replaced if possible. We certainly do not advocate this as a way of deceiving drilling supervisors should a sensor fail - be upfront about it. Note that only analog sensors can be simulated in this way. Depending on the calibration, the number of counts required to produce the desired value needs to be entered into this program (not simply the parameter value). Simply select the parameter required using the arrow keys, press F7, enter the number of counts, and press F7 to save. A value of -1 disables the override function.

2.4h PIT SETUPS (pit_status)

This program allows individual pit volumes to be added together in separate pit totals. Up to 4 pit totals may be selected. The program shows the sixteen possible individual pits; the user simply enters the number of the required pit total (1 to 4) next to each pit. Press F7 to save the setup. This may be used in the following circumstances:

Example 1
We have 6 pits; pits 1 and 2 are equalized as the suction pit; pits 3 and 4 are equalized as the settling pit; pit 5 is the premix and pit 6 the slug. The pit setup should look like:

Pit1  Pit2  Pit3  Pit4  Pit5  Pit6
 1     1     2     2     0     0

Pits 1 and 2 are totalled as Pit Totals 1
Pits 3 and 4 are totalled as Pit Totals 2 (note that the individual pit volumes are still recorded)
Pits 5 and 6 are left independent and not included in any totals

Example 2
If, in the above example, we simply wanted a PVT, a total pit volume excluding the slug pit, then pits 1 to 5 would all be assigned 1, with pit 6 left as 0. Pit Totals 1 would then be our PVT. Unfortunately, we are unable to have individual pit totals as shown in Example 1 together with a PVT - it has to be one or the other.

2.5 REALTIME MENU

This menu contains the majority of information required for accurate realtime monitoring, and is where most day to day operations pertaining to the realtime data acquisition and running of the QLOG system are carried out.

2.5a Displays
Text Displays (display) - There are 10 realtime displays which can be interchanged by using the F1-F10 keys (or simply the numerical keys 1-0). The parameters will be displayed in the units, and to the decimal places, selected by individual users. The layout of each display screen is configurable through the use of the CREATE DISPLAY option under the SETUP menu. The user should make full use of the 10 screens to ensure that all recorded and calculated data is represented and displayed in a format that is easy to understand at a glance. For example, you would probably create screens that include the following information: all important logging and drilling parameters; all gas values and ratios; all pit volumes and mud measurements; hydraulic calculations; pressure calculations etc.
Windows Display (wdisplay) - This is a text display designed for use in windows, enabling the text size to be increased; this is of particular benefit for drillfloor monitors so that information can be viewed at a distance. The size of the text can be changed by using the controls and fonts option.
Historical Display - a databased display showing important parameters over the last 20 records.
Historical and Real - a split screen display showing realtime information and databased information over the last 8 records.

2.5b Alarms (alarms) Allows a high and low alarm to be set on any parameter monitored by QLOG. An alarm will sound if the value crosses the pre-set limit. These alarms are personal to each individual user and will not affect other users on the system. They are stored in the user's home directory in a file called alarms.cfg and accessed every time the user logs in to the system. The alarm limits are set by cursoring to the parameter you want to alarm (there are several pages, use F2 and F3 to go back and forth), pressing F7 to edit, entering the desired low and high values, pressing F7 to accept and F4 to exit the program. When an alarm condition is detected, the terminal will beep and a red message will appear at the base of the display screen. This message will indicate which parameter's alarm has been activated, and whether it is the high or low alarm. The message will remain while the parameter remains outside of the alarm limits. If the value returns to within the set points, the message will turn white and the user can press c to clear it. This procedure rearms the alarm. If the parameter remains outside of the alarm set points, the limits will have to be reset from the Alarm menu.

If you are in the windows display, the alarms can be set/modified directly by using the mouse. Move the cursor over the name of the desired parameter and press the left hand mouse button. This brings up an alarm window where you can set the high and low limits and enable the alarm. When the alarm is activated, an alarm will sound and the displayed value of that parameter will change colour - red if high alarm, green if low alarm.

2.5c Controls
i) Depth Adjustments This program affects the realtime data acquisition system and is therefore only to be used by mudloggers. It simply requires editing the bit depth, hole depth and/or hook position (if using a crown sheave sensor), or the hole depth (if using a depth wheel), and pressing F7 to save. Alternatively, by holding down the Alt, Ctrl and F9 keys together, the bit depth will be changed and made equal to the hole depth (ie this will force an on bottom status).

ii) Equipment (equipment) This program contains important QLOG setup information required for accurate realtime data acquisition. It should be edited specifically for the job before mudlogging commences, and modified accordingly during the drilling of the well. All of the constants are stored in 3:/datalog/config/equip.cfg

Depth Method - D(epth wheel) or C(rown) or W(draw works).

Floater Rig - Yes or No.

Ticks per 100m - This is the depth calibration, requiring an exact value. If Depth Wheel is selected, the figure is 500. A calibration will have to be performed for the Crown Sheave by moving the blocks up and down a known distance and recording the number of ticks from the test mode. The calibration for the Draw Works sensor is more complicated, and the help file should be referred to. Note, the calibration is always ticks/100 metres even if imperial units have been selected.

Amps per FtLb - Torque conversion from ft/lbs to amps. This is non-linear and will require a conversion table from the toolpusher. This value may have to be changed while you are drilling as torque increases or decreases - in this situation, the converts program will have to be stopped and restarted for the change to take effect.

Log Interval - Depth resolution for the depth database (metres or feet depending on your user units). The value can be changed while drilling so that greater resolution can be given during coring for example (minimum resolution is 0.1m).

Time Interval - Determines how often records are saved to the time database, typically 60 seconds (with a minimum resolution of 10 seconds).

Sample Interval - Used if inputting lithology via the database - the database will expect the % lithology based on the sample interval. Since the lithology editor has a drag facility to copy lithology, this function is really redundant and is normally left the same as the Log Interval.

Gas Pump Time - Lag time from the gas trap to the logging unit (in seconds). This will be added to the actual sample lag time before gas data is recorded to the database.

RPM Gear Ratio - Used if the RPM sensor has to be located on the drive shaft of the rotary table, where there is more than one rotation for each rotation of the table, eg if the shaft turns 5 times for each turn of the rotary table, enter 5. This facility may also have to be used on some top drive units.

Average Stand Length - Used to determine how many stands are in or out of the hole based on the current bit depth - obviously of most use in trip monitor calculations for stands pulled and stands to go. This figure will need to be changed for casing runs, where the average joint length will be required.

Pressure Gradient - This is the Normal Formation Pressure Gradient for the region being drilled, eg 997 kg/m3 (8.32 lbs/gal) in Canada; 1043 kg/m3 (8.66 lbs/gal) in the North Sea. Used for pore pressure calculations.

Bulk Density - Updated automatically from, and used for realtime calculation of, the Overburden Gradient.

Mud Density Override - If no density sensor is being used, a value here will enable hydraulics, ECD and DCexp to be calculated.

Start Depth - Starting point of the depth database.

ROP Average Int - Allows ROP to be averaged over a certain depth interval.

Rig Cost per Hour - Used in cost calculations.

Trip Time - Time taken for a round bit trip. Again, used for cost calculations as above.

Formation Gradient - A value entered here will override the realtime calculations.

Fracture Gradient - Override facility as above.

Theta 600/Theta 300 - Taken from the mud engineer's report and used in hydraulics calculations. Any pair of viscometer readings can be used. If 600 is changed to 200 or 6, the pairs 100 or 3 will automatically replace Theta 300.

Surface Conn Loss - The value here will range from 0.2 to 0.5, and is used in the calculation of Surface Pressure Losses (ie through kelly, standpipe etc), part of the Total System Pressure Loss calculated by the hydraulics program.

Mud Motor Factor - When a mud motor is being used, this is the number of bit revolutions per unit volume of mud pumped. This will ensure that the RPM parameter in the database includes both table and motor RPM.

Mud Motor Threshold - This is the amount of mud flow required to start the mud motor turning.

Lag Volume Adjust - This is to be used if the hole is washed out and the lag time is therefore greater than theoretical. The equivalent extra hole volume should be entered. This is calculated by taking the time difference, calculating the number of strokes pumped in that time, then multiplying by the pump output.

Air Drill Lag Time - For use when drilling with air, nitrogen, foam etc. By entering the lag time in seconds, the lag calculated from the hole and pipe profiles will be overridden.

DAU/Elcon - Select either the DAU system with cards for each channel, or the Elcon system with intrinsically safe barriers.

Gas Detector Mode - Select either C(CC), T(TCD) or B(Both). This determines which sensor the Total Gas Sensor takes its values from.

CC Switch Point - The value at which the Total Gas Sensor switches from taking its values from the CC detector to taking them from the TCD detector (the default value is 4.5%).

CC Shut Off Point - The point at which the CC detector is turned off to save filament wear (5.0% default value).

TCD threshold - This is the maximum allowed difference between the CC value and the TCD value when the CC detector is supplying the Total Gas Sensor reading (ie when the gas value < 4.5%). This prevents a jump in the value at the CC Switch Point (default value 0.5%).

CC stabilize time - This is the time allowed for the CC sensor to stabilize (before the Total Gas Sensor will accept its values) after the sensor is switched on (default 30 secs).

Poisson factor - Used in the realtime calculation of fracture gradients and is automatically updated from the overpressure program. The value should be calculated from offset data if using Eaton's method. Otherwise, the value can be taken from the lithological values detailed in the overpressure help file.

Pressure slope - This is the gradient of the Normal Compaction Trend and is generated by, and updated from, the overpressure program. Used in the realtime calculation of formation pressure and fracture gradient.

Pressure Offset - Again updated from the overpressure program, this value is the degree of offset of the selected trend from the NCT and is used in the realtime calculations as above. This value may be manually adjusted should trends shift due to lithology etc and produce erroneous pressure calculations.

If the equipment table is accessed from the command line with the +p option (equipment +p), then the Padding Factor can also be accessed. This factor ensures records are written at the specified interval without any extra rogue records being produced. This factor is set at 0.5 by default and it should be unnecessary to change this value.
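Two of the entries above involve simple arithmetic that is worth working through. The following sketch (illustrative only, with hypothetical numbers - it is not part of QLOG) shows a crown sheave Ticks per 100m calibration and a Lag Volume Adjust calculation.

# Illustrative arithmetic only - numbers are hypothetical.

# Ticks per 100m (crown sheave): move the blocks a known distance, count the
# ticks in the test mode, then scale to 100 m.
ticks_counted  = 93        # ticks seen while the blocks moved...
distance_moved = 18.5      # ...this many metres
print(round(ticks_counted / distance_moved * 100.0, 1))   # ~502.7 ticks per 100m

# Lag Volume Adjust: extra annular volume implied by a measured lag that is
# longer than the theoretical lag (washed out hole).
lag_difference_s = 90      # measured lag minus theoretical lag, seconds
pump_spm         = 60      # strokes per minute
pump_output      = 0.018   # m3 per stroke
extra_volume = lag_difference_s / 60.0 * pump_spm * pump_output
print(round(extra_volume, 2))                              # 1.62 m3 to enter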

iii). Ream Mode Enter On to initiate this option. As long as the bit depth is set somewhere between 1 and the total hole depth, the rig status will display 'Reaming'. This allows the display of WOB and ROP values while reaming. The status will change automatically to Drilling when the bit depth becomes equal to the hole depth. Selecting Off will disengage the reaming option.

iv). Profiles
Hole Profile (values stored in 3:/datalog/config/hole.pro) - Allows for 10 sections to be entered. The diameters of the hole/casing are in mm if metric units are selected for depth, and in inches if imperial. Edit by moving the cursor to the desired position, enter the values and then press enter again to move to the next hole section/diameter. Sections can be inserted at the present cursor position with F2, and deleted with F3. The number of the last casing section in the hole is important as it is used in kick/kill calculations (F7 to save). The first section is that from surface and the last is the section being drilled. As long as upd_prof is running, the open hole section will increment while drilling. The F6 Recalc option allows you to update the hole profile based on the information that has been entered into the casing profile.
Pipe Profile (values stored in 3:/datalog/config/pipe.pro) - Edit the section length, ID and OD. Units are as above. Edit, insert and delete as above. QLOG will assume that the drill collars are the last entries and uses these values for internal hydraulic calculations (ie flow regimes). The first entry (drill pipe) is automatically corrected for the current hole depth if upd_prof is running and will increment while drilling. The internal volume of the pipe and the annular volumes are calculated from these profile values, and corrections are not automatically made for tool joints. You may therefore want to edit the diameters very slightly to get more accurate volumes and hence lag times, but be aware that these changes will also affect hydraulic calculations, in particular flow regimes in each annular section.
Casing Profile (values stored in 3:/datalog/config/case.pro) - this program is used in conjunction with the hole profile to generate graphic illustrations of the hole in windows. Each new casing string should be entered with length, ID and OD, start depth (ie surface, hanger etc) and install depth (ie hole depth prior to running the casing). When the information here is updated, the F6 option in the hole profile can be used to update the hole profile. Note that the casing profile does not affect the calculation of annular sections - this is done solely from the hole and pipe profiles.

v). Pump Data (values stored in 3:/datalog/config/pumps.cfg) QLOG allows for up to 4 pumps with different pump outputs to be operational at one time (used for flow rate and lag time calculations). The program requires the volume per stroke for each pump (you should input the theoretical value, ie at 100% efficiency), plus the efficiency that the output should be calculated at. If the volume per stroke is unknown, it can be calculated in the pump output engineering program by inputting liner size and stroke length (and piston rod diameter if a duplex pump).
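If the volume per stroke has to be worked out by hand, the usual formulas are sketched below (hypothetical dimensions, in metres; this is a general sketch rather than the QLOG pump output program, which performs the same kind of calculation).

# Standard pump output formulas - illustrative sketch only, 100% efficiency.
import math

def triplex_output(liner_dia, stroke_len):
    # single-acting, 3 cylinders
    return 3 * (math.pi / 4) * liner_dia**2 * stroke_len

def duplex_output(liner_dia, stroke_len, rod_dia):
    # double-acting, 2 cylinders: the return stroke loses the rod area
    return 2 * (math.pi / 4) * stroke_len * (2 * liner_dia**2 - rod_dia**2)

print(round(triplex_output(0.1524, 0.3048), 4))         # 6" liner, 12" stroke  ~0.0167 m3/stroke
print(round(duplex_output(0.1524, 0.4572, 0.0635), 4))  # 6" x 18" duplex, 2.5" rod  ~0.0305 m3/stroke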

vi). Realtime Zeros (dau_zeros) Pressing F7 while the cursor is positioned over the selected option will return that value to zero: the pit system and flow gain/losses for pit monitoring; one or all pump stroke counters to allow something particular to be lagged to surface (this will not affect depth intervals which are currently being lagged). In the case of WOB, this zero option should only be used when the driller correctly zeros his own weight indicator, ie when he is just off bottom and rotating slowly.

vii). Slip Thresholds (defaults) This is a display of kelly and hook weights together with hysteresis (margin) values. These are used to determine whether the rig is in or out of slips, which is required for connections to be recognized and for the trip monitor to function correctly, ie to register stands being pulled or run. If the measured weight (hookload) falls below the total weight displayed (ie combined hook, kelly and hysteresis in), the rig status will change to 'in slips'. The status will return to 'out of slips' when the recorded weight rises above the combined values of hook, kelly and hysteresis out.

Example:
Hook Weight      10t
Kelly Weight      5t
Hysteresis In     2t
Hysteresis Out    4t

Here, the status will change to In Slips when the hookload falls below 17t, and only return to Out of Slips when the hookload increases above 19t. When the rig status is 'tripping', only the hook weight and hysteresis should be used as the slip value, since the kelly will be racked. In practice, you would normally have to make this change in order for the last couple of stands (if tripping out) to be recognized. Values are entered by moving the cursor and entering the desired value; press enter to move the cursor to the next position, F7 to save, F4 to exit. Before setting the values in this program, calibrate the hookload (the threshold program will automatically be accessed after a calibration, with calculated values of hysteresis based on the calibration); the values will need to be checked/edited after any subsequent calibrations of the hookload. The values that will actually be used by the QLOG system are those displayed in the New column. This is a value derived from the value entered in Setpoint and may be slightly different from the value you entered. If you want a specific value to be used, make slight adjustments to the value in Setpoint until you have the desired value in the New column.
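The in/out of slips decision can be summarized in a few lines. The sketch below (not QLOG code) uses the example figures above to show the 17t/19t switching behaviour.

# Minimal sketch of the slip status logic using the example above - not QLOG code.
HOOK, KELLY, HYST_IN, HYST_OUT = 10.0, 5.0, 2.0, 4.0      # tonnes

def next_slip_status(hookload, in_slips, tripping=False):
    kelly = 0.0 if tripping else KELLY        # kelly is racked while tripping
    if not in_slips and hookload < HOOK + kelly + HYST_IN:
        return True                           # drops below 17t -> in slips
    if in_slips and hookload > HOOK + kelly + HYST_OUT:
        return False                          # rises above 19t -> out of slips
    return in_slips

print(next_slip_status(16.0, False))   # True  - below 17t
print(next_slip_status(18.0, True))    # True  - still below 19t
print(next_slip_status(19.5, True))    # False - back out of slips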

viii) Test Mode (dau_test1) This program allows the electrical signals coming back from each sensor to be viewed. Its use is important when rigging up, calibrating sensors and troubleshooting. An analog to digital converter (ADC) mounted on the data acquisition card is used by the computer to convert the 4 to 20mA signal to digital values that can be used by the computer and QLOG. For the analog sensors, a typical DAU board will allocate 800 to 4000 counts to the 4 to 20mA signal, whereas an Elcon barrier system allocates 800 to 4095. The different channels are set using the sensor configuration program in the Setup menu. The signals for each sensor (for the analog sensors, listed in the two left hand columns of the test mode), both current and counts, can be viewed by pressing m (mode), which toggles between the different states. Each channel will show the following information:

channel number
channel or sensor name
average and instantaneous signal
state, ie configured (+), not configured (-), override (=), failed or disconnected (reading < 4mA) (F N/C)

The digital sensors are listed in the right hand column, with the interrupt sensors at the top and the binary sensors beneath. The accumulating pulses will be displayed for the interrupts and the state (whether on or off) displayed for the binaries.

ix). Set Line Wear This can be reset to zero at the time that the drill line is slipped and cut. The wear on the line will then be recorded realtime, whatever the rig operation, in order to determine the next time that the line should be slipped and cut.

x). Set WOB This will correct the WOB as a temporary measure while drilling. If the hookload is calibrated correctly there should be no need for this. The current value, as given by the driller's console, should be entered; F7 will set it.
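The count-to-signal scaling that the test mode displays is linear, and can be sketched as follows (illustrative only; the 0-35000 kPa sensor range is a hypothetical example, not a QLOG default).

# Sketch of the DAU count scaling described above: 800 counts at 4 mA, 4000 at 20 mA.
def counts_to_ma(counts, lo=800, hi=4000):
    return 4.0 + (counts - lo) * 16.0 / (hi - lo)

def counts_to_value(counts, zero_value, span_value, lo=800, hi=4000):
    # linear calibration: zero_value at 4 mA, span_value at 20 mA
    return zero_value + (counts - lo) / (hi - lo) * (span_value - zero_value)

print(round(counts_to_ma(2400), 1))               # 12.0 mA (mid range)
print(round(counts_to_value(2400, 0, 35000)))     # hypothetical 0-35000 kPa sensor -> 17500
print(counts_to_ma(600) < 4.0)                    # True: below 4 mA, shown as failed (F N/C)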

xi). Trip Mode The tripmode can only be started by a Superuser. Entering tripmode will automatically start the trip monitor, changing the rig status from drilling, or off bottom, to 'tripping'. Once the tripmode is running, the same tripping display can be accessed by any other user on the network. However, only the display on the console that the program was started on will contain the function keys that are required to operate the program. Any other display will only contain the F4 option to exit the display. The tripmode can only be stopped (Cancel Tripmode) at the console from where it was begun. The tripmode will display realtime information such as pit levels, bit depth, running speed, swab and surge pressures, strokes and pressure if breaking circulation, etc. In addition to this, every time that a stand is pulled or run, the calculated and actual mud displacement will be recorded and displayed. This information will be displayed for the last 8 stands. There is a menu at the base of the screen allowing the trip to change direction, end the trip, and also select whether it is a wet or dry trip. When tripping out, the monitor works by detecting changes in the trip tank volume; it then displays how many stands have been pulled. Trip reports can be saved to file using the F2 option (here, a printer and/or a file can be specified). The file will be created in 3:/datalog/trips and given the name tripYYMMDD.qlog (ie the date). In addition to this recorded information, the mudlogger should make full use of realtime plots in the monitoring of trips.

The program has many command keys, some of which are not given on the command menu displayed at the top of the screen:

F9   toggles the trip direction, whether in or out. When the program is first started, the default direction is out.
F3   to choose whether stands or singles are being tripped. The default is stands.
F8   to select which pit should be monitored for mud displacement calculations. The choice is either the triptank or the Pit Totals 1 parameter (normally defined as the suction system PVT as a whole - this should be borne in mind when defining pit totals at the beginning of a well).
F6   to select either a wet trip or a dry trip - ie whether a closed end or an open end displacement should be calculated.
F5   to change the actual hole fill recorded for a particular stand. This may be required, for example, if a slug is pumped part way through a trip, or whenever the trip tank is filled, since we do not have the option of triptank + pit for displacement calculations. The stand needs to be specified - this is defined by the row number on the display; for example, the top row is row 0, the next is row 1 etc. The correct fill is then entered (you should enter the number without decimal places).

NOTE that the calculated displacements will be based on the recorded depth, rather than on an average value per stand pulled/run. Thus, if your depth is not correct, or does not track correctly, your calculated displacements and trip record will be incorrect. This may be quite a common problem if trip running speeds are very fast - the crown sheave sensor may simply be unable to sense each target if the wheel is rotating too fast. At the end of a trip, the program will either stop automatically or should be stopped by using the Cancel Tripmode option. The program will stop automatically when, on a trip into the hole, the bit depth becomes equal to the hole depth, or on a trip out of the hole, when the bit depth equals zero.
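The expected hole fill per stand that the trip monitor compares against can be checked by hand. A rough sketch (hypothetical pipe dimensions, not QLOG code) of the open end (dry) and closed end (wet) displacement per stand:

# Illustrative sketch of displacement per stand pulled - not QLOG code.
import math

def stand_displacement(od_m, id_m, stand_len_m, wet=False):
    # dry (open end) trip: steel volume only
    # wet (closed end) trip: steel plus the mud left inside the pipe
    area = (math.pi / 4) * (od_m**2 if wet else od_m**2 - id_m**2)
    return area * stand_len_m

OD, ID, STAND = 0.127, 0.1086, 28.5     # 5" drillpipe, ~28.5 m stand (hypothetical)
print(round(stand_displacement(OD, ID, STAND), 3))             # dry ~ 0.097 m3
print(round(stand_displacement(OD, ID, STAND, wet=True), 3))   # wet ~ 0.361 m3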

xii). Chromatograph Here, this portion of the QLOG menu will just be highlighted. The operation of the chromatograph is covered in more detail in the appropriate section of this manual.

Setup - This is where the chromatograph can be started and stopped; where the configuration for each column is stored and displayed; and where the Method (such things as injection time and column temperature) is displayed.
Calibrate - This has to be run from windows and is where particular chromatograms can be saved, then each gas defined and calibrated.
Tweak Calibrations - Again, this has to be run from windows and is used to make small adjustments to the position of the set points selected during the calibration process.
Configuration Sheet - One of these should accompany every chromatograph to record its history. It is a printout of the setups used when the chromatograph was last tested. This should be updated if there are any significant changes made, or if columns are replaced.
Example Chromatogram - Displays the gas peaks analyzed with the standard columns.

2.6 DATABASE MENU

2.6a Depth and Time Databases All parameters are recorded in both a depth and a time database. Both are in a large spreadsheet format. The record intervals at which the two sets of data are stored are determined by the values entered in the equipment table.
1. Time - usually recorded every 60 seconds (minimum 2 sec)
2. Depth - usually every meter or foot (minimum 10cm)
The depth database is called dbdepth.qlog and is usually stored in 4:/datalog/dbms. The time database is slightly different, in that a time file is created for each day. This file will have the name timeYYMMDD.qlog (year, month, day) and is stored in 3:/datalog/dbms. Only if a particular time file is in this directory will you be able to view/edit the data in the database. The QLOG system saves all parameters in metric units by default, whereas each individual user's unit configuration will determine how they appear on the screen. As well as containing straightforward recorded values (ie WOB, ROP), many of the parameters are calculations using the recorded data (eg ECD, delta temp, Dxc). Gas values and certain mud parameters are lagged before being written to the database. The same database is used for the storage of geological descriptions, lithology percentages etc, although this information is input by the user from windows. On entering Edit...Databases from the QLOG menu (program name dedit), the user will automatically be placed in the depth database. F2 will toggle between the depth and the time databases. Different parameters, or fields, have different displayed states. The state is displayed, along with other information, at the top of the page:

Edit      the parameter can be edited
Lag Adj   the parameter will be lagged to surface before being written to the database
View      view only, changes will not be saved - associated with recalc
Recalc    a parameter that is calculated from others. To save processing time, this function is disabled by default. Should the user need to view this data, it has to be recalculated by pressing F9, then page up/down. All recalc parameters can then be viewed.
Lock      no editing possible

The command options for use with the database will be shown after typing /. A command is initiated by moving the cursor with the arrow keys and pressing enter. When the user becomes more familiar with the commands, they can be initiated simply by pressing the initial letter, rather than bringing up the menu.

g(oto)       to move to a particular record number; enter the number, press F7
f(ind)       to move to a particular depth; enter the depth, press F7
r(eference)  to move to a particular parameter. Because there are so many in the spreadsheet, each field is given a reference number to make it quick and easy to move from one to another. The references are typical spreadsheet references. Input the reference and press F7.
c(opy)       to duplicate a particular value in preceding or following records. Note that this can only be done for a particular parameter; you cannot copy horizontally to different parameters. Select c, move the cursor up or down to highlight the records you want to copy to, press F7.
k(lone)      duplicates an entire record to the following record, ie copies every parameter for a particular depth to the following depth interval.
o(rder)      allows you to change the position of columns. This does not change the reference of the column, only where it is displayed on screen. This allows you to have such things as WOB, ROP, gas, torque etc alongside each other when making a geological interpretation. Select o, enter the reference of the column you want to move, press F7, edit the number displayed (this is the order number of the column in the database) to the number of the position where you want to move it to, press F7. Press F4 to exit. These changes are stored in the user's home directory as dedit.order, so other users are not affected by them.
             Example: to move WOB into column position 2, next to RPM: o (to select order), r (to select reference) followed by dd, F7, edit 108 (the order number for WOB) to 2, F7, F4.
z(oom)       changes the size of the text - there are 3 settings
i(conify)    to iconify when in windows
h(elp)
q(uit)

2.6b Database Cell References

a      RPM
b-e    SPM 1-4
f-i    Strokes 1-4
j-m    Pump Vol 1-4
n      Triptank
o-ad   Pits 1-16
ae     Temp In
af     Temp Out
ag     Cond In
ah     Cond Out
ai     Mud Dens In
aj     Mud Dens Out
ak     Hookload
al     pH In
am     pH Out
an     Sulphide In
ao     Sulphide Out
ap     Heave
aq     Windspeed
ar     Wind Direction
as     Total Gas Sensor
at     Total Gas Chromat
au     Torque
av     Flow Out
aw     Flow In
ax     Standpipe Press
ay     Casing Press
az-bb  H2S 1-3
bc-be  Combust 1-3
bf-bi  Pit Totals 1-4
bj     Analog RPM
bk     Bit Depth
bl     Hole Depth
bm     Ream Depth
bn     On Bottom Time
bo     Off Bottom Time
bp     Bit Revs
bq     Bit Hours
br     Bit Start Depth
bs     Bit Start Time
bt     Cost/m
bu     Hook Position
bv     Slip Status
bw     Rig Status
bx     ROP Instantaneous
by     ROP
bz-cf  C1 - C5
cg     CO2
ch     H2S Chromat
ci     H
cj     He
ck     CO
cl     SO2
cm     O2
cn     N2
co-dc  Chrom Gas 1-15
dd     WOB
de     Theo HKLD
df     Triptank g/l
dg-dj  Pit Total 1-4 g/l
dk     Flow g/l
dl-dm  Ratios 1-2
dn     Chrom Hydrocbs
do     D exponent
dp     DC exponent
dq     Hydrostat Press
dr     Formation Press
ds     Surge Pressure
dt     Swab Pressure
du-en  Ann Vel 1-20
eo     Theo Lag Time
ep     Lag Strokes
eq     Downtime
er     Down Strokes
es     Annular Volume
et     Pipe Volume
eu     Pipe Displacement
ev     Lag Depth
ew     Gas Lag Depth
ex     String Weight
ey     Delta Temp
ez     Delta Cond
fa     Delta Mudweight
fb     Delta pH
fc     Delta Sulphide
fd     Total Stands
fe     Stands to go
ff     Stands pulled
fg     Trip Direction
fh     Running Speed
fi     Fracture Grad
fj     Overburden Grad
fk     TVD
fl     Normalised Gas
fm     Ton Miles
fn     Total Circ Time
fo     Circ Volume
fp     Total Circ Strokes
fq     ECD
fr     Bit HHP
fs     Bit Press Loss
ft     Bit HHP/Area
fu     Nozzle Velocity
fv     Impact Force
fw     % Ploss at Bit
fx     Total HHP
fy     Lag Time
fz-gs  Reynolds Ann 1-20
gt-hm  Pressure Ann 1-20
hn-ig  Pressure Pipe 1-20
ih-ja  Reynolds Pipe 1-20
jb     Total Press Loss
jc-jl  Pipe Weight 1-10
jm     Ann Press Loss
jn     Pipe Press Loss
jo     Mud in hole
jp     Wet Ratio
jq     Balance Ratio
jr     Character Ratio
js     Hook Speed
jt     UD1 Sonic
ju     UD2 Resistivity
jv     UD3 Gamma
jw     UD4 Bulk Density
jx-ka  UD 1-4
kb-ke  User Ratios 1-4
kf     Ratio Analysis
kg     Calc FID Gas
kh     Calc Hotwire Gas
ki     Calc Est Porosity
kj     Day Number
kk     Sigma
kl     CC Gas Sensor
km     TCD Gas Sensor
kn     Block Temperature
ko     Average ROP
kp     Pason Flow Out
kq     Total Strokes
kr     Table RPM
ks     Mud Motor RPM
kt-kz  User Defined 1-7
la     Gas Flow Out
lb     Impact Frc Area
lc     Molar Mass
ld     Gas SG
le     Hole Drag
lf-ml  Blank
mm-mq  Comments 1-5
mr-na  % Lithology
nb     Interpreted Lith
nc-nd  Porosity 1-2
ne     Porosity Type
nf-ng  Fluorescence 1-2
nh     Grain Size
ni     Rounding
nj     Sorting
nk-nl  Lithology Comments 1-2
nm     Total Cuttings Gas
nn     Calcimetry LST
no     Calcimetry DOL
np     Formation Factor
nq     Shale Factor
nr     Shale Density
ns-nt  Fossils 1-2
nu-nv  Minerals 1-2
nw-nx  Oil Shows 1-2
ny-nz  Geological 1-2
oa-ob  Engineering 1-2
oc-ox  Index 1-22

2.6c Lithology Editor (lithed) This program can only be run from windows. It enables the user to enter information such as percentage and interpreted lithology, lithology descriptions, accessory symbols, fluorescence, porosity, grain size etc by a simple point and click method utilizing the mouse.

On entering the program, the user will be located in % lithology. Use mode to toggle between % and interpreted lithology. For each mode, you will notice that there is a text column - porosity, fluorescence etc; this relates to the 2 columns in the database, ie porosity 1 and 2. Use mode to toggle between fluorescence and grain size. Select a lithology by clicking on NA. This brings up a menu of the lithology symbols. Click on the one you want. This can then be input into the lithology columns simply by clicking the mouse again. The two symbols at the bottom left of the main window allow you to toggle between lithology input and drag and click, ie the second option allows you to copy blocks of lithology to following records. After inputting the % lithology, the different types will be automatically placed in the correct order - either when you change on to a new line or new page, or select the drag option. The arrow symbols allow you to move up or down: the larger symbols move by a page, the smaller ones by 5 records. Any changes made in the editor will only be saved when you change page, although if you forget to do this, you will be prompted to save when you quit the program.

Accessory symbols can be selected by moving your cursor over the accessory column to the right of the main window. A column for each of the 5 accessory types will automatically appear. Each group has 2 inputs available for each record interval. Using the right hand mouse button, click on the record and accessory group required - a menu of the symbols will be displayed. Simply click on the symbol required. Moving your cursor back to the left side of the window will close the accessory menus.

2.6d Accessory Symbols A program allowing the editing and construction of accessory symbols. These symbols are in 5 directories: Fossils, Minerals, Oil Shows, Geology, Engineering. Load the directory of files you wish to edit (File...Load), click on the particular symbol to be changed, pick up the pencil or eraser by use of the mouse and edit as required. Save when finished (File...Save). To create a new symbol, select Symbol...Add, give the symbol a name, create it using the pencil and eraser as above, and save when completed (File...Save). Selecting Symbol...Gallery shows all of the symbols in the current file.

2.6e Bit Database (stored in file 3:/datalog/dbms/bit.dbase) The bit database works in 2 principal ways. Firstly, it gives information to (eg bit size, jet sizes), and takes information from (eg bit hours, bit revolutions), the realtime system. Secondly, it provides a means of storing all of the details pertaining to individual bit runs; these details can then be printed out in a report format. A new bit run is started by entering 0 in the Bit Run Number. You will be asked to confirm whether you want to append a new bit - you should enter Y. A new Bit Run will be initiated and given the next number in the sequence. For the bit database to function correctly on the realtime system, a minimum amount of information must now be entered:

Bit Number, and whether a re-run
Bit Size
Jet Sizes*
Whether a mud motor is in the BHA

* the jet sizes should be entered in mm if metric depth units have been selected, and in 32nds of an inch if imperial depth units have been selected.

The Time In and Depth In will be automatically taken from the realtime system. By pressing F7, this information will be saved, and the bit database will begin recording the bit run realtime, ie the time and depth will increment, along with bit hours and revolutions. The remaining information, such as bit type, serial code, comments etc, is not needed for the realtime operation of the database, but will obviously need to be entered in order for a complete report to be generated from the database. When the bit run is completed and you need to stop the bit database from running, you should proceed as if to enter a new bit run, ie enter 0 to append and enter Y to confirm. If you want to start the following bit run immediately, proceed to input the information and press F7. If you just want to stop the present bit run and not begin a new one, press F4 after you have confirmed that a new run should be appended. This will stop the present run, but not begin a new one - this would be required at casing points for example. When a bit run is stopped, the time and depth will be confirmed, and bit hours and revolutions confirmed and stored. Bit run averages will also be automatically calculated from the depth database. These will also be printed out in a final report format. To print the report, press F2, then R or L to define whether the output is to the report or local printer.

POINTS TO NOTE
All information in the bit run pages can be edited once the run has been completed and saved. Once a bit run has been added and saved, it cannot be deleted. You should therefore be sure before stopping and starting bit runs.
Revolutions due to a mud motor will not be incremented in the Bit Revs part of the database; only Table RPM is included. This can be approximated at the end of the bit run by multiplying the average RPM (which includes both) by the total number of on bottom hours x 60.
If ream mode is run, the number of reaming hours will be included as On Bottom Hours. When the bit gets to bottom and begins drilling, this should be corrected.

2.6f Survey Data (stored in 3:/datalog/dbms/survey.dat) This program requires the basic directional survey information (measured depth, azimuth and inclination) in order to calculate directional information by way of the Minimum Curvature Method (a worked sketch of a single minimum curvature step is given at the end of this subsection). As with the bit database, to enter a new survey you should enter 0, followed by the survey information. The azimuth should be entered in a N...E format (QLOG will automatically convert it to the correct compass direction). Press F3 to save and F7 to recalculate. This will update the directional calculations displayed in the data file and also update/correct the TVDs in the depth database should they be inaccurate. F5 can be used to insert or delete survey records. F8 will then produce a series of directional plots that can be accessed through windows via Reports...X-Y-Z plots:

surv_nview.plot    well profile viewed looking north
surv_wview.plot    well profile viewed looking west
surv_tview.plot    plan view of the well
surv_3Dview.plot   3-dimensional profile of the well

The 3-d plot can be modified by the user, whereas the others are default. Alt F8 to edit the 3-d information:
- Elevation and Direction from which the well is viewed
- Start Depth (default is 0, but you could select the Kick Off Point for example)
- With or without Impulse - With Impulse drops a vertical line from each survey point, and is useful in highlighting degrees of curvature in a well's profile.

The plots can also be plotted in conjunction with the target. This, first of all, requires the necessary target information to be entered into the database (eg target direction, radius, departure etc) by selecting F6 - the Plot with Target function must be set to YES. Secondly, a target data file has to be created by using the editor. This file (3:/datalog/plots/data/target.dat) should contain 3 columns: North-South co-ordinates, East-West co-ordinates, and TVD. Thus, the whole of the projected well profile can be entered and plotted alongside the well drilled. The last record will be assumed to be the target and a Target Radius will be plotted around it. Should you only require the actual target to be plotted, then enter just the one record in your data file. NOTE that when either of the sub-menus (target info or 3D info) is altered, the F3 function should be used to save the changes. A directional report can be generated by selecting F2 and defining either the report or local printer (remember that these should be defined in Setup...Printer Controls).

Tie-Ins Should the well's reference point not be the wellhead but a tie-in point, the user has to force the first record in the database and adjust the directional information for that record accordingly:
Enter 1 in the Survey Number field
Press Alt-F6 to allow editing of the calculated directional data
Enter the TVD and the North and East co-ordinates
Press F7 to save and recalculate.
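As promised above, here is a sketch of a single step of the minimum curvature method between two survey stations. It is the standard textbook form of the calculation with hypothetical station values, not the QLOG program itself.

# One minimum curvature step - illustrative sketch only.
import math

def min_curvature_step(md1, inc1, azi1, md2, inc2, azi2):
    i1, i2 = math.radians(inc1), math.radians(inc2)
    a1, a2 = math.radians(azi1), math.radians(azi2)
    dmd = md2 - md1
    cos_dl = math.cos(i2 - i1) - math.sin(i1) * math.sin(i2) * (1 - math.cos(a2 - a1))
    dl = math.acos(max(-1.0, min(1.0, cos_dl)))                   # dogleg angle
    rf = 1.0 if dl < 1e-9 else (2.0 / dl) * math.tan(dl / 2.0)    # ratio factor
    dnorth = dmd / 2.0 * (math.sin(i1) * math.cos(a1) + math.sin(i2) * math.cos(a2)) * rf
    deast  = dmd / 2.0 * (math.sin(i1) * math.sin(a1) + math.sin(i2) * math.sin(a2)) * rf
    dtvd   = dmd / 2.0 * (math.cos(i1) + math.cos(i2)) * rf
    return dnorth, deast, dtvd

# eg station 1: 2100 m MD, 12 deg, N45E  ->  station 2: 2130 m MD, 14 deg, N50E
print([round(x, 2) for x in min_curvature_step(2100, 12, 45, 2130, 14, 50)])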

2.6g Well Data (stored in 3:/datalog/config/tomb.dat) This file contains specific well information that will be automatically used for final log headers. Because of this, users should be careful with the syntax they use for information. There are 4 pages of information:

Page 1   General Well Information (location, well number, spud date etc)
Page 2   Mud data
Page 3   Casing data
Page 4   Hole data

F2 and F3 are used to go back and forth through the file, F7 to save changes.

2.7 REPORT MENU

2.7a X-Y-Z Plots This option is used to access plots that are created automatically throughout the QLOG system, or indeed plots that are created by the user. Examples of QLOG plots:
survey plots (Database...Surveys)
gas ratio plot (Geology...Ratio Analysis)
pressure plots (Geology...Pressure Analysis...Overpressure)
engineering plots, eg Swab Surge, Leak Off Test, Kick Kill

The program needs to be operated from windows:
Click on the required plot with your mouse.
Select the open option.
Select window or plotter output (remember that a plotter can be defined as a file in the printer controls table - this enables you to save a plot to file).

By default, these plots are large format. Should you be plotting for a final report, you will need to change the setups on the plotter. Set horizontal and vertical scales to 50%. You may also want to change pen colours for better presentation.

2.7b Plot Configuration (config2) This is where we can design the layout for realtime plots and/or final logs. One huge advantage of the QLOG system is that we are able to tailor a log exactly to meet the client's requirements. On entering the program, you will be shown a list of all the plots/logs presently on the system. Select one of these by choosing edit, moving the cursor to the desired log and pressing enter. Should you wish to create a new log, select new, enter the name of the log, and press F7. There are then 2 parts to the configuration editor. Firstly, there is the Header Page (F6 to edit) that creates a file called logname.extra in 3:/datalog/script. Secondly, there is the design of the log itself (F7 to edit), creating a file called logname.script in the same directory. (logname refers to the chosen name of the plot or log)

Edit Header (extra file) There are 3 components to this. The Title and Comments sections are both printed out on the log header page. The Description is for reference only, providing a description of the log when you are in the first, main menu of this program.

Edit Chart (script file) The chart is the name given to a column on the log. Within each column or chart, a certain number of parameters will be plotted or printed. The name given to the parameter is the channel. There are then 2 components to this section of the program: 1. configuring the charts, 2. selecting the channels within each chart. Initially, in the chart configuration page, you can add, delete or move charts by positioning the cursor on the chart number and pressing enter - your option can then be selected. To get to the select channels page, move the cursor down the chart menu to the #Channels option and press enter. Channels can be added/deleted etc in the same way as detailed above.

Chart options:
Divisions       The number of main divisions within the column or chart
Ticks           Subdivisions within the divisions above
Width (cm)      Width of the column - the total width of the log is shown at the top of the page - this will depend on the size of your paper
Spacing (cm)    Inserts a space on the right hand side of the column
Border colour   Colour of borders and divisions - default black
Grid colour     Colour of the ticks - default yellow
Type            Linear, Log or Text*
#Channels       Shows the number of channels or parameters that have been defined for this chart - press enter here to go into the channel selection option.

* The text option here could be selected for parameters such as pit levels or ROP - the values would then be printed out rather than plotted. This is a very useful addition for realtime plots. The text option does not need to be selected if a text parameter has been selected in the channels option - the system will recognise this by default as a text parameter.

If a log scale has been chosen, the number of divisions has to be the same as the number of cycles in the log. For example, if gas has been chosen with a scale of 0.01 to 10.0%, there are 3 cycles in the log, therefore the number of divisions has to be 3. This is a common source of error.

Channel options:
Right Bound /
Left Bound      left and right scales - you can have the scale increasing to the right or to the left as required. Take care with log scales - do not have them starting at zero; this is another common source of error.
Strip Colour    Colour of the curve
Plot Method     Histogram or Point to Point
Source          Press enter here to bring up the menu of the parameters - cursor to the one required and press F7. This should be done before selecting scales, plot method etc. If a text or similar parameter is chosen (eg Comments, LithComm), then the system automatically recognises that and leaves scales and plot method blank.

Once you have made all the changes, use the save option before exiting the program. The next step is to create the control file, which will define whether the log will be time or depth based, real or historical data, speed, vertical scale, output etc.

2.7c Plotter Setup (starter) This setup menu is where the control file for the log is created. This will then create a 3rd file (in addition to the script and extra files already detailed above), called logname.ctrl in 3:/datalog/script, required for a log to function. The command options with which to run this program are detailed along the bottom of the page - simply cursor along to the appropriate one and press enter. The first thing on entering the program is to either select, or define a new, control file. Select Ctrl, then select new, enter the logname and press F7. If selecting an existing control file, the previous settings will be restored from when the file was last altered. Select the appropriate Script file. Define what type of plot you require by pressing enter on a combination of the following: Dbase, Real, Depth, Time - this gives you these possible combinations:
Databased - Depth or Time
Real - Depth or Time

You should then select Edit to set up scales and speeds etc. If a time based plot has been selected, you will principally be concerned with the plot speed and will be editing the left side of the control page. If depth has been selected, you will be concerned with scale, start and end depth, and will be editing the right side of the page.

Depth
Scale              normally 1:240 or 1:500
Start depth
End depth
m/tick             interval at which depth will be printed on the log
ROPAVE             interval over which to average ROP - normally left as 0
Samples/plot       normally left as the default 5

Time
cm/hour speed      speed at which paper runs through the plotter - normally 20 or 30 cm/hr
plots per hour     normally 60 or 120, ie once a minute or every 30 seconds
start time
end time
samples per plot   this affects the granularity of the curve; the higher the number, the smoother the curve

Effects of plots per hour and samples per plot
For realtime plots, the combination of speed, plots per hour and samples per plot affects how smooth the plots will be, but also how much work (ie data storage and processing) is being done to slow the system down. Speed obviously has the same effect as scale on depth based plots.

Examples
Plots/hr   Samples/plot   Plot Frequency   Data storage freq.
60         5              1 min            12 secs
60         10             1 min            6 secs
120        5              30 secs          6 secs
120        10             30 secs          3 secs

You therefore have to be careful that you do not overload the memory buffer on the plotter, ie if the plot frequency was larger (less often) but data storage was more frequent, then there may be too much data for the buffer to hold and data would be lost. The maximum number of data packets that the plotter's buffer can hold is 35.

If the plot is time-databased, then the plots per hour should be set to 60 if the time database is at 1 minute intervals, ie 60 plots/hr is 1 plot/min, the same as the database interval. No difference would be made if a value <60 was entered, but if a value >60 was chosen, extra plots would be created.

For real-depth plots, it depends on the scale chosen, whether metric or imperial, and on samples per plot.
Metric units: if the scale is < 1:500, plots will be every 1m; if > 1:500, plots will be every 5m.
If samples per plot were set at 10 in the above cases, then there would be one data point for every 0.1m and 0.5m respectively. Increasing the samples per plot would increase the granularity.
Imperial units: if the scale is < 1:500, plots will be every 5 ft; if > 1:500, plots will be every 25 ft.

For depth-databased plots, the samples per plot makes no difference. If plots are output to a windows screen, the samples per plot affects the amount of information shown in the screen window. For example, if the samples per plot were set too low, only a portion of the window will contain data. In order for information to cover the whole window, the samples per plot needs to be increased. The determination of this value is really by trial and error. The software is designed with a 20cm window in mind, but the data required to fill this window is dependent not only on speed/scale, plots per hour and samples per plot, but also on the number of columns and the number of channels. All types of screen plot are affected in this way, ie whether databased, real, depth or time.

Should you start a plot running and only a portion of the window is taken up by data, simply increase the samples per plot and restart the screen plot. Repeat this until the full 20cm is occupied.

Control options continued:
Plot - either to end point (end depth selected) or to end of database.
Black and white or Colour - different lines will be given different symbols if B/W is selected.
Out - the output device - you will be given the list as defined in Printer Controls from which to choose.
Head - toggles between header and no header, ie whether to print the header with the log.
Wr - Write. Once all the changes have been made, even though many of the steps have required an F7 to save, the file must be written to disk in order for all the changes to be saved.
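The plot frequency and data storage frequency shown in the table earlier follow directly from plots per hour and samples per plot; the small sketch below (not QLOG code) reproduces those figures.

# Relationship behind the plots per hour / samples per plot table above.
def plot_timing(plots_per_hour, samples_per_plot):
    plot_freq_s = 3600.0 / plots_per_hour              # how often a point is plotted
    storage_freq_s = plot_freq_s / samples_per_plot    # how often data is stored
    return plot_freq_s, storage_freq_s

for pph, spp in [(60, 5), (60, 10), (120, 5), (120, 10)]:
    print(pph, spp, plot_timing(pph, spp))    # gives the 12/6/6/3 second column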

2.7d Starting and Stopping Plots There is a windows or a text console facility for doing this.

Start Plotter is the windows function. On entering, you will be given a list of all the control files that are on disk. Select the one required and open - you will then have the option of selecting a window or plotter output. It makes no difference here if the output is configured as a plotter in the log's control file; by starting the plot with the windows option, any plot can be output to screen.

Plot Info is the normal facility.
F7 allows you to start a plot
F2 allows you to stop a plot (you may have to clear the plotter's buffer if a large amount of data is stored)
F3 will temporarily suspend a plot, and F8 will resume it
F5 allows you to put a text comment on a realtime plot (the script file has to be configured for text to be able to do this)

When using these commands, the plot is normally referred to by the slot number. This is simply an ordered number assigned by plotinfo when plots are started. Commands have to be followed by an F7 for them to be carried out.

For each plot that is running, the following information is displayed:

Node          node that the plot is being run from (not necessarily the same as the node that the printer is connected to)
Tid           task identity number assigned when the plot is started
Script file
Control file
Device        where the plot is being output
Status        Active - data being sent to the plotter
              Suspended - after using the F3 option
              Doomed/Pending - after using F2 to kill the plot, the status will be doomed until all buffered data has been cleared

2.7e Defining User Ratios QLOG can have up to 4 User Defined ratios, primarily designed for gas analysis but able to use any combination of 2 parameters. These ratios are stored in reference columns KB - KE. They are not stored in the database, but are recalc parameters. There is therefore no reason, even though it is unlikely in practice, why a ratio cannot be redefined part way through, or even after the completion of, a well. When the F9 (recalc) facility is used, the newly defined ratio will be read by the system and the ratio calculated from it. If the user does use the User Ratio facility, then the configuration file should be saved along with all the well data. The file is called 3:/datalog/config/ratio.cfg

2.7f Hole Profile (hole_pict) This is a windows function, producing a graphical display of the current well profile. It reads in data from the casing, hole and pipe profiles to produce a realtime schematic. By clicking on the appropriate sections, the user can open a window each for annular, pipe and bit information. Information will include dimensions, volumes, flow regimes and hydraulic parameters.

2.8 ENGINEERING MENU

The programs in the engineering suite may draw their information directly from the realtime system, may be run offline with the user entering the information, or may use a combination of the two. There are several programs that can help evaluate drilling and hydraulic parameters, as well as provide monitoring and calculations for pressure tests or well control situations. The user should become familiar with how the important or often used programs function so that they are able to respond effectively at wellsite. The programs are simple to use, being principally menu driven, requiring the input of numbers and pressing F7 to calculate.

i. Drill String Design

Maximum WOB and Neutral Point This calculates the available bit weight (maximum weight that may be applied) and the neutral point (where stress changes from tensile to compressive) of the current drill string design. While the string is suspended, the stresses will be tensile throughout, but will become compressive when the bit hits bottom. The neutral point will move further up the string as more weight is applied to the bit. The aim is to keep the neutral point within the drill collar section, since drillpipe cannot handle being in compression. To make sure of this, normal practice is to ensure that the top 10-15% of the drill collar section remains in tension.

Maximum Torque This program provides an approximation to the actual torque delivered to the drillpipe while drilling. It would be used when the torsional strength of the drillpipe becomes critical during the drilling of deep and/or deviated holes, or during reaming. The calculated value should not exceed the make up torque of the tool joints.

Drill Pipe Collapse This program calculates the collapse pressure of drill pipe at any given depth, should the annular pressure exceed the pressure inside the string. The program can also be used to calculate the hydrostatic pressure at any depth by setting the mudweight in the drillpipe to zero, and making the depth to fluid top in the drillpipe the required depth.

Critical RPM This program calculates, for a given section of drillpipe, hevi-wate drillpipe or drill collars, the critical rotary speeds that would lead to nodal and/or pendulum vibrations and therefore poor drilling conditions and excessive stress on the pipe.
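The usual rule of thumb behind the Maximum WOB calculation can be sketched as follows. This is a generic approximation (buoyed collar weight with the top 10-15% kept in tension), with hypothetical figures, and not the QLOG calculation itself.

# Rough maximum-WOB sketch - illustrative only.
STEEL_DENSITY = 7850.0                      # kg/m3

def max_wob(collar_weight_air_kg, mud_density, tension_fraction=0.15):
    buoyancy = 1.0 - mud_density / STEEL_DENSITY
    return collar_weight_air_kg * buoyancy * (1.0 - tension_fraction)

# eg 180 m of 8" collars at ~220 kg/m = 39600 kg in air, in 1200 kg/m3 mud
print(round(max_wob(39600, 1200) / 1000.0, 1), "tonnes available for bit weight")   # ~28.5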

ii. Hydraulic Optimization

Current Profiles (onhyd) This is an optimization program that works from realtime information such as pump output, mud density and pressure losses. These values can be changed should a change in parameters be the reason for running the optimization program. The minimum and maximum jet velocities are suggested values. The program can then be run to give you the parameters required for optimum hydraulics based on both Hydraulic Impact Force and Hydraulic Horsepower at the bit. Impact Force relates directly to the erosional force of the drilling fluid and is therefore a good optimization for bottom hole cleaning. Hydraulic Horsepower optimization generally requires lower annular velocities so that the flow type is more likely to be laminar.

New Profiles (offhyd) This program is offline so that you can input any hole and pipe profiles, mud parameters, flow rate and jet size, and calculate the resulting hydraulic parameters such as pressure losses, flow types, annular velocities etc. This program would be used when pre-determining the correct parameters for a new hole section or bit run. By changing the inputs, you can attempt to optimize the hydraulics. To optimize for hydraulic horsepower, the %HHP at the bit should be 65% of the Total HHP. Since HHP is determined by pressure loss, this equates to the Bit Pressure Loss being 65% of the Total System Pressure Loss. To optimize for hydraulic impact, the %HHP at the bit should be 48% of the Total HHP. (A quick check against these targets is sketched at the end of this subsection.)

iii. Drilling Optimization

Bit Planning By inputting information from the bit records of up to 4 offset wells (bit size, cost, depth out and rotating hours), together with trip times and rig costs, parameters can be selected to give the lowest drilling costs.

Drill Off Test This program requires data from physical drilling tests in order to determine the optimum WOB. This is defined as the weight above which the ROP does not increase in proportion to WOB increases. The test should be conducted with optimum and constant RPM and hydraulics. A known interval is drilled with constant WOB and the time taken is recorded. This is repeated for incremental increases in the WOB. This information is then entered into the program and the optimum WOB determined. Use F8 to produce a plot.

5 Point Drill Test This program calculates the drillability constants, Threshold RPM and RPM exponent, required for drilling optimization and bit life expectancy. The program requires intervals of homogeneous lithology to be drilled with combinations of low/high WOB and RPM and the ROP recorded. This information is entered into the program and the constants calculated.
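A quick check against the optimization targets quoted above (bit pressure loss ~65% of total system loss for hydraulic horsepower, ~48% for impact force) can be done in a couple of lines; the pressures below are hypothetical and this is not the onhyd/offhyd program.

# Sketch of the %HHP-at-bit check - illustrative only.
def bit_loss_fraction(bit_pressure_loss, total_system_loss):
    return bit_pressure_loss / total_system_loss

frac = bit_loss_fraction(11500.0, 19000.0)       # kPa, hypothetical
print(round(frac * 100, 1), "% of system pressure loss at the bit")
print("near HHP optimum" if abs(frac - 0.65) < 0.05 else
      "near impact force optimum" if abs(frac - 0.48) < 0.05 else "not optimized")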

iv. Pump Output Determines the pump output or capacity for both triplex and duplex pumps. Remember, for duplex pumps, you need to know the piston rod diameter in addition to the liner diameter and stroke length. If you are calculating the value in order to enter it into the Pump Data configuration file used by the realtime system, you should calculate the output at 100% efficiency.

v. Kick/Kill This program takes data both from the realtime system and from user input. Any data taken from the realtime system can be edited if required. There are 2 pages of data required in order to run the program; use F5 to go to the second page.

Page 1 Data
Pump speed and pressure for Slow Circulation Rates - these should be performed regularly by the driller and the mudlogger should update this program every time they are performed. The pump output will be calculated automatically from the pump speed and the output stored in Realtime-Pump Data. Use enter to update the calculation.
Pump to use - ie which pump they are going to use to circulate kill mud.
Drillpipe and Annular Capacities - calculated automatically from the hole and pipe profiles.
Original Mudweight - taken from the realtime system.
Trip Margin - enter the required pressure if a certain overbalance on the kill mudweight is required.
Down Strokes and Lag Strokes - calculated from the current profiles, but will only be updated if the rig is circulating and the system is registering pump strokes. Since, when running this program, the well is likely to be shut in, you may have to enter the correct strokes.
Casing Burst Pressure - obtain from the drilling engineer.
Depth of Last Casing Shoe - this will be taken from the hole profile, but remember that this will be measured depth. If the well is deviated, the True Vertical Depth should be entered here.
Formation Fracture Gradient - taken from the last Leak Off or Formation Integrity Test.

Page 2 Data
Shut In Pressures (drillpipe and casing) - taken from the driller when the well has been shut in and the pressures stabilized.
Pit Volume Increase - ie pit gain due to the kick.

Pit Volume Total - this should be the total of the pits that will be used to make up and circulate the kill mud. This volume is required to determine how much barite is needed to increase the mudweight.
True Vertical Depth - taken from the system; it may need to be edited if the kick does not occur at the bottom of the hole.
Kill Method - 1 for Drillers, 2 for Wait and Weight, 3 for Concurrent.
Stroke/MW increment - for the Drillers and Wait and Weight methods, this is the stroke increment for the pressure step down while the kill mud is being circulated to the bit (as the kill mud goes from surface to bit, the pressure should be reduced from the Initial to the Final Circulating Pressure). For the Concurrent Method, it is the incremental increase in mudweight that should be entered; the program will then determine how many circulations will be required.

Options
F7 to calculate:
Initial Circulating Pressure
Kill Mudweight
Final Circulating Pressure
Maximum Allowable Casing Pressure
Total Barite Required
Sacks of Barite to Add
Fluid Invasion Type
Trip Margin Mudweight - ie kill mudweight + the increment necessary to give the defined pressure overbalance
Trip Margin Sacks (of barite)

F3 for Table: Driller/Wait and Weight - a table of strokes vs pressure for the pressure step down (Initial to Final) as the kill mud is circulated to the bit. Concurrent - for each circulation required, the final pressure is shown.
F2 to Print: prints out the table above.
F8 for Plot: shows pressure reduction vs strokes for the above step down.
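As background to the F7 calculations, the following is a minimal Python sketch of the standard well control relationships (kill mudweight, initial and final circulating pressures). It is illustrative only and is not the program's own code; units and the example figures are assumed.

def kill_sheet(omw, tvd, sidpp, scr_pressure):
    # omw in kg/m3, tvd in m, pressures in kPa
    # sidpp = shut-in drillpipe pressure, scr_pressure = pump pressure at the slow circulation rate
    kmw = omw + sidpp / (0.00981 * tvd)        # kill mudweight
    icp = scr_pressure + sidpp                 # initial circulating pressure
    fcp = scr_pressure * kmw / omw             # final circulating pressure
    return round(kmw), round(icp), round(fcp)

print(kill_sheet(omw=1200.0, tvd=2500.0, sidpp=3000.0, scr_pressure=5000.0))
# kill mud ~1322 kg/m3, ICP 8000 kPa, FCP ~5510 kPa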

vi. Stuck Pipe

Determine Depth Stuck
The program will determine the depth of the stuck pipe from the inputs: weight of pipe (here, enter the weight/unit length of the drillpipe - the program assumes that there will be no stretch in the drill collars), pipe stretch, initial string weight and stretched string weight. A first-principles sketch of this estimate is shown after the Maximum ROP section below.

Determine Sticking Mechanism
This program is run from windows. By answering the questions related to pipe movement prior to and after sticking, and rotation and circulation after sticking, the program will determine which type of sticking mechanism is involved (pack off, differential or well bore geometry) and give the correct procedure with which to free the pipe. This program is based upon the Amoco TRUE (Training to Reduce Unscheduled Events) course for stuck pipe.

vii. Directional Analysis
This is the survey database (ie the same as Databases-Surveys).

viii. Casing Design
The user must input the casing specifications obtained from the manufacturer. The program will then calculate the maximum allowable pressures exerted on the casing to assist in the planning of casing programs. Collapse and Burst pressures are critical in casing design.

ix. Maximum ROP
This program calculates the maximum allowable ROP before the formation is subject to breakdown due to the extra density caused by cuttings overloading the drilling fluid. The Cuttings Transport Ratio is the ratio of cuttings velocity to annular velocity and is typically 0.7. The Maximum Mud Density is the equivalent fracture gradient of the formation and is taken from the realtime system. You may want to edit this value should you be concerned with fracturing the formation at a weaker zone, the last casing seat for example, rather than at the current depth.
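Returning to the Determine Depth Stuck calculation above, the following is a first-principles Python sketch (assumed steel constants, metric units; the program's exact method may differ).

STEEL_E = 210e9        # Young's modulus of steel, Pa (assumed)
STEEL_DENSITY = 7850.0 # kg/m3 (assumed)

def depth_stuck(pipe_weight_kg_per_m, stretch_m, pull_increase_n):
    # Free length of pipe above the stuck point, assuming all stretch is in the drillpipe.
    # pull_increase_n = stretch-test string weight minus initial string weight, in newtons.
    area = pipe_weight_kg_per_m / STEEL_DENSITY   # steel cross-section, m2
    return stretch_m * STEEL_E * area / pull_increase_n

# e.g. 29 kg/m drillpipe, 0.5 m of stretch for an extra 200 kN of pull
print(round(depth_stuck(29.0, 0.5, 200e3)), "m of free pipe")   # ~1939 m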

x. Leak Off Test This program will read and record the pressure changes realtime and, at the end of the test, will calculate the fracture pressure and equivalent mudweight. By default, the casing pressure sensor will be the one monitored for pressure readings, so you should ensure that the test is being conducted on the same manifold as your sensor. Required information:Sampling interval, ie how often data will be recorded. Input by the user, typically 5 seconds. TVD - taken from realtime system hole depth - this may need to be edited for the depth of the test Mud Density - taken from the realtime system - this may need to be edited to show the value determined by the mud engineer and thus the value to be used for calculations. Mud Pump or Auxiliary pump Pump number to use - the pump output can then be determined from the pump data file. Volume or Time - the parameter that the pressure will be plotted against. If Mud Pump is selected above, you can select either volume or time so that the pressure will be plotted against either the mud volume pumped or time; if Auxiliary is selected, you have to select time here, since you will not have a stroke indicator. Once all the data has been entered, press F3 to start. The program will then start collecting data based on the sample interval selected. Once the test has finished, press any key to stop the data acquisition. Press F7 to calculate. The program will determine the maximum pressure recorded, and from that it will calculate the Fracture Pressure and Equivalent Mud Density. Use F2 or F8 to produce a printout or plot of the test.

xi. Surge Swab This program is used to determine the pressures induced by the defined maximum and minimum running speeds of the pipe. Thus, a safe speed can be deduced in order to avoid excessive pressures. Required information:Bit depth and hole depth - read from the realtime system, editable if required. Current surge/swab pressure - read from current recorded pressures, editable if required. Current Flow In - read from realtime system, editable if required. Use Current Profile - ie current hole and pipe profiles, the user should select Y(es). Maximum and Minimum running speed - limits defined by the user. Negative values should be used in order to calculate swab pressures. For example, for surge pressure, the minimum running speed may be 5m/min and the maximum 50m/min. For the same limits, the swab calculation requires the minimum to be set at -50m/min, and the maximum at -5m/min. Current running speed - read from realtime system, editable if required. Press F7 to calculate the maximum and minimum pressures. Press F2 to print the data out. Press F8 to produce a plot. The plot will be pressure against running speed and will show the pressures against the max/min limits defined together with the current pressure/running speed situation. xii. Pressure Test This program operates in exactly the same way as the LOT program described above. The program is intended for use during BOP pressure tests or casing integrity pressure tests. As in the Leak Off program, pressure is recorded realtime against time or volume pumped. The end result will simply be the maximum recorded pressure, with a printout and plot available. xiii. Rheogram The program reads in the Viscometer Theta values stored in the equipment table, but these can be edited if required. Any of the 600/300, 200/100 or 6/3 pairings can be used. Press F7 to calculate:- Plastic Viscosity and Yield Point for the Bingham hydraulic model. Shear Thinning Index and Consistency Index (n and K) for the Power Law Model. Press F8 for a plot:Log Theta against Log RPM to illustrate the derivation of n and K.

2.9 GEOLOGY MENU

a. Ratio Analysis
This program will calculate the ratios of C1 against all other individual hydrocarbons in order to produce a Pixler plot. These ratios have a proven usefulness in determining the likely content of any particular zones that gave gas shows.

The user should select the background depth and gas show sample depth. On entering these depths, the plot name is produced automatically from the well name and the depth; it may be changed if required. The gas value above background will be read automatically from the database. Press F7 to calculate the Pixler Ratios.

There is a facility that allows you to change the vertical scale of the plots. Remember that this is a log scale and that, for each ratio, anything less than 2 indicates an unproductive zone. There is, therefore, generally no need for the minimum value to be any less than 1. Press F8 to produce the plot, which can then be accessed from windows-reports-XYZ plots. For information on how to interpret Pixler Ratio plots, refer to the help file or the chromatography section of this manual.

A useful addition to the Ratio program is the ability to view sample ratio plots with an interpretation provided. To use this facility, press Alt F7. The plots will have the name gas_eval.plot and can be viewed from windows in the same way that you view survey.plot, ie press your mouse on the red no entry sign to scroll through each plot in turn.
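A minimal Python sketch of the ratios involved, applying the "less than 2 is unproductive" rule of thumb quoted above; the gas values in the example are illustrative only.

def pixler_ratios(c1, c2, c3, c4, c5):
    # Background-subtracted chromatograph values for methane through pentane
    ratios = {"C1/C2": c1 / c2, "C1/C3": c1 / c3, "C1/C4": c1 / c4, "C1/C5": c1 / c5}
    for name, value in ratios.items():
        flag = "unproductive" if value < 2 else "plot on the log scale and interpret"
        print(f"{name}: {value:.1f}  ({flag})")

pixler_ratios(c1=5000.0, c2=800.0, c3=300.0, c4=120.0, c5=40.0)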

b. Coal Bed Methane
This program is solely for use in Desorption Analysis of coal samples. From data manually recorded and entered into a data file, the program will produce a desorption report together with a number of desorption and associated plots.

Firstly, a data file has to be created (using the editor) for each coal sample in 3:/datalog/cbm. The information contained in this file, and the format of the information and data, is critical in order for the software to run. You should refer to the help file for the information and format required. Once the file is completed, you can load the data by accessing the cbm program, entering the data filename and selecting F3.

Press F7 to calculate:- for each of the 3 calculation methods (US Bureau of Mines Direct, Smith and Williams, and Decline Curve), the Lost Gas (gas desorbed before the sample was sealed in the canister) and Total Desorbed Gas will be calculated.

Press F2 to produce a report of this data together with cumulative desorption results. This report can be output to either screen or printer. If screen is selected, a file called cbm.rpt will be automatically created in 3:/tmp.

NOTE that for the Smith and Williams calculations to be correct, you must enter the VCF value before the calculation. This Volume Correction Factor (between 0 and 2.5) is determined from a plot of STR (Surface Time Ratio) against LTR (Lost Time Ratio), which are also calculated when you press F7.

Press F8 to produce a plot. 3 plots will be created:
cbm_USBM.plot - USBM Direct - Lost Gas Regression
cbm_des.plot - Cumulative Desorption against Elapsed Time
cbm_decl.plot - Decline Method - Cumulative Desorbed Gas against Time
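As a rough illustration of the idea behind the USBM Direct lost-gas regression (cumulative desorbed volume against the square root of elapsed time, extrapolated back to time zero when the sample was sealed), the following Python sketch is not the program's implementation and the example readings are invented; the required data file format is described in the help file.

import math

def lost_gas_estimate(times_min, cum_volumes_cc):
    # Least-squares line of cumulative volume vs sqrt(time); the magnitude of the
    # (negative) intercept at time zero approximates the lost gas.
    x = [math.sqrt(t) for t in times_min]
    y = cum_volumes_cc
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    slope = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
             / sum((xi - mean_x) ** 2 for xi in x))
    intercept = mean_y - slope * mean_x
    return -intercept   # lost gas, cc

print(round(lost_gas_estimate([15, 30, 60, 120], [40, 75, 130, 210]), 1), "cc lost gas")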

c. Overburden Program (overburd)

In order for Formation Pressure and Fracture Gradient to be calculated, the Overburden Gradient must be known. The overburden program calculates the gradient for each log interval and will update it in the database. The program can normally be run directly from the command line with no user input required. However, the first time that the program is run, the command overburd +m (for manual) should be used. This allows you to specify the start and end depths and is, in fact, the version of the program that is run from the QLOG menu.

The overburden gradient is calculated from the Bulk Density. There must therefore be bulk density values, for each record in the database, entered into the JW reference column. This data may be imported from offset wireline data or measured by the mudlogger at wellsite.
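A minimal Python sketch of the principle - the overburden pressure is the cumulative weight of the overlying bulk density column, usually expressed as an equivalent mud density. Metric units are assumed and air gap / water depth handling is ignored here; the program itself takes care of those details.

def overburden_emw(depths_m, bulk_densities_kg_m3):
    pressure_kpa = 0.0
    previous_depth = 0.0
    results = []
    for depth, density in zip(depths_m, bulk_densities_kg_m3):
        pressure_kpa += 0.00981 * density * (depth - previous_depth)   # add this interval's weight
        results.append((depth, round(pressure_kpa / (0.00981 * depth))))  # gradient as EMW, kg/m3
        previous_depth = depth
    return results

print(overburden_emw([500, 1000, 1500], [1900, 2100, 2300]))
# [(500, 1900), (1000, 2000), (1500, 2100)]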

Running the program for the first time:
Ensure that the bulk density value in the equipment table is set to zero and that the Bulk Density column in the database has values for every record over the required interval.
Enter the command overburd +m, or enter the program from the QLOG menu.
Enter your start depth as the start of the database. Your end depth should be the depth of the last bulk density value entered into the database.
Choose whether to update the equipment file and database after the calculation. Choosing to calculate to the end of the database will calculate past the end depth entered.
Press F5 to read in the bulk density values from the database.
Press F7 to calculate the overburden gradient.
When the calculation is done, the equipment table will be automatically updated with the Bulk Density (equivalent for the present calculated overburden), which will then be used for subsequent realtime calculations. If you do not select to update the database, the program will just display the calculated end result for the present end depth.

After the first proper calculation (detailed above) run has been completed, the program should be run at regular intervals while drilling. This should be done from a command line with overburd. The calculation will be automatic - no manual input of depths is required, the program automatically continues from the depth of the last calculation. Even better, the logger can set the system so that the program runs automatically at a predetermined time interval, by using the cron timing facility (see Advanced QLOG).

If you want to recalculate for the whole database, run the program as in the first 2 steps above, using the overburd +m option.

d. Overpressure Program (overpress)

This program enables you to calculate the Formation Pressure and Fracture Gradient. Before using this program, the user should be fully familiar with the theory and techniques of Abnormal Pressure analysis.

The program requires certain information to be in place before running. To calculate Formation Pressure: the Overburden Gradient needs to have been calculated for the given depth interval, and the Normal Formation Pressure for the given region needs to be entered into the equipment table. The user can then determine a Normal Compaction Trend based upon a given parameter, normally the Corrected Drilling Exponent. The Fracture Gradient calculation is based upon the calculated Overburden Gradient and the calculated Formation Pressure, together with a Poisson's Ratio.

These calculations are performed offline for a depth interval already drilled. When the calculations are completed, the Poissons Ratio together with Pressure Slope and Offset (relating to the Normal Compaction Trend) are written automatically to the equipment table allowing for realtime calculation of the formation pressure and fracture gradient.

The parameter most commonly used to determine a Normal Compaction Trend is the Corrected Drilling Exponent using Jordan and Shirley's formula. The limitations of this parameter, however, have to be recognized. A trend can normally only be accurately determined for homogeneous shale or claystone. Varying hydraulics, formation, bit type, size and wear will all cause changes to the DCexp trend. Always consider DCexp along with changes in cuttings character, mud temperature and resistivity, connection gas, background gas, torque and drag of the drillstring etc.
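For reference, the following is a minimal Python sketch of the standard d-exponent and Eaton-type relationships that this kind of calculation is typically based on. The program's own constants, exponents and unit handling may differ, so treat this as illustrative only.

import math

def dxc(rop_ft_hr, rpm, wob_lbs, bit_in, normal_grad, ecd):
    # Corrected drilling exponent; normal_grad and ecd in the same EMW units
    d = math.log10(rop_ft_hr / (60.0 * rpm)) / math.log10(12.0 * wob_lbs / (1e6 * bit_in))
    return d * normal_grad / ecd

def eaton_formation_pressure(obg, pn, dxc_observed, dxc_normal, exponent=1.2):
    # All gradients as EMW (kg/m3); dxc_normal comes from the Normal Compaction Trend
    return obg - (obg - pn) * (dxc_observed / dxc_normal) ** exponent

def fracture_gradient(obg, pore, poissons_ratio):
    return (obg - pore) * poissons_ratio / (1.0 - poissons_ratio) + pore

print(round(dxc(20.0, 120, 40000, 8.5, 1030.0, 1150.0), 2))     # DCexp ~1.83
p = eaton_formation_pressure(obg=2200.0, pn=1030.0, dxc_observed=1.2, dxc_normal=1.5)
print(round(p), round(fracture_gradient(2200.0, p, 0.4)))        # ~1305 and ~1902 kg/m3 EMW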

To use the program:
Select the parameter you wish to use for the trend line from the first menu - normally DCexp.
For the Start and End depths of the interval that you intend to update calculations for, enter the value of the Normal Compaction Trend (this value is determined from the scale of the source, ie DCexp). Use ball park figures initially - you will probably have to run this several times before you have the NCT in exactly the position that you want. The end depth will be the depth to which the data is calculated and updated, so extrapolate your trend if you are in a transition zone and it will give you the calculated pressures within that zone.

Enter the Start and End depths of the plot (in most cases, these will be the same as the NCT start and end depths), and the horizontal plot scales (this is the Equivalent Mudweight and would normally be left at the default of 800 to 2500 kg/m3 EMW).
Select the calculation method, Eaton or Zamora (otherwise known as the Ratio method). Eaton's is the preferred method.
Enter the Poisson's Ratio. This is only used in the Fracture Gradient calculation. Properly, this should be a depth based value determined from offset data (overburden, formation pressure and fracture gradient). If this is not available to you, as a fall back you can use the lithologically determined ratios shown in the help file.
Select the Average Size. For example, if your database was every metre and you selected an average of 10, the calculated data for each record in the database would be averaged over the previous 10 records.
Select the Interval Size. This does not affect the calculated data in the database, but determines the frequency of data points output to the plot. If 10 was selected, for example, only every 10th record would be output to the plot. This means that the XYZ plot created (these have a limited memory capability) is capable of covering a greater depth interval.
BEFORE calculating and updating the database, select F8 to produce an Overlay Plot - this will be a plot of the DCexp together with your selected Normal Compaction Trend and is called overlay.plot, accessed from Reports-XYZ plots. You may have to re-select your Trend start and end values before you are completely happy with its positioning.

Once you are happy with your Normal Trend selection:
Select whether to Update Database and Equipment Table. This would write all of the calculated formation pressures and fracture gradients to the database and would also write the following parameters to the equipment table to allow for realtime calculations: Poisson's Ratio, Pressure Slope and Pressure Offset (based on the compaction trend).
Calculate to end of database - this would calculate beyond the End Depth already selected.
Press F7 to calculate. This will update your database and equipment table and also produce a pressure profile plot, formation pressure and fracture gradient against depth, called press.plot.

NOTE that the parameters written to the equipment table allow for realtime calculations of formation pressure and fracture gradient based on your Normal Compaction Trend. Should there be a lateral shift in this trend, caused by such things as a change in lithology, a bit change or a change in hydraulics, then it is quite legitimate for you to change the pressure offset in order to get accurate realtime calculations. This facility should only be used for these types of shift changes, not for changes in your drilling exponent caused by a formation pressure change (ie do not change the pressure slope). You should only change the pressure offset, which effectively shifts your Normal Compaction Trend, if you are fully confident of what your formation pressure is (this only comes with experience and by taking into consideration all pressure indicators); you can then alter the pressure offset so that you get the realtime calculations that you want.

Should you have an interbedded lithology sequence, for example sand and shale, then your Normal Compaction Trend is effectively shifting for each lithology change. It would therefore be virtually impossible to keep your realtime calculations accurate. In this situation, so that you have accurate information on display for engineers and geologists, it may be advisable to use the override facilities in the equipment table.

Normal Compaction Trends
For calculation purposes, each interval has to be calculated against a single NCT. However, if you were producing overlay plots for a final well report, then multiple trends can be selected. This may be due to a number of causes: shift changes due to bit changes; a change in hole size; a change in hydraulics or drilling parameters; unconformities (this may also produce a different NCT gradient); etc.

Multiple trends can be selected by editing the plot data file /datalog/plots/data/trend.dat, which would normally contain the start and end depths plus the NCT values that you selected in the overpress program. For additional trend sections, simply add the depths and NCT values required:

50   350   1.26  1.42   #NCT 1, 50 to 350m
350  700   1.56  1.68   #NCT 2, 350 to 700m
700  1100  1.44  1.60   #NCT 3, 700 to 1100m

Again, this facility can be very useful for providing detailed plots for final well reports but cannot be used for calculation purposes.

e. Calcimeter (calcim) Before using this program, the user should be familiar with the theory of calcimetry and with the hardware involved by referring to the extensive help file. The program is run entirely from the windows interface and the user will see tests graphically displayed as they are performed.

Components:-
DGH pressure module, transducer
Sample jar
Magnetic stirrer

2 programs are running when using the calcimeter software:
calcim - interface, controls and analysis
calcim_drvr - controls the DGH module, data acquisition and timing

Running the program:
Similar to the chromatograph, the first thing the user must do is select the correct serial port that the calcimeter pressure sensor is connected to. This is done from Setup - Port. When you have communication, you should see an idle status.

Calibrate the Injection Pressure - this needs to be done so that the pressure change caused by the acid injection is ignored during the analysis of sample reactions. Select Calibrate - Inject; this will start the run automatically; inject your volume of acid, normally 20cc. You will see the pressure increase on the graph; when the reaction is complete, click on Stop. The inject pressure will then be recorded in Setup - Settings.

Calibrate for 100% Limestone (ie pure CaCO3). Ensure the vessel is clean of acid and dry. Place your limestone in the vessel (normally 1gm), ensure it is sealed, and have your acid ready to inject. Click on Start - the status will read Inject, Wait, ie the program is waiting for the pressure to increase above the inject pressure before it starts analyzing. Inject the acid; the status will change to Run. When the reaction is complete (normally around 30sec) click on Stop. If the pressure on the display goes off scale, wait for about 1min before stopping. Click on Calib 100% Sample and confirm. The highest pressure recorded will be stored in Setup - Settings as the Carbonate Pressure. You should repeat the process to ensure the calibration is accurate.

Calibrate for 50% Limestone / 50% Dolomite - again, ensure that the vessel is clean, dry and sealed, with the acid ready to inject. Follow the same procedure as above to run the sample. When the reaction is complete (you will have to extend the sample run time in Setup - Settings), click on Stop. Click on Calib 50/50% Sample and confirm.

The software determines the calibration by monitoring the pressure change. When the pressure is at 50% of the Carbonate Pressure, the slope of the curve is determined. Obviously, anything above that pressure is regarded as being due to the dolomite reaction. The slope is then used by the sample analysis process to determine when the limestone reaction is completed and the dolomite reaction begins. This allows for automatic analysis, but the value can be adjusted or overridden if required.

Set Up Parameters
Run Time - maximum 30 minutes; can be adjusted at any time, even during a sample run.
Inject Pres - pressure created by the injection of the acid; this will be subtracted from the pressure due to sample reactions.
Carbonate Press - maximum pressure determined by the 100% Limestone calibration.
Dolomite Adj - determined during the 50/50 calibration, this is a compensation factor to account for the different reactions of limestone and dolomite.
Slope - determined during the 50/50 calibration and used for automatic analysis, this is the point at which all of the limestone has been consumed.
50/50 Sample - this is the % of limestone in your 50/50 sample and should be set to 50.
Zoom Factor - allows you to change the time scale during a sample run.

Analysing Samples
Once a sample run has been completed and you have clicked on Stop, you have the option of accepting the software calibrations and automatic analysis, or overriding this and doing your own analysis.

Automatic - once the run is complete, select Analyse - Perform Analysis. The percentages will then be determined automatically based on the calibration settings and displayed in a sub menu. Here, you can enter the depth interval of the sample and choose to accept or edit the calculations (on the graph of the sample run, it is quite easy to read off the limestone/dolomite percentages). When you click on OK, the data for that sample will automatically be written/updated to /tmp/calcimrun.

Manual - select Analyse - Select Break; immediately move your mouse to the point on the curve that you consider to be the break point between the limestone and dolomite reactions (effectively, you are overriding the slope in the automatic calibration) and click the left hand mouse button. Now run Perform Analysis as above. The data will have to be manually entered into the database if required.

Saving Individual Sample Runs
The actual sample run (ie the resulting graph, calibrations and analysis) can be saved to a file if required. Select Sample - Save, enter a filename and save as you would a chromatogram. You also have Load and Delete options for sample files. The file will be stored in /datalog/calcim_dat.
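As a rough illustration of the proportionality the analysis relies on (pressure rise measured against the 100% limestone calibration, split at the break point between the two reactions), the following Python sketch ignores the Dolomite Adj compensation factor and is not the program's own method; the pressures are illustrative only.

def calcimetry_percentages(inject_p, carbonate_p, break_p, final_p):
    # Pressures in the same units; break_p is the pressure at the limestone/dolomite break
    span = carbonate_p - inject_p                    # full-scale response of a 100% limestone sample
    limestone_pct = 100.0 * (break_p - inject_p) / span
    dolomite_pct = 100.0 * (final_p - break_p) / span
    return round(limestone_pct, 1), round(dolomite_pct, 1)

print(calcimetry_percentages(inject_p=2.0, carbonate_p=52.0, break_p=22.0, final_p=37.0))
# -> (40.0, 30.0): roughly 40% limestone, 30% dolomite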

2.10 OTHER MENU

a. Communications

beep
Any user on the network can be 'beeped' or alerted to a message. The message will appear on all consoles that that user is logged on to, along with an audible alarm at the time of sending the message. In the QLOG menu, simply enter the userid of the user you want to send a message to, type in your message and press F7. From a command line: beep <user name> "message"

APB
All Points Bulletin. This works in exactly the same way as beep, but the message will appear on every console on the network. The message will only appear when the enter key is used, or a program is exited.

chat
This program enables up to 5 different users on the network to communicate with each other. On entering the program, your userid is automatically displayed. The usual procedure is to beep the users you wish to talk to, with a message requesting that they enter the 'chat' program. When you have finished talking, press ctrl_e. Similar to the chat program, but not in the QLOG menu, is 2chat. This format is specifically for 2 users to talk to each other, providing each with more space in which to enter their messages. To enter the program, simply enter the command 2chat.

mail
This allows any user to send and leave mail for any other user who can log in to the system. As soon as that user logs in, a message will appear to indicate that there is mail waiting for them. There is a detailed help file with this program, but the essential operations are as follows:
To send mail - select send, enter the user name to which you want the mail to go; type in your mail message; enter on to a new line, type in 1 full stop, enter - your mail will be sent (use 3 full stops to cancel the message).
To read mail - position your cursor on the message you require; select read.
To delete mail - position your cursor on the correct message; select delete.

If your mail message is long, and you are sending it via a modem, it is best to write the message first, rather than writing live on air. Use the editor to create a file containing your message. Transfer this file to the remote station, then use the following command: mail se userid < filename

who is online (Who)
This will tell you all of the users logged on to the system at any one time, and where they are logged in, ie node and console. If a user has logged in to your network from a remote station, you will be able to tell who it is and 'beep' a message to them if so required.

qterm
This is the communications program allowing the user to connect to other systems around the world via a modem attachment. Once the system is set up correctly, it is simply a matter of entering qterm and a 'ctrl a' (to display the qterm menu) and 'd' for a dialing directory. Hit the enter key on the number you wish to call and the computer will dial and connect, giving you a login prompt from the remote system. This procedure is detailed more thoroughly in Section 8 of this manual.

b. Spreadsheet The spreadsheet used by QNX is called PCC and is very similar to Lotus 123 in use (ie / to get menus). Do not use this program on the server node as it has a tendency to freeze up the system. See Pcc manual for details on use. This can be used for morning reports.

c. Word Processor
QNX uses a word processor called Penpal. This processor may not be as sophisticated as MSDOS processors such as Word, but it is menu driven and very easy to use. Each sub menu contains a help file explaining the function of each option; to access it, press F1 from the particular menu. The escape key will allow you to move between the main menu and your document; the option is always displayed in the top left hand corner of the screen. To select different menus/menu options, select the letter indicated. The important menu options are shown below:-

Main Menu

Disk Access
Load Document - tab between files and directory structure to select your file; press enter
Save Document - enter a filename, or accept the one displayed
New Document - will give you a blank screen to start your document; you will need to save on completion
View Disk Files - allows you to tab between directories and list the files
Rename Document
Delete Disk File

Print Options
Printer Type - normally standard is selected
Output Device - ie node and portname
Print Page - will print the page you are currently located on
Print Document - prints the whole file
1st Page Number - if you want the file pages numbered
Lines to Skip - will leave a margin at the top of each page
Copies - how many copies you require

Global Options
Penpal Format
ASCII Format
Check Spelling
Length of Page - normally 54 (Canada) or 58 (Europe)
Margins - allows you to set tab spacing

Format Paragraph
Single - single line spacing
Double - double line spacing
Left - paragraphs aligned to the left
Centre - central alignment
Right - paragraphs aligned to the right
Justify - paragraphs aligned to left and right

Block Features
Bold - heavy text
Underline
Move - to move a block of text
Grab - to hold a block of text in memory and restore it elsewhere or in other files - akin to cut and paste
Delete - remove a block of text
Restore - to restore Grabbed or Deleted text
(when using these functions, follow the instructions given at the top of the page)

Quit and Exit

d. Utilities
These facilities are simply QNX programs that give useful information about the system. The program names are shown in brackets and are also detailed in the Basic QNX commands section of this manual.
System Activity (sac) - shows the processor activity used by each priority 1 to 15 (esc to exit).
Drive Usage (query) - gives details on disk space used and free.
Task Display (tsk) - gives a list of all administrators and programs running.

e. Unit Converter
As long as the program convert is running, this program will enable you to convert, for any particular type of measurement, one type of unit to another (eg psi to kPa, tons to kN etc).
Choose the type of measurement required; press F7.
Select the original units you wish to convert from; press F7.
Select the new units you wish to convert to; press F7.
Enter the value (to several decimal places, for accuracy); press F7.
Decimal places are very important when changing from larger to smaller units. For example, changing 1000 kg/m3 to pounds per gallon: if you enter 1000, the result will be 8 ppg; if you enter 1000.00, the result will be 8.35 ppg - a significant difference!

f. Help Files
This is a complete listing of all the help files currently held on the system. Many are the files accessed from programs by pressing F1; others cover other material. Select the file and press F7 to view.

g. Editor
Only the essential and common operations will be dealt with here. For a detailed guide on how to use the editor, the user should refer to the help file or to the back of the QNX Operating System manual under the section title "Full Screen Editor".

The editor can be used to modify text files in QLOG and to perform simple word processing. In general, however, the word processor Penpal will be used for all reports and document type editing. The editor is used, typically, for editing files such as system initialization files, XYZ plot and data files, and is also used in the mail system. The editor can be invoked from the QLOG menu system from "Editor" or by entering ed from a command line:

ed filename (include directory path if necessary)

If the file does not already exist, it will automatically be created. Similar to the editor is the big editor, or bed. This works in exactly the same way as the editor, but is intended for files that are too large to be loaded into the editor.

Command Mode
The command mode (shown as an orange bar along the top of the screen) allows the user to enter commands to the editor. If you are in the text editor mode of the editor, the command mode is accessed by pressing the grey plus key located on the keypad. By pressing enter, you will be taken back to the text editor. When the editor is first entered, the user is automatically placed in the command mode. If the file is new, ie blank, press F1 to create a line, allowing you to enter text. If there is already text, you can simply press enter to access the file.

A command in the editor can be repeated in order to force the command. For example, you could not quit a file that has been modified by entering a single q. However, a double qq will force the command, and the file would be left without having saved the changes. To issue a command, you should press grey + to access the command bar, type in the required letter or command and press enter.

The following are some of the more common commands and functions.

w - Save the file using the current file name.
r <filename> - This will read a file in and place it after the current cursor position. This is used to merge files together.
e <filename> - Edit file. Loads in a new file and clears the current one. If changes have been made to the old file without having been saved, then 'ee' must be used.
ee <filename> - As above, but will abandon any changes to the current file. Entering 'ee' without a filename will load the same file as previously edited - this is useful if you wish to backtrack to the last saved version.
q - Quit the file once changes have been saved.
qq - Quit and abandon all changes since the file was last saved. This should be used if you have made a mess of the edit and wish to get out while you are ahead!
ww <filename> - Writes the file to a new temporary filename. This is useful to print out the whole file.
/text/ - Search for the word 'text' and leave the cursor on the found word. To repeat this, hit F9.

Function keys:
F1 Insert Line - Used to enter a new file; to insert a line after the one you are currently on; and to create a line at the end of a file.
F2 Insert Line - Inserts a line prior to the line that you are currently on. By pressing twice, the line would be removed again.
F3 - Deletes the current line.
F4 - If the fill option is used (Alt f), F4 will format the current paragraph according to the Left and Right margins.
F5 - Splits the current line at the cursor position.
F6 - Combines the following line with the current line at the cursor position.
F7 - Highlights a section of text.
To highlight a whole line, simply press F7 once.
To highlight a section of a line, press F7 twice (quickly) to define your starting point, move your cursor to your end point and press F7 once.
To highlight a column, move your cursor to the bottom left of the column, press F7 twice, move your cursor to the top right of the column and press F7 once.
F8 - Brings up options such as copy, move, delete - for use with a section highlighted by F7 above.
F9 - Repeats the last command made in the command mode - useful if a search for text is being made.
F10 - Places you into command mode and displays the last command.

Other Keys:
Ins - Ed will start in overtype mode. Ins will allow text to be inserted.
Home - Takes you to the beginning of the file.
End - Takes you to the end of the file.
Ctrl Left/Right Arrow - Moves one word left or right.
Ctrl F2 - Brings back the last deleted item.
Alt r - Changes the file mode from QNX to MSDOS to POSIX (UNIX).

SECTION 3 - click here to go to main menu MISCELLANEOUS APPLICATIONS

3a Windows
   Applications
   Windows, use and properties
   File Manager

3b Sensor Technology
   Analog sensor electronics
   Digital sensor electronics

3c Standard Configurations
   Elcon Barrier
   Standard DAU

3d Plotters and Printers
   AMT 5500 Dot Matrix
   Modes
   Communication
   Emulations
   Plotter Setups
   Ribbon/Alignment
   Maintenance
   Trouble Shooting

3a. WINDOWS
QNX WINDOWS provides a graphical interface well suited to the QLOG data acquisition system. It provides a flexible, easy to use environment with which to make the most of the QLOG software. QLOG was designed to be run in a windows environment, therefore allowing the user access to the graphical facility at all times, and greatly improving the presentation of the system.

Many QLOG programs can only be run under windows as they require the graphical interface; these include the chromatograph calibration and tweaking, calibration and use of the calcimeter, the overpressure program (since it requires the viewing of plots), and the plotting facility of many of the engineering and directional programs etc. It is also of great benefit in providing screen output to realtime and/or realdepth plots - of use to both ourselves and the client. It also provides a specially designed text display screen which gives a far better display of information for use on the drillfloor in comparison to a normal text display. With the addition of graphical well profiles that can be used in conjunction with realtime programs, the QLOG system is rapidly becoming windows dependent.

Normal text or command mode can still be accessed from within windows to allow the user normal access to the operating system.

N.B. Do not start any administrators from within windows as they will be stopped as soon as the shell is quit or windows itself is stopped.

To start windows
Type windows from a command prompt. After a few seconds a grey background and a red arrow representing the mouse (cursor) will appear. If the QLOG menu does not appear, click, with the right hand mouse button, on the background for the Workspace menu. From here, click on the programs menu. From here, qlog can be selected (the menu appears the same as in text mode). Each application or program selected runs in its own base window, which can be resized, moved, or iconed. Working on one window does not affect the others, so the user can multi-task.

The Workspace menu also enables you to access properties from where you can change, or disable, the screen saver:Click on the 4th button from the right to access the screen saver window; change the time or select zero to disable; click on apply to save the changes.

Windows Applications
Buttons: used to select options or programs, issue commands, and accept (Apply) or ignore (Reset) changes made.
Scrollbars: some applications have large data files or displays and therefore cannot fit into one window. The scrollbars (they will automatically be present on a window if they are applicable) allow you to move through the file, either vertically or horizontally. Simply click on the scrollbar, or drag it with your mouse to the desired position.

Mouse: When the mouse is moved across a desk, the pointer on screen will move in the same direction. To perform most tasks, you will need to move the pointer to an object and 'click' on it with the mouse button. There are normally 2 buttons on a mouse: Left:- generally used to select objects or issue commands Right:- display hidden menus, choose from them.

The mouse buttons can be used in different ways: 1. One click to select an option, issue a command 2. Double click quickly to select and open a file 3. Drag an object - to resize or move (not all windows have this facility)

The Mouse Pointer has 2 states:
1. an arrow - normal operating state
2. a stopwatch - busy state, ie when drawing a plot to the screen. You cannot select other options while the mouse is busy.

Menus
Menus can be 'push pin' menus: pin them to the workspace to keep them displayed, or unpin to remove them (by clicking on the pin with the mouse).

Items followed by an 'arrow head' symbol indicate that a submenu exists. Dimmed items are not selectable (not loaded, or not appropriate).

Windows
Firstly, when a window is open, the state of the bar across the top of the window indicates the current state of the window. This bar displays which task is being run, or which file is open, in that particular window. If the bar appears raised and light in colour, you cannot access the file; to change this, click on the bar with your left mouse button - the bar will then become inset and dark in colour. You are then able to access the file. Once an application is open, click on the top bar with your right mouse button for the menu options, or click on the window itself (right mouse again) for the internal window menu.

Window Menu
Close - this will iconify the window, clearing screen space; the program still runs in this state
Fullsize - for graphic windows only, this will make the window full screen size
Properties - shows you the current state, size etc of the window - these are generally left as default
Back - will bring the window hidden beneath the current one to the front
Refresh
Quit - will stop the application from running and close the window
Print picture / Print window - select the printer port for this facility

Using a Base Window
An application's base window has everything you need to know about running that program: data, controls, display and commands.

Header - application name, ie file or program
Footer - has messages, current status
Window menu button - produces the menu
Control area - buttons to display files, view and edit etc
Base window menu - Close, Back, Fullsize, Refresh, Quit
Scrollbar - allows scrolling through long files
Pane - the workspace itself
Resize facility - drag the window corner to increase/decrease the window size - this facility is not available for the normal QLOG text screens
Move window - click and hold on the window edge, drag with the mouse to a new position.

Viewing and Changing Properties of Windows
You should generally only be concerned with changing the colour or text size, for example. The other properties displayed should be left at their default settings.
1. Click on the window with the right mouse button
2. Select properties
3. Select fonts and colours - changing from 12 to 15 point will increase the text size; different colours for the window can be scrolled through
4. Apply and Save

Icons
These allow you to make room on the workspace without closing the programs currently running. An icon will have a picture or text illustrating which program it is. To icon a current window, either click on the top left hand corner of the window, or select Close from the window menu. Double clicking on the icon will reopen the window. A single click with the right hand mouse button will display the window menu; here, you will have the option of re-opening, quitting etc. There is a computer memory limit to how many windows you can open at one time (5-8 depending on the memory available).

Programs menu
This is accessed by clicking on the workspace background with the right hand mouse button. From this menu, there are several applications available:-
File Manager - see below
Shell - gives you a text screen and command prompt as if working on a normal console. You have to log in to the shell when first opened.
QLOG - brings up the QLOG menu
Calculator
Clock - digital or face

Using the File Manager
The top half of the window shows you your present working directory, whereas the bottom half of the screen shows you the contents of that directory (files and subdirectories).

Directories are indicated by a blocked white outline; files are indicated by a black outline; executable programs are indicated by a blue outline. To move around the directory structure, simply double click with the left mouse button on the directory to which you want to move. Highlight a particular file by clicking the left mouse button once. Then press the right hand mouse button for a menu of available options:-

Browse - to view the file (uses the more program)
Edit - invokes the editor, allowing you to edit the file
Print - simply select the printer destination
Copy/Move - ie to another directory; just type in the destination
File Properties - allows you access to attributes, permissions, group and member number (NB don't make changes here until you have studied advanced QNX)
Delete

Use of the command buttons:-
File:- Open a file, Print a file, Create a directory, Create a file (same as ed filename)
Edit:-
Select all - will highlight all of the files in your present directory, from where you could copy, move or delete
Copy/Move, Delete
Clipboard - gives you the cut/copy and paste facility
File Properties - access to attributes and permissions as above
Goto:- to change directory; give the directory path or select your home directory.

Exiting Windows
Before leaving windows, you should ensure that all programs have been stopped. You can then leave windows in the following ways:
Bring up the workspace menu by clicking on the background with the right hand mouse button and selecting the Exit option.
By clicking on the circle symbol in the top left corner of the QLOG menu, and then clicking on the arrow symbol in the top left hand corner.
By pressing the 'ctrl' & 'print screen' keys together. This way can be used if the windows screen has frozen up. Having done this, you should ensure that you then close down any programs that were running and the windows programs themselves.

On exiting windows, the administrators will remain running for a short time, and if you type the windows command again, you will re-enter windows and any programs that were left open will still be running. To actually stop windows running, type 'windows down' (activates a batch file to stop the program).

3b SENSORS
Introduction
Electronically, there are 2 different types of sensor:
Analog sensors that work on a current loop
Digital sensors that are either in an on or off state

1. Analog Sensors that work on a Current Loop
Analog sensors have an output that is directly proportional to the applied input. A 4 to 20 mA current loop is used, with the sensor varying the current drawn by the loop in proportion to the measured input:
4 mA = Zero
20 mA = Full Scale

(if reading greater than 20 mA, there is a short)

Should a sensor fail, the circuit should still show 4mA. However, if the circuit breaks, the reading will be 0mA. This provides an immediate pointer when it comes to troubleshooting.
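A minimal Python sketch of how a 4-20 mA reading scales to an engineering value, and how out-of-range readings flag the faults described above; the sensor range in the example is illustrative only.

def scale_current_loop(milliamps, zero_value, full_scale_value):
    if milliamps < 3.5:
        return None, "open circuit / broken loop (reads ~0 mA)"
    if milliamps > 20.5:
        return None, "over-range - possible short"
    # Linear scaling: 4 mA = zero of range, 20 mA = full scale
    value = zero_value + (milliamps - 4.0) / 16.0 * (full_scale_value - zero_value)
    return value, "ok"

# e.g. a 0-35000 kPa pump pressure sensor reading 12 mA -> half scale
print(scale_current_loop(12.0, 0.0, 35000.0))   # (17500.0, 'ok')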

A Current Loop is used because of:
a. High noise immunity
b. Easy 2 wire wiring
   power +24v - red
   signal return - white
c. Immunity to ground loops and ground induced noise

The transmitters operate from a 24 volt power supply from the DAU.

The majority of analog sensors are the simple 2 wire sensors, but there are some exceptions:-

Non Intrinsic System
Torque - requires a 3rd wire (black - ground); a voltage signal is converted to a 4-20mA output.
Combustibles - again, a 3 wire sensor requiring a black ground, providing a 4-20mA output.

Intrinsic (Elcon) System
Torque - 3 wire sensor, requiring a black ground; the 1012 type barrier provides a 4-20mA output.
Flow Paddle - 3 wire sensor, requiring a black ground; the sensor operates on a 0-2K ohm resistor, giving a 2 to 10V output. This is converted by the 1072P type barrier to give a 4 to 20mA output.
Temperature - 3 wire sensor, requiring a black ground. The sensor works on resistance, with 0C giving 100 ohms and 100C giving 138.5 ohms. This is converted by the 1072T barrier to give a 4-20mA output.

2. Digital Sensors

Digital sensors are either in an on or off state. A proximity sensor goes to the high signal state when activated, ie when in close proximity to a metal object.

The data acquisition system treats digital signals in 2 ways:
Interrupt - needs a fast response from the computer and is used for timing, ie Depth, RPM, Pumps.
Binary - not used for timing; the computer checks these at its own pace, ie On-Off bottom, Direction, Gas flow alarm.

Proximity sensors for RPM and strokes are 2 wire as in the analog sensors, but should also have the shield wire connected to protect against interference. The depth wheel is a 3 wire sensor, requiring a black ground. Shield should also be connected. The crown sheave is a 4 wire sensor:- 2 wires from each of the 2 proximity sensors. Again, the shield should be connected.

The sensors themselves are normally wired in to 1 or 2 junction boxes depending on the complexity of the job, ie how many sensors there are. Each terminal number in the junction box is clearly identified. For each terminal:
The red wires (+24V) connect to the + terminal
The white wires (signal return) connect to the numbered terminal
The black wires (ground) are connected to the - terminal (for an intrinsic system)
Shields are connected to the shld terminal

Multicore cables are run from each junction box back to the unit where they are connected to the DAU or Elcon Barrier system. Each terminal number from the junction boxes correlates to a specific card slot or barrier number on the DAU. This in turn correlates to a specific channel number on the CPU.

This channel configuration can be determined from standard configuration sheets that are provided here, and has to be specified on the CPU in order for the computer to receive the sensor signals.

3c Standard Configurations

ELCON Barrier System

Digital     Analog      Card  Card   Junction Box   Signal Name
Channel     Channel     Slot  Type   No:   Chan
___________________________________________________________________
1 (IRQ1)    x           1     1842   1     1        Depth Pulse
9 (BIN1)    x           1     1842   1     2        Depth Direction
2 (IRQ2)    x           2     1842   1     3        Pump 1
3 (IRQ3)    x           2     1842   1     4        Pump 2
4 (IRQ4)    x           3     1842   1     5        Pump 3
5 (IRQ5)    x           3     1842   1     6        RPM
x           x           4     1882   x     x        System Power (spare)
x           x           4     1882   x     x        System Power (torque)
x           4 (20)      5     1012   1     7        Analog (spare 3 wire)
x           3 (19)      5     1012   1     8        Torque (electric)
x           1 (17)      6     1022   1     9        Hookload
x           2 (18)      6     1022   1     14       Analog (spare)
x           5 (21)      7     1022   1     10       Pump Pressure
x           6 (22)      7     1022   1     11       Casing Pressure
x           7 (23)      8     1022   1     12       H2S 1
x           8 (24)      8     1022   1     13       H2S 2
x           10 (26)     9     1072P  2     1        Flow Paddle
x           11 (27)     9     1072P  2     2        Analog (spare 2K pot)
x           12 (28)     10    1072T  2     3        Temp In
x           13 (29)     10    1072T  2     4        Temp Out
x           14 (30)     11    1022   2     5        Density In
x           15 (31)     11    1022   2     6        Density Out
x           16 (32)     12    1022   2     7        Conductivity In
x           17 (33)     12    1022   2     8        Conductivity Out
x           18 (34)     13    1022   2     9        Trip Tank
x           19 (35)     13    1022   2     10       Pit 1
x           20 (36)     14    1022   2     11       Pit 2
x           21 (37)     14    1022   2     12       Pit 3
x           22 (38)     15    1022   2     13       Pit 4
x           9 (25)      15    1022   2     14       Pit 5
x           x           16    x      x     x
x           x           16    x      x     x
10 (BIN2)   x           Internal  x  x     x        Gas Flow
x           29 (45)     Internal  x  x     x        Block Temp
x           30 (46)     Internal  x  x     x        Internal H2S
x           31 (47)     Internal  x  x     x        TCD signal
x           32 (48)     Internal  x  x     x        CC signal

NOTE that this is an example of a typical configuration. There may be variations depending on what sensors are required.

Non Intrinsic Barrier System NOTE that there are slight variations between the more recent DAU system and the older system. The newer version is illustrated here. The older version is available from the QLOG setup menu.
Digital     Analog      Card  Card        Junction Box   Signal Name
Channel     Channel     Slot  Type        No:   Chan
___________________________________________________________________
x           1 (17)      1     ANA         1     7         Torque
x           2 (18)      2     ANA         1     8         Pump Pressure
x           3 (19)      3     ANA         1     9         Casing Pressure
x           4 (20)      4     ANA         1     10        Density In
x           5 (21)      5     ANA         1     11        Density Out
x           6 (22)      6     ANA         1     12        Trip Tank
x           7 (23)      7     ANA         1     13        Pit 1
x           8 (24)      8     ANA         1     14        Pit 2
x           9 (25)      9     ANA         2     1
x           10 (26)     10    ANA         2     2
x           11 (27)     11    ANA         2     3
x           x           12    Power Brd   x     x         Power Board
x           12 (28)     13    ANA         2     4
x           13 (29)     14    ANA         2     5
x           14 (30)     15    ANA         2     6
x           15 (31)     16    ANA         2     7
x           16 (32)     17    ANA         2     8
x           17 (33)     18    ANA         2     9
x           18 (34)     19    ANA         2     10
x           19 (35)     20    ANA         2     11
x           20 (36)     21    ANA         2     12
x           21 (37)     22    ANA         2     13
9 (BIN1)    x           23    DIG         1     2         Depth Dir (On/Off Bot)
x           x           24    MUX Brd     x     x         MUX Board
x           22 (38)     25    ANA         2     14
x           x           26    ANA         x     x
x           x           27    ANA         x     x
x           x           28    ana/dig     x     x
x           x           29    ana/dig     x     x
x           x           30    ana/dig     x     x
5 (IRQ5)    x           31    DIG         1     6         RPM
4 (IRQ4)    x           32    DIG         1     5         Pump 3
3 (IRQ3)    x           33    DIG         1     4         Pump 2
2 (IRQ2)    x           34    DIG         1     3         Pump 1
1 (IRQ1)    x           35    DIG         1     1         Depth Pulse (D.Wheel)
x           x           36    digout brd  x     x         Digital Out Board
10 (BIN2)   x           Internal   DIG    Internal        Gas Flow
x           (45)        Internal   ANA    Internal        Block Temp
x           (46)        Internal   ANA    Internal        Internal H2S
x           (47)        Internal   ANA    Internal        TCD signal
x           (48)        Internal   ANA    Internal        CC signal

NOTE again, variations will depend on what sensors are required. Here, the configuration is for quite a basic requirement, including a depth wheel.

3d PLOTTERS and PRINTERS

i. AMT 5500
The Advanced Matrix Technology AMT 5500 can emulate both printer and plotter and is programmable by use of the control panel set up dial on the plotter. In order to have both modes available, there has to be a digital card for each emulation in the housing provided:
For plotter - Intell-Plot Turbo
For printer - AMT Accel 535

A display on the front panel will tell you which mode the AMT is currently in:
When in plotter mode, the display reads 0% DATA READY
In printer mode, it reads COURIER LQ READY

The control panel provides the following functions (Plotter and Printer modes): Exit, Copies, Font, Pitch, Alt, Mode, Quality, Emul, Clear, Reset, Ready, Color, Test, Status and Setup.

Setup - gives you access to the set up menu; rotate the dial to advance.
Alt - holding Alt and rotating the dial allows you to change a particular setup parameter.

Changing Modes
Plotter to Printer - simply press the exit button twice.
Printer to Plotter - enter set up and rotate the dial to option 5) EMUL; hold down the alt key and rotate the dial to select HP-GL; press set up again - the AMT will now boot into the plotter mode. You will see Testing RAM on the display as this happens.

When the AMT is first switched on, the mode that it will boot into depends on the positioning of the digital intellicards. Whichever one is positioned at the front is the mode that the AMT will default to. Should you wish to swap them around, firstly switch off the power to the AMT, then remove and replace them, ensuring that each one is seated correctly in its housing.

Establishing communication with the computer
Normally, parallel communication is used for printers/plotters. Determine which parallel port you are connected to ($lpt or $lpt2) and ensure that this is defined in the QLOG software, ie Setup - Printer Controls. Ensure that the parallel interface has been defined in the setup of the AMT: enter the setup, rotate the dial to option 51) INTRFCE and ensure that Par is selected here.

Using serial communication:- here, you have to ensure that both the AMT and the serial port are set correctly.

AMT setups:-
51) Interface   Serial
52) Baud Rate   9600
53) Parity      None
54) Data Bits   8
55) Stop Bits   1
56) Handshake   DTR
57) End         SP/SPO
58) Timeout     45s
59) Queue       On

Serial port:-

set baud to 9600, and ensure that hflow is turned on (this should be set in the relevant sys.init file)

eg stty baud=9600 +hflow >$term1

Computer serial ports are known as DTEs, and peripherals as DCEs. This is so that 2 way communication can take place between pin pairings, ie TD/RD, DTR/DSR, RTS/CTS. There are known hardware problems with the AMTs that prevent this:
1) The AMT port is also set up as a DTE. It is impossible, then, for there to be communication between 2 DTE ports. This can be fixed by using a null modem on the printer serial port. This inverts the pairings on the AMT's port so that it is, in effect, converted to a DCE. Communication can therefore take place.

2) The AMT is configured so that there is communication across pairings. A null modem will not solve this; the AMT has to be re-configured.
3) The port does not control the flow of data properly - this means that data will continue to be sent even if the AMT's buffer is full - data will therefore be lost.

Printer Emulation (setting 5 in the setup)
The default for use with the QNX system is the AMT emulation itself. However, different emulations can be selected - in general this would be governed by whichever emulation is defined on your current system or processor (office use for example). Different emulations include:- AMT, DIABLO 630, XEROX 4020, EPSON JX, EPSON LQ 2500, IBM XL24. The Epson LQ is a good choice because it offers a wide selection of fonts and sizes, and is often defined by most word processors.

Plotter Emulation
HPGL is the emulation that is used by QLOG for log plotting. The particular model is a Hewlett Packard HP7475.

NOTE that you cannot cross emulations, ie if you are in plotter mode, it is no use trying to print out a text file. Likewise, if in printer mode, it is no use sending a plot - you will end up with just a string of hieroglyphics!

Plotter Setups
Current settings can be printed out by pressing the Alt & Status buttons together. Most of the factory default settings can be used, but there are certain important changes that have to be made - these are highlighted below.

Operations
1) RSTOR    allows you to restore the settings saved as a particular user
2) SAVE     when changes have been made, the settings should be saved here - there are 5 users available to save settings
3) DFALT    these are the user settings that the plotter will access on booting

Plot Settings
4) EMUL     HP7475
5) QUAL     normally high (240x240), medium (240x120) or low (120x120); high quality should be selected for final logs
6) COPIES   1
7) SIZE     Wide - should be selected for our logs
8) COLOR    On
9) FONT     Simple
10) AUTO    Off
11) HSCALE  100% (may change this for XYZ plots)
12) VSCALE  100% (as above)
13) LT/RT   0.0
14) UP/DN   0.0
15) ROTATE  Off
16) AUTO FF Abut - if not, 1 line will plot, then a blank page will spew forward before the next line is plotted
17) LENG    66/6
18) BIN     None
19) DUMP    Off

Pen Colours
20) RIBBON  Proc
21) PEN1    Black
22) PEN2    Magenta
23) PEN3    Cyan
24) PEN4    Yellow
25) PEN5    Violet
26) PEN6    Green
27) PEN7    Orange
28) PEN8    Red
29) PEN9    Purple
30) PENA    Blue
31) PENB    Aqua
32) PENC    Burgundy
33) PEND    Navy
34) PENE    Forest
35) PENF    Brown

Pen Weights
36 - 50)    PEN1 - PENF (a/a)   .3mm

Communications
51) INTRFCE Parallel
52) BAUD    9600
53) PARITY  None
54) Data Bits  8
55) Stop Bits  1
56) HNDSHK  DTR
57) END     SP/SPO
58) TIMEOUT 45s
59) QUEUE   On

Saving Setups

Any changes that are made are automatically saved when you release the Alt key - you will hear a little beep to confirm this. It is sensible to save different setups under different users so that they can be quickly recalled. For example:-

User 1

Default settings for log plotting - here you would save the settings as detailed above, and also select this as the default so that the plotter will always boot into these settings. Before exiting the setup:
rotate the dial to 2) SAVE and select Usr 1
rotate the dial to 3) DFALT and select Usr 1

User 2

Default settings for fax logs, for example - here, colour would be turned off (or use a black ribbon) and pen weights may be increased, eg .5 or .7mm.

User 3

Default settings for XYZ plots, for example - here, you would change the scales to 50/50%, and you may change the pen colours for better presentation.

Checking Ribbon Alignment

If colours are not printing correctly, it is likely that the ribbon is misaligned. This can be confirmed by performing the plotter self test. To change the ribbon alignment:

Go into printer mode.
From the setup, go to option 51) (on many models, you can normally only access up to option 50; if this is the case, hold the Font and Quality keys down and rotate the dial to 51).
This will show you the current setting, which you should increase or decrease one step at a time until you have the correct alignment.
Each time you make a change and release the Alt key, a row of As and Hs will be printed out. For the correct alignment, you should check the colour of the horizontal bars on these letters - the correct alignment is when they alternate between blue and red, ie the letters should be half red and half blue.

Ribbons

Make sure that you install the ribbon correctly - that the ribbon is flat behind the printer head, and that the dial on the ribbon rotates freely when you move the assembly back and forth.
If printing text, use a black and white ribbon - this saves wear on the colour ribbons, which are twice the price of the black and white.
Avoid excessive pressure and wear by not setting the printer head lever fully forward - this is not good for the printer head either.
Check alignment occasionally.

Maintenance

Remove pieces of paper that accumulate inside the printer.
Depending on the environment, thoroughly clean the inside every month or so; flush with air if possible.
Thoroughly clean and oil the printer head bar so that there is a smooth, free movement back and forth.
Clean the printer head, carefully, every few months.
Every few months or so, clean the felt washers located on either side of the printer head. If left, these allow a build up of ink dust on the printer bar, inhibiting the movement. There are two plastic cups covering the felt washers - remove these and clean the washers using a chemical cleaner.

Trouble Shooting

Nothing happens on power up - check the correct power supply/setting; check the AC fuse.
Printer controls work but it won't print - do you have the correct interface selected; do you have the printer and port properly defined in Printer Controls; is the printer in the correct mode?
Printer crashes on start up - display is OK, but no paper controls work; check the DC fuse very carefully by removing the back panel - refer to the manual. Beware of the capacitor next to the fuse - this holds a hefty charge.
Only garbage printing out - check that you have the correct emulation; check communication settings if using a serial cable.
Framing Error on display - incorrect serial settings on printer or port.
Paper spews forth - option 16) AUTO FF is not set to Abut.
Smudges on print - too much pressure; adjust the lever on the right hand side to bring the printer head further from the paper.
Weak or blotchy print - check the ribbon alignment. Unusually, the printer head may be misaligned - this requires a technician.

For further information, refer to the troubleshooting section of this manual.

ii. Colour Inkjet Plotter, model HP680c

The use of this plotter requires the updated version of the plotter command, and also the hp2xx command. The following files and commands also have to be on the system:

/datalog/config/hp2xx.colormap
/datalog/help/hp680c.usage
/user/cmds/logpc
spool_start
pcl_in
pcl_out
formfeed

The first step is to reconfigure the printer controls configuration file. You may have to use the command prt_ctl -l to add the new names to the menu:

Printer Name    Port Name     Type
HP680C_A        $pcla_150     PCL
HP680C_B        $pclb_150     PCL

The 2 pcl port names represent the equivalent of $lpt and $lpt2. You should then edit 3:/user/cmds/spool_start to define the pcl ports:

/datalog/cmds/pcl_out n=$pcla o=$lpt &
/datalog/cmds/pcl_out n=$pclb o=$lpt2 &

You then start the plot by issuing the following commands:

spool_start
(this will start the 2 pcl_out programs)

plotter logname &

Should you need to stop the plot for some reason, enter the following commands:

slay plotter
slay pcl_out
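Putting the pieces together, a typical session might run as follows (a sketch only; mylog stands for whichever log you wish to plot, and all commands are as described above):

spool_start
(starts the two pcl_out spooler programs)

plotter mylog &
(queues the plot of the chosen log)

slay plotter
slay pcl_out
(only needed if the plot has to be abandoned part way through)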

SECTION 4 - click here to go to main menu CHROMATOGRAPHY

PART A

THE M200 CHROMATOGRAPH

4.1   Introduction
4.2   Setting up the Chromatograph
4.3   M200 Setup
4.4   Method Parameters
4.5   Configuration Settings
4.6   Chromatograph Types
4.7   The Chromatograph Software
4.8   Calibration Procedure
4.9   Use of the Tweak Option
4.10  Applications of the Tweak Option
4.11  Using Two Chromatographs Simultaneously
4.12  Printing Chromatogram Files
4.13  Trouble Shooting

PART B

GAS RATIO ANALYSIS

4.14  Wetness, Balance and Character Ratios
4.15  Oil Indicator and Inverse Oil Indicator
4.16  C1/C2 Ratio
4.17  Pixler Gas Ratios

PART A THE M200 CHROMATOGRAPH 4.1 Introduction

One of the most important aspects of the mudlogging operation at wellsite is the detection of gases and the analysis, in particular, of hydrocarbon gases. To enable us to do this, ahead of the competition, Datalog uses a state of the art, high speed chromatograph called the MTI M200. This chromatograph, based on technology conceived at NASA, was developed at Stanford University, California. The M200 analyses gases by detecting differences between the thermal conductivity of the sample gas and that of the carrier gas (helium is used by Datalog, although hydrogen can be used as well). Gases with low molecular weights have the highest thermal conductivity.

Features that make this chromatograph stand out ahead of FIDs and other thermal conductivity chromatographs include:

Miniaturization, portability and low energy requirement
High speed analysis; C1 to C5 in under 30 seconds
Detection of other gases; CO2, O2 and N2 as standard; other gases by column changes
The use of Helium as a carrier gas, being non explosive
No residual gas problems, therefore zones and tops are easy to determine
Accurate detection from a few parts per million up to 100%

The M200 has 2 separate columns (or channels) and is interfaced with an internal processor that controls the chromatograph parameters and performs the analog/digital conversions together with other 'housekeeping' tasks. The 2 channels have columns to analyse a specific range of gases:

Column A    heavy hydrocarbons    Composite, C3, C4 and C5 (C6 and heavier can be detected but would require a longer time period for analysis)
Column B    light hydrocarbons    O2 and N2 Composite, C1, CO2, C2

The M200 outputs, in digital form, the voltage recorded from the 2 channels 100 times a second. This is interfaced to the QLOG system, which performs the integration of the gas peaks to determine the gas quantity, and which also controls the operation of the chromatograph. All of the valves, sample loop, detectors and injectors are fabricated on small silicon wafers the size of a postage stamp. This micro technology means that only a very small gas sample needs to be analysed, hence the very short analysis time.

The Thermal Conductivity Detector responds to the difference in thermal conductivity between the carrier gas and the sample components passing through the detector. The TCD is configured as 4 nickel filaments suspended in 2 channels. These filaments are heated by applying an electrical current. With Helium carrier gas flowing across the filament, a certain amount of heat or energy is carried away. The filament then has a constant resistance that sets a constant baseline reference. When a compound with a lower thermal conductivity than the helium passes over the filament, less heat or energy is carried away. As the temperature of the filament increases, its resistance increases, and this change is recorded as a gaussian peak.

4.2 Setting up the Chromatograph

a. Hardware

It is imperative that no dust or particles get into the columns of the chromatograph, since this could cause a blockage owing to their micro size. Therefore, during the transportation of the chromatograph, ensure that all of the ports/inlets are protected by the appropriate covers.

Helium Supply

Before connecting the helium supply to the chromatograph, you must ensure that there are no rogue particles that could get into the columns:

If attaching the regulator to a new helium bottle, blow helium through the regulator - this will clean out any rust/dust from the bottle and/or regulator.
Attach the stainless steel helium tubing to the regulator - ensure that the arrow on the helium filter is pointing in the direction of flow.
Before attaching to the chromatograph, again give a good blast of helium through the tubing to ensure that it is clean.

Perform a Leak Test:

Close the high pressure side of the regulator and release any helium through the external side.
Close the external (low pressure) side.
Open and then close the high pressure side to 'fill' the regulator.
Note the pressure on the gauge and monitor for maybe 15 minutes for a drop. If the pressure does drop, there is a leak on the high pressure side.

Attach the helium tubing to the carrier gas port on the back of the chromatograph. Be very careful not to overtighten, as you could damage the tubing on the other side of the port. Use a second spanner to lock the nut on the port to prevent it from turning and damaging the internal tubing. Take care not to strip the threads on any of the swagelock fittings. There is no need to use Teflon/PTFE tape on these fittings; a seal should be achieved without it because of the swagelock ferrules.

Perform a leak test as above, this time setting the pressure on the external or low pressure side to 80 psi, the operating pressure of the chromatograph. Monitor as above for a pressure drop. As an extra leak test, apply a small amount of snoop around the connectors and swagelock fittings. Any leaks will cause the snoop to bubble.

Magnesium Perchlorate Filter

The gas sample is fed to the chromatograph from a port on the front of the CPU. This sample is therefore supplied after passing through all of the standard filters and driers and after passing through the Total Gas Sensor detectors.

One final filter assembly is then used for the chromatograph, being placed between the front CPU port and the chromatograph, to reduce or eliminate any remaining moisture or impurities from the gas sample. This is an important function, because any moisture that remains will be detected by the chromatograph and analysed as a gas. Unfortunately, if this moisture peak exists, it causes a response at the same time as C3 and will be analysed as C3.

The perchlorate filter is made up in the following way: An 8 to 10 length of polyflow tubing is filled with magnesium perchlorate (as coarse as the tubing will allow) and sealed at each end by a cotton wool plug. Do not pack the cotton wool too tight because that will restrict sample flow through the filter. On the chromatograph side of the filter, two 0.2 micron blue disk filters should be connected to prevent dust from entering the chromatograph.

When installing the filter, you should ensure that you have a good sample flow through it. Since this filter assembly will always be in place when sampling, the same assembly should be used when calibrating the chromatograph, so that conditions are the same.

b. Interface

Before connecting the chromatograph to the CPU, ensure that the QLOG system is running and that the m200 administrator is running (when you run tsk you will see two m200admins running). Enter the m200 setup from the QLOG menu in order to select which serial port the chromatograph will be connected to. This is done by selecting option F8 Edit Port, entering the port name and pressing F7. The status at this point will read unplugged. If possible, one of the multiple CTI ports should be used due to the better communication properties of the CTI card.

Connect the serial cable to the CPU and the chromatograph. The cable must be a null modem type - these are specially made up for use with the chromatograph. Turn on the power to the chromatograph. The chromatograph has its own power supply with a 12v DC transformer.

NOTE it is very important that the m200 administrator is running and that the correct port is defined before you put the chromatograph on line. Failure to do this could allow garbage to be sent to the chromatograph resulting in the alteration of internal settings.

SIMILARLY, IF YOU HAVE TO REBOOT THE COMPUTER OR SHUT DOWN THE M200 ADMINISTRATOR, ENSURE THAT THE CHROMATOGRAPH IS TURNED OFF FROM THE SETUP OPTION. DISCONNECT THE SERIAL CABLE TO PREVENT GARBAGE BEING SENT DURING THE REBOOT. At this point, there is now communication between the chromatograph and the computer. If you are connecting a new or different chromatograph, the status in m200setup will read **new**. This is because the settings on the current chromatograph differ from those that are stored on the computer from the previous chromatograph. The system therefore needs to be initialized before being able to use it. This is done through the m200setup.

4.3 M200 Setup This file contains all of the operational and configurational setups of the chromatograph. The operation of the chromatograph is totally governed by these settings and any changes to be made are done through this software and NOT on the chromatograph itself.

All of the settings in m200setup should be carefully checked before attempting to run the chromatograph. A copy of this file, made when the chromatograph was last serviced or tested, should accompany every unit at all times. This allows easy comparison of the settings, to ensure that all are correct, prior to use. The information contained in the top box of the file is known as the Method - these are the operational settings. Below this are the configurational setups together with the command options available.

Column A
Temperature    35       35.1C
Inject Time    45ms
Sensitivity    Med
Filament       On       On

CH Pressure    18.2
Autozero       -105.2mv
Autozero       On
CHP Scale      15
Temp Offset    7
Temp Scale     13
Column Type    Other

Column B
Temperature    40       40.0C
Inject Time    40ms
Sensitivity    Med
Filament       On       On

CH Pressure    23.5
Autozero       99mv
Autozero       On
CHP Scale      18
Temp Offset    6
Temp Scale     12
Column Type    Other

Port: [1]$cti1
Sample Time    2 sec
Run Time       30 sec
Cycle Time     0 sec

CHP Code Offset    0
Ext. Wait Ready    Off
Module Choice      Both
Auto Run           Off
# of Auto Runs     1
Run Interval       0
M200: Running

Record Sample: Not Pending    Samples: 1560    Errors: 0
F2=Controls   F5=Record   F6=Edit Config   F7=Edit Method   F8=Edit Port
Check Carrier Gas

As described above, should the setup in the software vary from the one stored in the chromatograph itself, the status will be displayed as **new** and the software has to be initialized:

Press F7 - Initialise method will be displayed
Press F7 again - the setups will now be written to the software

At this point, the chromatograph will start running automatically - you should stop it immediately and check that the setups are correct for the chromatograph. Do this by pressing F2 Controls, then F3 or F6 to stop the chromatograph.

Control Keys

F2 Controls
   F2 Start              to start the chromatograph running. The status will show running when the sample is being analysed and idle when a sample is being injected.
   F3 Stop After Run     to stop after the current sample has been analysed
   F6 Stop Run Now       to stop immediately
   F8 Reset Options
      F2 Reset Config       to restore the default factory settings to the chromatograph should memory be lost
      F8 Reset CHP Offset   to reset the column head pressure to reference the local atmospheric pressure
      F4 Abort Reset
   F4 Return to Main Controls

F5 Record          To record a current chromatograph sample. Enter a filename and the sample that is currently being analysed will be stored under that name in 3:/datalog/chrom_dat

F6 Edit Config     To make changes to the configurational setups
   F4 Restore Old Config   ie don't save changes
   F7 Accept New Config    ie save changes

F7 Edit Method     To change the method settings; press twice to initialize a new method

F8 Edit Port

F4 Exit Program

4.4 Method Parameters

To change these settings, press F7 Edit Method. Move between the parameters by using the arrow keys. To make changes, some parameters (eg temperature) require you to press enter, type in the value and press F7 to save; others (eg sensitivity) require you to toggle between settings using the spacebar. Once all changes have been made, press F7 to save the Method.

Temperature
The value on the left is the set value, the value on the right is the actual measured value. It is quite normal for these to be slightly different. Increasing the temperature will shorten the elution or analysing time of the sample. A significant temperature change may therefore require a recalibration procedure in order to redefine the position of individual gas peaks. The normal operating range is in the order of 30 to 50 C. The absolute maximum operating temperature of the columns is between 160 and 180 C (recorded on the actual column modules), but these temperatures are only used to recondition the columns or to dry them should they have become damp.

Inject Time (ms)
This is the length of time that the injector valve is open to allow a portion of the gas sample into the columns for analysis. By increasing the inject time, more of the gas sample will enter the columns; by lowering it, less gas. This will change the size of the individual gas peaks and will therefore require a recalibration. The normal operating range is in the order of 40 to 50ms.

Sensitivity
This is simply a scaling factor to allow different ranges of gas values to be analysed. There are 3 settings: low, medium and high. Each one is a factor of 10 larger or smaller than the next. High sensitivity is only used for very low gas levels and when the calibration gas is in the order of 10ppm. Normal operating settings are medium for Channel A and medium for Channel B, switching to low sensitivity when methane reaches the order of 10% (this precise value will depend on what the injection time is set at). Changing the sensitivity does not require a recalibration.

Changing the sensitivity is generally only required for Channel B because of high methane. When the column is saturated, requiring a sensitivity change, the peak in the chromatogram will be flat topped and the displayed value will drop to zero with other gases continuing to rise.

a) normal peak, on scale

b) peak off scale, requiring the sensitivity to be lowered

Filament
The filaments have to be ON for the chromatograph to run. You should ensure that the second of the two readings shows ON, since this is the actual filament status. You may have to resave the method a second time when first setting up the system. Under no circumstances should the chromatograph be run if there is no helium; there is therefore a safeguard built into the software - if the helium pressure falls below 5 psi, the filaments will turn themselves off.

Sample Time
This is how long the sample pump will run for when taking each sample. This will fill the lines of the chromatograph. The sample time should not be confused with the inject time, which is the length of time the injectors are open allowing the sample into the columns. The default setting is 2 seconds.

Run Time
The period of time allowed for the gas sample to be analysed. The default setting is 30 seconds, allowing hydrocarbons through to C5 to be analysed. Should heavier gases be required, the time can be increased.

Cycle Time
This is wasted time between gas samples. It should be set to zero so that as soon as one sample has been analysed, another sample is immediately injected.

4.5 Configuration Settings

CH Pressure
Column Head Pressure - this is a direct reading (psi) of the pressure exerted on the column. This affects how quickly a sample will be pushed through the column and therefore how quickly individual gases will be analysed. The higher the pressure, the faster the elution time. The pressure is adjusted for each column by way of a control dial at the back of the chromatograph. Only gradual changes should be made.

Autozero
This should be set to ON; you then have a direct reading of the current millivoltage on the detector. This reading is required for the chromatograph to have a base reference for each sample taken. Every time that a sample is taken, the current millivoltage is recorded; this will be the zero point for that sample. When the gas is analysed, this value is subtracted from the millivoltage due to a particular gas peak, giving a true reading for the peak.

eg  autozero = 100mv     peak = 400mv    mv due to gas = 300mv
    autozero = -100mv    peak = 400mv    mv due to gas = 500mv

The actual value of the autozero is not important; it can be anywhere in the range -450 to +450mv. If either of these limit values is shown, the detector will probably need replacing. If parameters (ie pressure, temperature) are kept constant, you should not see any great variation in the autozero. The configuration sheets kept with the chromatograph therefore provide a good history of the columns. Any significant change in the autozero, while parameters remain constant, could be an indication that the detector for that column is becoming worn.

CHP Scale / Temp Offset / Temp Scale (CHP Scale = column head pressure scale)
These three settings are unique to each individual column. They are calibration and scaling factors required to ensure that the correct pressure and temperature is being applied to each column. They must be set correctly.

If they are not set correctly, you will likely get strange looking chromatograms, anomalous gas peaks, erroneous analysis etc. To check them, you should refer to the m200 setup sheets accompanying the chromatograph. They are also recorded inside the chromatograph should you wish to double check.

Column Type
This is a reference to the maximum operating temperature of the columns. The options are Other (160 to 180 C) or Haysep (140 C). Datalog only uses the one type - both columns should be set to Other.

CHP Code Offset
This has no known function!

Ext. Wait Ready
This function enables a sample to be taken when a switch is closed. It should be set to OFF.

Module Choice
This should be set to Both to allow both columns to operate.

Auto Run / Number of Auto Runs / Run Interval
If the chromatograph was not interfaced to the QLOG software, these settings would perform the same function as Sample Time, Run Time and Cycle Time. Auto Run must therefore be set to OFF to override these settings. The values then entered in the Number/Interval do not matter.

M200
This is the current status of the chromatograph as already described:

Unplugged    No communication
**new**      Software setup is different to the setup stored in the chromatograph
Running      Sample is being analysed
Idle         Sample is being injected

Samples
The number of samples taken since the m200 administrator was last started.

Errors
These are communication errors rather than gas analysis errors. If there are repetitive errors, there may be noise interference. Generally, you rarely see errors when using a CTI serial port; more may be seen when using another type. It is also known for errors to occur when a longer injection time (eg 60ms) is used, or when parameters are changed. If you have seemingly excessive errors, you may have a hardware problem and you should seek technical advice.

4.6 Chromatograph Types

Currently, there are two versions of the m200 chromatograph being used by Datalog. Fundamentally, the two are identical working in exactly the same way with the same setups. The difference is that the older type has a front control panel allowing changes to the setup to be made on the chromatograph itself. The newer type has no such panel so that all changes have to be made from the QLOG software.

For each chromatograph version, there are versions of the m200 programs:

m200admin_old, m200admin_new
m200setup_old, m200setup_new

Depending on the chromatograph being used, these programs should be copied to m200admin and m200setup (in 3:/datalog/cmds). NOTE that the m200setup and controls already described in this section refer to the new chromatograph type with no control panel.
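For example, for the newer chromatograph type the copy might be made as follows (a sketch only; the _new program names and the 3:/datalog/cmds directory are as given above):

cp 3:/datalog/cmds/m200admin_new 3:/datalog/cmds/m200admin
cp 3:/datalog/cmds/m200setup_new 3:/datalog/cmds/m200setup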

The only significant difference between the two versions is that the control options in m200setup_old do not include any options to change the configurational setups on the chromatograph - these therefore have to be made from the front panel on the chromatograph. This procedure will be described below. It should be pointed out that the newer software versions are completely compatible with the old chromatograph. You can therefore use the old chromatograph with the new software and make any/all changes from the QLOG software. One WARNING here, however, is that it has been noticed with this combination that the filaments occasionally turn themselves off and will need resetting - be aware of this ! NOTE, this does not work the other way around. Under NO circumstances should you attempt to run the newer chromatograph with the old software. The procedures described over the page are only to be used if you are using the older chromatograph with the older software.

Use of the front control panel and older m200setup version

The first difference is that there is a remote control button on the chromatograph. This status is displayed on an LED display. While in remote mode, communication is open between the chromatograph and the computer. By pressing the button, the chromatograph is put into local mode - this effectively breaks the communication link in the same way as if you were to disconnect the serial cable as described above for the newer chromatograph. This facility can therefore be used if you intend to shut down the m200 administrator or if you are going to reboot the computer. The status shown in the m200 setup when the chromatograph is in local mode is offline. The status already described (unplugged, new, idle, running) also apply to this chromatograph.

M200setup_old

This file looks and operates in principally the same way as already described. The Method is exactly the same and edited in the same way. The display of the configuration setups is exactly the same but, unlike the newer version, they cannot be changed from within this program. There are fewer control functions:

F2 Start
F3 Stop      To stop the chromatograph after the current cycle is complete - this option should always be used to stop the chromatograph. Stopping the chromatograph by putting it into local mode will stop it immediately, and if a sample analysis is midway, a communication error will result.
F4 Exit Setup
F5 Record current chromatogram
F7 Edit Method      As described previously; use twice to initialize a new method.
F8 Edit Port

Changing the chromatograph configuration using the front control panel

Principally, the only reason that these setups may require changing is if one or all of the three scale factors are incorrectly set: CHP Scale, Temp Scale, Temp Offset.

Stop the chromatograph running using F3.
Put the chromatograph into local mode.
Kill the administrator:  dau_kill m200admin
Remove the configuration file:  rm 3:/datalog/config/m200admn.cfg
Press the Reset key on the panel and, at the same time, the Config key. The display will show UPDATE CHP OFFSET.
Press Enter to confirm; you will automatically be placed in the settings for Channel A.
Press Enter to take you through the options until you get to the one you need to change.
Use the up and down keys to increase or decrease the value, and press Enter to save.
When you have made the changes to Channel A, press A/B to switch to Channel B and repeat the process to make any changes.
Restart m200admin.
Disconnect and reconnect the power to the chromatograph; the setups are only read when the chromatograph is turned on. The new configuration will be automatically sent to the computer.

4.7 The Chromatograph Software

This suite of programs is accessed from Realtime - Controls - Chromatograph:

Setup                  The m200setup as already described
Calibrate              The calibration procedure (section 4.8)
Tweak Calibrations     A facility to change calibration points, described later in the section
Configuration Sheet    Printout of the m200setup file - this should accompany the chromatograph as already described
Example Chromatogram   A view only facility, displaying the analysed gases on each channel. This is illustrated below.

[Illustration: Example Chromatogram display]

4.8 Calibration Procedure

After having ensured that you have communication, that all the setups are correct and that the helium is applied at 80psi, you are ready to calibrate the chromatograph. If the chromatograph is newly set up, you should allow a period of 5 minutes or so for the temperature and pressure to set and the columns to stabilize - you will see this occurring by way of the autozero changing. Once the autozero is stable, the columns are stable and you can proceed with the calibration. The calibration can only be performed from windows:

Select Calibrate from the menu.
Connect the calibration gas, via the perchlorate filter, to the chromatograph and set the gas flowing at a constant rate of around 3 psi - if a small pressure gauge is not available, you should determine the rate of flow by comparing the flow rate with the flow delivered from the CPU when the normal sample line is connected and the pump is running.
Allow 2 initial injections to ensure that the chromatograph lines are full of the calibration gas.
Select Record - you will be asked to enter a name, eg calib1. The sample that is currently being analysed will be stored under this name in 3:/datalog/chrom_dat. This saving will occur when you hear the chromatograph take its next sample.
Access this sample by selecting Select, click on the filename with the mouse and select Open.

You will now have a graphic display of the gases analysed on both channels. The red graph represents Channel A and the blue graph Channel B.

Channel A (red)     Composite, C3, iC4, nC4, iC5, nC5
Channel B (blue)    Composite N2 and O2, C1, CO2, C2

If the calibration gas is low end, use the Scale to magnify the peaks. This is just a scaling factor with options 1 to 10. Select Define; click on C1 for example and select the correct Channel number (B in this case). Just Channel B will be displayed now; move your mouse and simply click once on either side of the peak. By doing this, you are simply defining a time interval in which the chromatograph will look for this particular peak. The apex of the peak will be determined automatically, then the software will follow each side of the peak back to the base line and determine its own set points. Once these set points are defined, enter the gas value that this peak represents. Repeat this process for each of the gases.

The calibrations are stored automatically. The current calibration settings can be viewed by bringing the calibration configuration window to the front of the display. This window is always found behind the main calibration window; simply click on it with the left mouse to bring it to the front.

The following information, for each gas calibrated, is displayed:

Chan    Name    Start    End      Area        Percent
B       C1      13.71    14.77    37295302    1.5000
B       C2      26.13    28.35    3595240     0.1000
A       C3      8.59     9.82     4958472     0.1000

The Start and End values represent the set points selected by the software that define the time interval, between which, the apex of that gas peak will be found. The area beneath the peak is calculated and shown in the table; this is then integrated to give the equivalent gas value.

When the calibration procedure has been completed, open a display window showing all of the gases and ensure that the values displayed are correct. The calibration should be accurate to within 10 ppm, although in reality, there should be no problem in achieving an accuracy of just a few ppm. If any of the values are not accurate enough, then repeat the calibration process by recording another sample. If just one or two of the gases are inaccurate, there is no need to recalibrate every gas, just the erroneous ones. For the others, the values from the previous calibration will still be saved.

4.9 The Tweaking Option

As already described, the set points determined during calibration are selected by the software. In some cases, they may be set well away from the actual peak. The Tweaking option allows the user to move these set points to a position of their choosing. This procedure does not affect the calibration in any way, it is simply changing the time interval in which the apex of the gas peak will be found.

There are two common reasons why this process may be necessary and they will be illustrated in detail in the next section. 1. Redefining the C3 peak to exclude moisture being analysed at the same time 2. Redefining the C1 peak when a large volume changes the appearance of the peak

Example:The set points shown in diagram A are those selected by the software; you wish to change these set points to those shown in diagram B.

[Diagrams A and B: gas peak set points plotted against time (s)]

Select Tweak Calibrations.
Select Select Gas and click on the appropriate gas.
Click on the up and down arrows to change the value (the value will be the time in seconds) to what you want, and Apply. Apply will save the changes; Reset will exit without saving.

4.10 Applications of the Tweak Option

A) Preventing moisture from being analysed as C3

Moisture is always going to be a problem when sampling from the mud line. The first priority is to try and actually eliminate it completely from the gas sample by the use of driers and filters. The degree of the problem will vary widely depending on climate, mud type and mud temperature, and this will determine what filters/driers are necessary. Obviously, there is a limit to how many can be used in a sample line, because of restrictions to the sample flow.

A recommended system is as follows:

At the trap, bubble the sample through a drop out jar containing glycol and then pass the sample through a drop out jar containing Calcium Chloride drier. This combination has been shown to be very effective in removing a large amount of moisture, and the bubbling through glycol first prolongs the life of the CaCl drier.
When the sample line enters the unit, place another drop out jar containing CaCl drier.
For severe moisture problems, a 3rd drier may be beneficial, being placed outside the unit. Preferably, place this jar where there is a temperature change (eg if the sample line passes from a warm pit room to a cold outside).
Before the sample line reaches the CPU, pass it through a combined filter assembly containing a moisture filter, a coarse particle filter and a finite filter.
There should be one final finite filter inside the CPU before the gas sample reaches the Total Gas Sensor.
A Magnesium Perchlorate filter between the CPU and the chromatograph.

Obviously, these filters and driers should be routinely checked and replaced when necessary. The frequency at which they will require changing will vary depending on the severity of the problem. Typical frequencies may be:

Glycol           every few days
Ca Chloride      twice a shift to once a day. The driers further from the trap will need changing less frequently; the one in the unit may only require changing once a week.
Mg Perchlorate   once or twice a shift to every couple of days
Blue Disks       every couple of days to once a week

Should, however, moisture remain in the gas sample, it will be detected and analysed like any other gas. The problem is that it just happens to be detected on Channel A at pretty much the same time as C3 occurs. Therefore, even if you dont have any actual C3 in the gas sample, any moisture will be analysed and recorded as if it were C3.

Our task, then, if we can't eliminate the moisture, is to stop it being analysed as C3. There are 3 lines of defense with which to attempt this:

1) Determine whether there is enough time separation between the apexes of the 2 peaks to be able to use the tweak option - ie move the C3 set points so that the C3 peak is still defined, but so that the apex of the moisture peak falls outside of the set points. A minimum of around 0.2 seconds is required to do this. The 2 situations are illustrated below:-

In A, the moisture peak appears inside the C3 set points and will therefore be analysed as C3. In B, after tweaking the C3 end point, the apex of the moisture peak now falls outside, or after, the setpoint - it will therefore no longer be analysed as C3.

[Diagrams A and B: C3 and moisture peaks, before and after tweaking the C3 end set point]

If you need to get an accurate determination of the precise time at which the two peaks are being analysed, you can use the calibration procedure. For each of the peaks, select the appropriate chromatogram and proceed as if to calibrate. Proceed to the point of clicking on either side of the peak to determine the set points, but do not click. Move your mouse to the precise apex of the peak - the time will be displayed to 2 decimal places in the top right hand corner of the window. At this point, just select one of the other options in order to abort the calibration process. Now you have the precise timing of both peaks, so that changing the C3 set point is a relatively easy process. However, there may be times when there is not enough separation between the peaks to enable you to use the tweak option effectively.

2) Increase the temperature of column A by 10 degrees or so - this may be enough to burn off the moisture. A small change in temperature probably won't significantly change the elution time, but you should be aware of this possibility, which would mean redefining all of the peaks on Channel A.

3) Decrease the column head pressure - this will push the sample through the column at a slower speed and will have the effect of separating the gas peaks. How much you can decrease the pressure by is governed by the position of the C5 peak - this shouldn't exceed the 30 second run time. This is a very time consuming process; you have to change the pressure, redefine your gas peaks because their elution time will have changed, tweak the C3 set points, then test for both moisture and C3. The procedure will probably have to be repeated one or two times before it is successful - it can therefore take a long time and should only be attempted if you know you have this time to spare.

B) Large Levels of Methane

When C1 increases to a level in excess of perhaps 50%, the peak will move to the left (ie it will appear sooner) and its shape changes: from being an equilateral peak, the peak straightens up, becoming near vertical on the initial side. These changes have the effect of taking the apex of the peak to the left of the start set point, so that it is no longer inside it. The software therefore no longer sees the peak, assumes that there isn't a peak, and the C1 value will fall to zero.

high level peak

normal peak

For the peak to be recognised, the Start set point has to be tweaked, or moved, to the left, ie the Start time decreased.

4.11 Using Two Chromatographs Simultaneously

A second chromatograph may be added to the system in order to analyse extra gases that cannot be analysed by the columns in the usual chromatograph. This may be due to column type or limitations due to processing time.

There are, however, certain restrictions on the operation of the second chromatograph:

Only gases analysed by the first chromatograph are included in the Total Hydrocarbons and Total Gas Chromat parameters. Any extra gases analysed by the second chromatograph are treated as individual gases only and are not used in any of the system calculations.
The same gas cannot be defined on both chromatographs under the same name.
The chromatographs should run in staggered order: one chromatograph should be started and allowed to run part way through its cycle before starting the second chromatograph running. This prevents both chromatographs using processing time at the same time when an analysis is done.

For the normal operation of a single chromatograph, the m200admin controlling the chromatograph must be registered with the QNX Operating System. This allows the m200setup program to locate and configure the m200admin and for the calibration program to operate. It also enables the dau_kill command to locate the administrator in order to shut it down.

For a second chromatograph to operate, its m200admin program must be registered with the QNX OS under a different name. This will allow the second chromatograph to operate independently of the first.

For a second chromatograph to be accessed, the following programs need to be renamed by using the t= option as shown:<command name> t=<new name>

m200admin t=m200two
m200setup t=m200two
graph t=m200two

When using dau_kill for the second chromatograph, you should use dau_kill m200two.
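As an illustration, the two administrators might be started, and the second one later shut down, as follows (a sketch only, combining the commands above; the & runs each administrator in the background as usual):

m200admin &
(first chromatograph, registered under the regular name)

m200admin t=m200two &
(second chromatograph, registered with the QNX OS as m200two)

dau_kill m200two
(shuts down the second administrator only, leaving the first running)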

All calibrations, configurations and setups will be saved separately for each chromatograph, based on the new name allocated:-

Chromat 1 m200admn.gas m200admn.cfg

Chromat 2 m200two.gas m200two.cfg

To access the second chromatograph from the QLOG menu, the realtime dial in 3:/datalog/menus will have to be modified for the m200setup and calibration:-

Chromatograph...@(Setup1|m200setup;Setup2|m200setup t=m200two; Calibrate1|graph;Calibrate2|graph t=m200two; Tweak Calibrations|.m200fine; Configuration Sheet|m200sheet; Example Chromatogram|eg_chromat.bat)^R;

Both the setup and calibration program names include the optional name defined in the t= option, to differentiate between the chromatograph governed by programs that use the regular name and the additional chromatograph using the optional commands.

4.12 Printing Chromatogram Files

This facility allows you to print to hard copy, or to file, any chromatograms that you have stored on the system. This is of obvious benefit for keeping a record of calibrations and services, but can also be used to provide a visual aid to any significant gas peaks that you may want to record while drilling. The command to use is gcprint. Type gcprint ? in order to see the required format and options of the command.

Standard Format:  gcprint <printer type> <layout> <options>

Printer Type    hpgl
Layout          portrait (vertical) or landscape (horizontal)
Options         f=         name of the chromatogram file
                s=         scaling (same as scale in calibration)
                +colour
                +a_trace   can plot either or both channels (default is both
                +b_trace   if not specified)
                -recalcs   the program will calculate the gas values based on the current calibrations; the -recalcs option will disable this
                o=         output (filename or printer destination)

Example: you want to print out both channels for the chromatogram saved as peak.1110m; the current calibration file is valid:

gcprint hpgl landscape f=peak.1110m s=2 +colour o=[1]$lpt
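As a further illustration, the same chromatogram could be sent to a file rather than the printer, plotting Channel A only and without recalculating the gas values (the output filename peak_a.plot is hypothetical):

gcprint hpgl portrait f=peak.1110m +a_trace -recalcs o=peak_a.plot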

4.13 Trouble Shooting

Generally, most faults with the chromatograph are due to user error and easily traceable. Should the chromatograph actually break, there is little that can be done in the field. The unit will normally have to be returned to base and replaced. Genuine problems with columns can often be as a result of particles entering and causing blockages, therefore when transporting chromatographs, ensure that all inlets are covered.

Status reads unplugged

Ensure the chromatograph is switched on.
Ensure the m200 administrator is running.
Ensure the correct port is defined in m200setup.
Ensure the port has the correct settings, and is connected via the special null modem serial lead used by the chromatograph:

stty baud=9600 par=none stop=1 bits=8 >$cti1

No sample flow through chromat

- flow restricted in sample tubing - perchlorate needs replacing or is packed too tight, the cotton wool may be too tight also - change the blue disk filter - check flow from the CPU - filter inside sample port maybe plugged

Filaments will not turn on

this is a problem with helium pressure - the filaments will not turn on if the pressure at the column head is less than 5psi. Therefore, check the bottle, regulator, leaks etc to ensure that helium is reaching the chromatograph at the correct pressure. Occasionally, when first setting up the chromatograph, the filaments do not turn on. This will normally just require you to resave the method with the filament setting on.

Slow drop off of gases

Firstly check that you have good sample flow through the perchlorate filter and change it if necessary. If this is okay, the problem may be due to the sample not being pushed through the columns at a high enough pressure. Normally, you can expect the columns to clear in a couple of injections; should it be taking several minutes or longer, the pressure in the columns is too low. Should it be occurring on both channels, then the fault is probably with the helium supply - check the regulator and the helium line and filter for any blockage. Should it be occurring on only one column, then the fault is internal, with a blockage in that particular column. Check the sample exhausts at the back of the chromat to confirm this - you should feel a little puff after each injection. Should you not feel this, then there is a blockage.

No gas on one channel

This is probably a more serious case of the above, when the column has been completely blocked. Another possibility is the autozero. Should your base line on the chromatogram be offscale, you should check the value of the autozero in m200setup. If it is +/- 450 mV, then the column needs replacing. The worst scenario is that the injector is not working - the chromat will have to be returned to base if this is the case.

No gas on either channel

It is unlikely that both channels will become blocked, so the blockage is likely to be before the chromat - check for restrictions in the sample tubing; check that there is flow through the perchlorate filter - this may need changing or may be packed too tight to allow sufficient flow; the unlikely final possibility is that the filter inside the sample port has become plugged.

Generally spurious readings

Strange readings, abnormal extra gases, peaks moving etc, can normally be put down to incorrect settings of pressure and temperature. You should check that the CHP Scale, Temp Scale and Offset are set correctly for that particular column. This should always be checked as a matter of course when first setting up the chromat.

When you run the administrator ("m200admin &"), two m200admin tasks will appear in the "tsk" list. If, on using dau_kill, one task remains, you will have an ongoing fault, as the next startup of m200admin will add a further two m200admin tasks and you could incorrectly have 3 in total. A reboot may be the only way to remove this rogue task.

You should avoid rebooting the computer while the chromatograph is on line. For the chromatograph with a front control panel, put the chromat into local mode first. For the chromat with no control panel, simply disconnect the serial cable before rebooting.

PART B GAS RATIO ANALYSIS

The QLOG system allows for the analysis of several Gas Ratio types to assist in the interpretation of potential hydrocarbon zones. This is possible both on a realtime and offline basis.

4.14 Wetness, Balance and Character Ratios

These ratios, determined from the absolute gas measurements, are calculated realtime, providing an instant analysis of the hydrocarbon zone being penetrated.

Wetness Ratio (WR)   =  (C2 + C3 + C4 + C5) x 100 / (C1 + C2 + C3 + C4 + C5)

Balance Ratio (BR)   =  (C1 + C2) / (C3 + C4 + C5)

Character Ratio (CR) =  (C4 + C5) / C3

Rules for the Wetness Ratio

For a WR of:
< 0.5         very dry gas with low production potential
0.5 - 17.5    gas, increasing in density with WR
17.5 - 40     oil, increasing in density with WR
> 40          residual oil with low production potential

Rules for Wetness Ratio relative to Balance Ratio 1. if WR < 0.5 and BR > 100 very light, dry gas, unlikely to be productive

2. if 0.5 < WR < 17.5 and WR < BR < 100
   productive gas that increases in density and wetness as the two curves converge

3. if 0.5 < WR < 17.5 and BR < WR
   productive gas condensate OR high gravity, high GOR oil:-

   i)  if CR < 0.5, productive wet gas or condensate
   ii) if CR > 0.5, productive high gravity or high gravity, high GOR oil

4. if 17.5 < WR < 40 and BR < WR

producable oil, increasing in density (ie decreasing gravity) as the two curves converge

5. if 17.5 < WR < 40 and BR << WR non productive residual oil
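As a worked illustration (hypothetical readings, all in ppm): suppose C1 = 1000, C2 = 100, C3 = 50, C4 = 30 and C5 = 20. Then:

WR = (100 + 50 + 30 + 20) x 100 / (1000 + 100 + 50 + 30 + 20) = 20000 / 1200 = 16.7
BR = (1000 + 100) / (50 + 30 + 20) = 1100 / 100 = 11
CR = (30 + 20) / 50 = 1.0

With 0.5 < WR < 17.5 and BR < WR, rule 3 applies; since CR > 0.5, the zone would be interpreted as a high gravity, high GOR oil.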

4.15 Oil Indicator and Inverse Oil Indicator

These ratios are again calculated realtime and stored in the database as Ratio 1 and Ratio 2. The two ratios are the inverse of each other, so you only have to deal with one of them when it comes to analysis.

Ratio 1    Inverse Oil Indicator    C1 / (C3 + C4 + C5)
Ratio 2    Oil Indicator            (C3 + C4 + C5) / C1

Use of the Ratios:-

Ratio 1     Ratio 2       Analysis
100 - 14    0.01 - 0.07   dry gas, gas charged water
14 - 10     0.07 - 0.1    condensate, saturated oil
10 - 2.5    0.1 - 0.4     undersaturated oil
2.5 - 1     0.4 - 1       residual oil
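Using the same hypothetical readings as in the 4.14 example (C1 = 1000 ppm, C3 + C4 + C5 = 100 ppm):

Ratio 1 = 1000 / 100 = 10
Ratio 2 = 100 / 1000 = 0.1

placing the zone on the boundary between condensate/saturated oil and undersaturated oil.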

4.16 C1 / C2 Ratio

This is a very useful ratio which is normally defined, by default, as one of the User Defined Ratios available on the QLOG system. It is therefore calculated realtime and also has the recalc facility should it be defined after the database has been started.

< 2        very low gravity oil (ie high density), too thick to produce
2 - 4      low API gravity oil (10 to 15); bronze or dull orange fluorescence; dark brown in colour
4 - 8      medium API gravity oil (15 to 35); cream to bright yellow fluorescence; light to medium brown colour
8 - 15     high API gravity oil (>35); bright blue white fluorescence; clear colour
10 - 20    gas condensate or distillate; bright yellow or blue white fluorescence; like gasoline in appearance
15 - 65    gas
> 65       light gas, not producable due to low permeability
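As a quick illustration (hypothetical values): with C1 = 600 ppm and C2 = 100 ppm, C1/C2 = 6, which falls in the 4 - 8 band and would suggest a medium API gravity oil (15 to 35).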

4.17 Pixler Gas Ratios

These ratios are used in the offline program (Geology - Ratio Analysis) and are calculated by taking the value of the gas peak above background level for each hydrocarbon gas. Ratios are then determined for C1 in relation to each individual hydrocarbon (ie C1/C2, C1/C3 etc). The ratios are plotted on a 3 cycle log graph on which is superimposed boundaries for gas, oil, and non-productive phases. The positioning and gradient of the resulting graph yields useful information about the zone and its producability.

[Pixler ratio plot: 3 cycle log scale (1, 10, 100) on the vertical axis; C1/C2, C1/C3, C1/C4 and C1/C5 along the horizontal axis; zones from top to bottom: NON PRODUCTIVE, GAS, OIL, NON PRODUCTIVE]

Evaluation

1. Productive dry gas zones may show only C1, but abnormally high C1 only is usually indicative of salt water.
2. If C1/C2 is low in the oil section and C1/C4 is high in the gas section, the zone is probably non productive.
3. If any ratio is lower than the preceding ratio, the zone is probably non productive. For example, if C1/C4 < C1/C3, the zone is probably water bearing.
4. The ratios may not be definitive for low permeability zones; however, steep ratio plots may indicate tight zones.

A series of these plots (Reports: gas_eval.plot) can be viewed to aid analysis.

SECTION 5 - click here to go to main menu BASIC QNX COMMANDS

5.1   The QNX Operating System
5.2   Multi Tasking
5.3   Multi User
5.4   Realtime Operation
5.5   Networking and Distributed Processing
5.6   First Steps
5.7   Basic QNX Operations
      a. The Command Line
      b. The Directory Structure
      c. Changing Directory
      d. Devices
      e. Copying Files
      f. Moving, Deleting and Renaming Files
      g. Listing Files
      h. Using the More Command
      i. Redirecting Input and Output
      j. Creating and Deleting Directories
      k. Printing Files
5.8   More About Tasks
5.9   Stopping Programs
5.10  Using Floppy Disks

5.1 The QNX Operating System

All computers require a means of communicating with system peripherals and devices such as disk drives, screens and printers. The software that communicates directly with these peripherals is called an Operating System. Examples of other Operating Systems are MSDOS, OS/2, UNIX and WINDOWS NT.

In order to operate the QLOG data acquisition system, it is not necessary to have an in depth knowledge of the underlying Operating System but a good knowledge is required to configure a QLOG system, perform certain operations or to perform any troubleshooting. The operating system used by Datalog is called QNX. This operating system provides the QLOG system with many advantages over other operating systems; for example, an MSDOS system only allows one program to be run at a time by a user and all of the peripherals are tied up by the one program.

QNX is an operating system designed to run on IBM 80*86 compatible computers such as 80486 or 80386 machines, the same computers which run MSDOS. Principally, Datalog use the 486 machines for use in the field. QNX provides the same services to programmers as MSDOS, but also adds several important features such as multi-tasking, a multi-user environment, rapid real-time response and networking facilities.

5.2 Multi-Tasking

QNX has the capability to run many programs at the same time. This is achieved by utilizing the time that the central processing unit (CPU) spends waiting, or doing nothing. Peripherals such as printers and disk drives are much slower than the CPU. Under MSDOS, the CPU would wait for a user to enter a response from the keyboard; QNX spends this time running other programs or tasks.

QNX operates by giving each task or program a portion of the CPU processing time for a fraction of a second. All of the tasks are allocated different priorities from 1 to 15, where 15 is the lowest. If a task is running at a higher priority than another task, then this higher priority task will have precedence over the CPU time. Within QLOG, the data acquisition task runs at the highest priority; other programs such as log plotting run at much lower levels, since the speed at which the task is completed is not critical. If the system response appears sluggish, higher priority tasks are taking precedence over non critical functions such as display tasks.

There are two methods of running tasks under QNX, referred to as foreground and background. Programs that require user input are run in the foreground. The number of foreground tasks which may be run is limited by the number of screens or consoles that are available on a particular computer. QLOG acquisition and processing tasks are run in the background, since they are run automatically by the system and require no user input. Background tasks are usually described as those tasks without any user input, output or control, or any task which is intended to run continuously. If these acquisition tasks were run as foreground tasks, the limited number of consoles would limit the number of acquisition tasks that could be run and also prevent the user from running any other programs on those particular consoles. Running the acquisition tasks in the background therefore keeps the console free for user tasks.

Up to 250 tasks may be run simultaneously in the background and foreground. The addition of an ampersand ("&") to the program name when starting the task will run it in the background. The following command, for example, starts the QLOG data acquisition system administrator running in the background:

dau_admin &

A task id number (Tid) is displayed after starting a background task. To see what tasks are running on the system, enter the command tsk. For the time being, don't concern yourself with any of the columns apart from those labelled Program, Tid and Pri.

5.3 Multi-User

For the same reason that QNX can multi-task, it also allows many users to operate on the same system independently of each other. QLOG has one central computer, the Server, which performs all of the data acquisition, analyses and storage. Other computers, linked to the server by a network, may be placed in the Mud Logging Unit, the offices of the Engineer and Geologist and the drillfloor. Each of these users is unaware of other users sharing the system at the same time. If the system is heavily loaded with higher priority tasks, the user may notice a slight delay in responses from the computer.

There are various ways by which the computers can be linked:

Users may be located locally and connected directly by a cable to a host CPU through a serial port. The user will therefore have a terminal that will only display the information sent to the CPU and transmit back any information entered from the keyboard by the user; the terminal has no computational power.

Users can be located far away from the host computer by connections through modems. A modem changes the electrical signal to audio frequency tones that can be transmitted over telephone lines, or by microwaves, satellites etc. The user at the receiving end will then have a modem to turn the audio signal back into an electrical signal which is displayed by a terminal.

The other method for users to be connected to the server is by a Local Area Network (LAN), where the user has his own computer. The advantages of this computer (or workstation) are:

A workstation does not slow down the server; in fact it can be used to speed up the system.
A workstation is much more resistant to electrical noise than serial terminals.
A workstation can perform graphics, whereas serial terminals cannot do so with acceptable performance.

Multi-user systems have to guard against illegal access and must have a certain amount of restricted access to users. QLOG has password protection as well as individual file protection. To see which users are logged on to your machine, type who. To see what users are logged onto the whole network, type: who net

Users can be logged onto a single node many times through the use of consoles. By pressing <Ctrl> + <Alt> + <Enter> simultaneously, the screen will switch to the different consoles that are mounted on a particular node. Pressing Ctrl Alt 2 would change to console 2. When the system is operated in text mode as opposed to graphics or windows, multiple consoles allow the mud logger to instantaneously flip between displays and programs. For example, one console could display the real time display, another the database editor, and yet another the trip mode.

5.4 Real-Time Operation

QNX is designed for real-time monitoring. UNIX, which closely resembles QNX, has the great disadvantage of being slow and unable to monitor real-time parameters. Because of the speed of QNX, QLOG provides all of the functions required by a mudlogging system, including real-time data acquisition and reporting, much more efficiently than DOS or UNIX systems. One of the reasons QNX is so fast is that it has very small memory requirements; very little of the operating system is ever kept in memory. For example, all of the commands such as dir, ls, cp and tsk are loaded in from disk every time they are run.

When tsk is run, the user will see a column called Pri; this is the priority that a task is running at. Except for tasks that have priorities pre-determined by the system, a task will run at priority 8 by default; therefore, unless a user or program changes the priority, it will run at priority 8. A program called sac will show graphically the amount of CPU time being used. The computer can never do nothing, so there is a task called idle whose job is to use up any priority 15 CPU time.

5.5 Networking and Distributed Processing

QNX passes messages between tasks or programs; for example the database administrator task which saves data to the database will receive information from the data acquisition task. The operating system allows these messages to be passed over the LAN at very high speeds, approximately 1 million bits per second. This allows programmers to run tasks on any network computers and have the results sent back to the host (or issuing) computer. This distributes the processing power.

Qlog uses this message passing ability in many ways; two examples are:

i) Increased processing power. If more computational power is required, an additional node can be added - this node will perform calculations and pass the results back to the server. There can be up to 256 nodes, giving more computational power than most mainframe computers. In QNX, a node need not have any peripherals; the server can boot the client and share its peripherals, thus a node can be disk-less and have no display.

ii) Graphical displays. Graphics requires much of the CPU power, and the information required to display graphics is too much to transmit over serial lines. The QLOG server will pass the result of a calculation to a node which will perform the graphics task displaying the information; in Datalog's case this is a workstation running a windowing environment called QNX Windows.

Nodes can be in a variety of forms, for example:

i) A full computer system similar to the server node, with a keyboard, hard-drive, monitor and printers.

ii) Display only, with no hard drive. In this case the client will receive all of the program tasks from the server.

iii) CPU only, providing additional computational power. The CPU has no way of communicating with the user.

To see what nodes are available on a network, type net; a listing of all available nodes is given, as well as the total memory and CPU power if all the machines were running as a single system. To run a task, or to access a file on another node, type the node number in square brackets; the following example would run the net program on node 4: [4] net

5.6 First Steps

Logging In

If your screen is blank or has a message "Type a Ctrl z to login", type Ctrl z by pressing the Ctrl and 'z' keys simultaneously. A copyright notice followed by a login prompt should appear:

    Login:

QNX is now waiting for you to enter your pre-assigned userid. If a user account has not already been assigned to you, ask a system administrator to provide you with an account. After entering your userid, you will be given a password prompt:

    Password:

When you enter your password, it will not appear on screen as you type it, for security reasons. Once you have entered your password, you may see some messages which for the time being you can ignore, but you should end up at a system command prompt, which will be a % or $ sign. Typing logoff or bye will log the user off the system.

User Permissions

QNX applies access rights to users and to any files that users create. These access levels are governed by a pair of numbers, the group and member number, where each group or member is a number between 0 and 255. A user with a group number of less than 255 is an ordinary user and will be given the % command prompt as described above. A user with a member number of 255 is the leader of that particular group and has group privileges over the members of that group. A group number of 255 means that the user is a superuser and is given a $ command prompt (eg 255,125). You may notice a number before your prompt. This is the tty (terminal type) number of the console or terminal you are currently logged on to. If you do not see the tty number, run the program promptt to display it.

To see what access level (group and member number) a particular user has, type: finger <userid>

Where <userid> is the name of the user. Other information such as the last time he/she logged on can also be seen.

Permissions will be discussed in some detail in Section 6 of this manual, but a user with a lower group level can not modify, delete, run or even read another user's files, unless the permission is specifically given by a higher level group user.

5.7 Basic QNX Operations

5.7a The Command Line

When typing on the QNX command line, the following editing keys will be useful:

    Up Arrow    will repeat previous commands
    Insert      will allow you to overtype and correct mistakes
    Ctrl-x      will cancel the entire line
    Ctrl-c      will abort a command or the current command line

It is possible to have "odd" characters in the keyboard buffer, left over from a previous program or command, which will have the effect that the system does not accept a command as typed. You will simply be returned to a command line again. For example, if the escape key is pressed before a command is entered, the command will have no effect. To clear the command line of any "odd" characters, use the Ctrl-x option.

The following points should be remembered: i) QNX always needs you to insert spaces between the parts of a command. eg in QNX, cd/tmp will not work; the command has to be cd /tmp.

ii) QNX is case sensitive. (It distinguishes between upper and lower-case characters). Commands are normally lower-case. The file "TEST" and "test" are different files.

iii) QNX always uses a forward slash "/" rather than the DOS back slash "\" as the subdirectory separator.

iv) QNX is powerful and unforgiving. You are rarely asked to confirm commands, so consider carefully your instructions before you enter them.

v) Files can have any name in QNX; executable files do not have any special extension as in DOS. Filenames can be up to 16 characters long and can include numbers and symbols as well as letters.

vi) QNX uses numbers to represent disk drives. There is always a colon after the drive number, eg 3:/

As standard, the hard drive used by Datalog is partitioned into 2 parts:-

    Main working partition is    3:/
    Secondary partition is       4:/
    Floppy drive is              1:/

If specifying a particular node number, the number must be entered inside square brackets:ie [1]3:/

vii) QNX provides a command summary and available options for most commands by placing a question mark after the command; for example ls ?

    use: ls [directory][options]*
    options: +modified -sort p=[^]pattern +unused -dir_off +dir_on +age_sort
             +Size_sort +reverse_sort +size +blocks -All c=columns -columns_off
             +file_list +horizontal +verbose +executable -executable +Modified_only
             +clear_screen w=column_width +tx_time b=baud_rate -modified l=line_length

viii) To reboot a QNX node, simultaneously press the following 4 keys: CTRL ALT SHIFT DEL, not CTRL ALT DEL as with MSDOS.

5.7b The Directory Structure

QNX has a rigid file structure and the user must know where important files are kept, ie in which directories. QNX and QLOG have predefined directories:

    /config     all the system configuration files
    /cmds       all system utility programs (ie ls, cd, more etc.)
    /tmp        temporary working directory
    /drivers    system programs to control disks, interfaces etc.
    /user       the root directory for all user directories

    /datalog            the root directory for all QLOG files and programs
    /datalog/cmds       all QLOG executable programs
    /datalog/config     all QLOG configuration files
    /datalog/dbms       databases
    /windows            the root directory for all the windows files

The complete structure will be looked at in more detail in Section 6 of this manual.

5.7c Changing Directory

After logging in, the user is placed in his/her home directory; eg if bob logged in, he would be located in 3:/user/bob.

In QNX, your current working directory is not shown at the command prompt as it normally is in MSDOS - this is because QNX directory names can become very long. To see your current directory type: pwd

The message returned could be: [1]3:/user/bob, which is node 1 drive 3, /user/bob.

The main command to change your current directory is cd, but there are rules on how this command should be used. Examples:-

    cd                         takes you to your home directory no matter where you are located
    cd /user                   takes you to the user directory
    cd 3:/cmds                 takes you to drive 3, /cmds directory
    cd [4]3:/user/bill/text    changes to node 4, drive 3, directory /user/bill/text
    cd /                       takes you to the root (top) directory
    cd ^                       takes you up one directory
    cd ^^                      takes you up two directories
    cd fred                    takes you into the subdirectory fred from your current directory

To better illustrate the basic rules, consider the directory structure as a tree with branches:Those shown are all valid directories and subdirectories from Node 1 with a partitioned hard drive and a backup Node 2.

    [1]3:/
        cmds
        datalog
            cmds
            dbms
        user
            datalog

    [1]4:/
        datalog
            dbms

    [2]3:/
        cmds
        datalog
            cmds
            dbms
        user
            datalog

In this case, the tops of the branches are the root directories: [1]3:/, [1]4:/ and [2]3:/

The second tier can be thought of as subdirectories of the root, or the initial root directories of individual branches. eg from [1]3:/, we have:

    [1]3:/datalog  ----  [1]3:/datalog/cmds
                         [1]3:/datalog/dbms
    [1]3:/cmds
    [1]3:/user     ----  [1]3:/user/datalog

The main rules are as follows:-

a) As soon as you enter / after cd, you will first be taken to the root of that particular branch; the system will then look in that root for any particular directory name you have specified. Therefore, in order to change branches, your directory path must begin with a /. If you are changing main branches, ie going to another drive or node, these must be specified also.

b) If you are staying on the same branch, then the slash is not required when going into a subdirectory below your current directory; ie if no root is specified by a /, the system will automatically look in your present directory for the directory you have specified. If you are going up the tree, then you can use the ^ symbols to go up to different levels.

c) If no node or drive is specified, the system will default to Node 1 and drive 3. Therefore, when located on any other node or drive, unless you are staying on the same branch, the full path together with node and drive have to be specified.

These rules can be illustrated using the directory structure shown and the following examples:-

    Location            Command                     Destination

    [1]3:/datalog       cd cmds                     [1]3:/datalog/cmds
                        cd /cmds                    [1]3:/cmds
                        cd dbms                     [1]3:/datalog/dbms
                        cd /dbms                    not possible
                        cd 4:/datalog/dbms          [1]4:/datalog/dbms
                        cd [2]3:/datalog/dbms       [2]3:/datalog/dbms
                        cd ^                        [1]3:/
                        cd ^user                    [1]3:/user

    [2]3:/datalog       cd cmds                     [2]3:/datalog/cmds
                        cd /cmds                    [1]3:/cmds
                        cd [2]3:/cmds               [2]3:/cmds
                        cd ^cmds                    [2]3:/cmds
                        cd dbms                     [2]3:/datalog/dbms
                        cd /dbms                    not possible
                        cd 4:/datalog/dbms          [1]4:/datalog/dbms

These same rules will apply, not only when changing directory, but when copying, moving or deleting files. For each one of these operations, the system needs to know, first of all, where to find the files.

5.7d Devices

Devices are a means for input and output, to be treated by QNX in the same manner as files, giving great flexibility in operating the system. For example, a printer can be attached to a serial port or a parallel port (NB the parallel port should normally be used); the output of a program can be directed to either device without any thought from a programmer.

A device name always starts with a dollar sign ($); some examples of valid devices are:

    $lpt       first printer port
    $lpt2      second printer port
    $mdm       first "base" serial port
    $term1     second "base" serial port
    $con       main console
    $con2      second console
    $win1      window terminal (QNX windows)
    $cti1      CTI serial port (multiple serial port)
    $null      a device that does nothing

To see what devices are mounted, type: mount

Files can be copied to a device, as we will see shortly.

When a device is mounted (this occurs either automatically when the machine boots, or by issuing a mount command in the system initialization file), it is allocated a tty (terminal type) number as described previously. Devices can be referred to by the tty number; For example, if the parallel port $lpt is given the tty number $tty1, that port could be referred to by either $tty1 or $lpt. Beware, however, that the tty number will vary depending on what devices are mounted on a particular system. Do not assume, that because $tty1 is $lpt on one system, that it is the same on all machines or configurations. When tsk is run, it shows the tty number of the device on which the task was started. When the mount command is run, you will see the drives that are mounted on the machine as well as software libraries that are mounted.
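To illustrate this equivalence (using the cp command covered in the next section, and assuming, purely as an example, that $lpt has been allocated $tty1 on this particular machine), the following two commands would send the same file to the same printer:

    cp /tmp/newrecord $lpt
    cp /tmp/newrecord $tty1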

5.7e Copying Files

Some new concepts are introduced here with the copy command. These concepts may be used with other commands such as mv and ls. To copy files between directories use the following: cp <source file> <destination address>

For example, there is a file called newrecord, located in the temporary directory, which we want to copy onto a disk in floppy drive 1, in a subdirectory called records:

    cp /tmp/newrecord 1:/records

Note that there is no / after 1:/records. If this was put, the system would expect a further subdirectory name.

In the above example, the name of the file could be changed by changing the destination name; for example to new_name: cp /tmp/newrecord 1:/records/new_name

Be certain that the directory you are defining actually exists. In the first example, if there was no such directory as /records, the file newrecord would have been copied to the root directory, ie 1:/, and its name would have been changed to records.

To copy all of the files from the /tmp directory to /user/dave using the wild card (*):

    cp /tmp/* /user/dave

To copy only files ending in p from /tmp to /user/dave:

    cp /tmp/*p /user/dave

Note, that in all of the above examples, the assumption is that the user is not located in the /tmp directory, so that it has to be specified. If the current working directory was /tmp, it would not need to be specified - the system would automatically look into the current directory for those files. The above example would therefore be: cp *p /user/dave

To copy files to a different node on the network: eg cp [1]3:/datalog/cmds/* [2]3:/datalog/cmds

ie the node number has to be specified. This also applies when copying to different drives - the drive number would have to be specified (unless copying to the default node 1 drive 3). eg to copy all files from 3:/datalog/dbms to the same directory on drive 4 (both node 1): cp 3:/datalog/dbms/* 4:/datalog/dbms

eg to copy all files from node 2's temporary directory to the same directory on node 1: cp [2]3:/tmp/* /tmp

A file can also be copied (ie printed out) to a device such as a printer. The following example will print out the file called report, located in 3:/user/datalog, to a printer connected to the 2nd parallel port on node 2: cp /user/datalog/report [2]$lpt2

If a file is copied, and no destination is specified (ie cp /user/datalog/report ), your current working console is assumed to be the destination and the file will appear on your screen.

As an exercise in copying files to devices, you should copy a text file to a printer and to a console on another node.

Mistakes that commonly occur when using the cp command are the incorrect spelling of the destination directory, with the consequence that a new file is created; for example, if we were copying a file called test to /user/bob but instead issued the command:

    cp test /user/bib

The result of this is that a new file called bib would be created in /user.

Another common mistake is to forget the $ sign when copying files to a printer; for example: cp /tmp/log lpt

will copy /tmp/log to a new file called lpt in whatever directory we are currently in.
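The command that was actually intended, with the dollar sign included, would be:

    cp /tmp/log $lpt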

5.7f Moving, Deleting and Renaming Files

Moving Files

To move files, use the same procedure as when copying files between directories:

    mv <source file> <destination name>

Examples:

    mv /tmp/well.rpt /user/bob                   moves well.rpt from /tmp to /user/bob
    mv /tmp/well.rpt /user/bob/new_well.rpt      changes the name of the file as well as moving it to another directory
    mv /user/bob/demo [2]3:/tmp                  moves the file from bob's home directory on node 1 to the temporary directory on node 2
    mv demo.script test.script                   simply changes the file name without moving it

If you try to move a file to a file name that already exists, the command will fail, whereas if you copy a file to an existing file name, the existing file will be overwritten.

Deleting Files

To delete a file, use the rm (remove) command; eg to delete a file called report in /user/fred:

    rm /user/fred/report

The wild card (*) can also be used to delete multiple files; eg to delete all of the files in the temporary directory:

    rm /tmp/*

The interactive option +i provides some safety and should be used when deleting multiple files. Using this option, you will be asked to confirm the removal of each individual file. ie

    rm /tmp/* +i

If a file cannot be deleted, the user may not have permission to delete the file; the file could be busy (someone or some program is currently holding the file open); or the file could be corrupt.

Renaming Files

The command is ren and it works in much the same way as the other file commands:-

    ren well.txt well.rpt          renames a file in your current directory
    ren /tmp/well.txt well.rpt     renames a file in a different directory - notice that you do not need to give the full path when you give the new file name

5.7g Listing Files

To list all of the files in your current working directory: ls

The ls command will only list the actual file names - no information about the file will be given.

To list all of the files contained in another directory: eg ls /datalog/cmds

As an exercise the user should list all executable files in the /cmds directory beginning with the letter f.
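One possible answer, using the p= (pattern) and +executable options listed in the ls ? summary earlier (the exact pattern syntax should be confirmed with ls ? on your own system), might be:

    ls /cmds p=f* +executable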

To list the name of files together with the file size, time and date of creation, use the files command. The size is given in blocks, where one block is 512 bytes;

    files [1]3:/datalog/cmds    will show all files in [1]3:/datalog/cmds

This command will not only list the files in your present directory, but in any subdirectories within the directory. In these cases, the directory path will be given as well as the file name. The +d option will list just the subdirectories, no files, contained in a particular directory: files /tmp +d will show any subdirectories in the temporary directory.

The +v option shows all information about the files, including file permissions and attributes. For the time being, you need only be aware that these exist and how to view them. They will be covered in detail in Section 6 of this manual.

5.7h Using the more command

This command has 2 distinct functions:-

1. To view a text file
2. To view the entire output of a command (eg ls, files, tsk) that is too big to fit on one screen.

Viewing a text file

For example, to view a file called .login located in your current directory: more .login

From this point, if you want to make any changes to the file, enter e and you will automatically open the QNX editor. Once you have saved any changes made and left the editor (ie grey +, w, enter, grey +, q, enter), you will be placed back into the more program, viewing the file. Simply press the esc key to exit the program and return to a command prompt.

The QLOG program called help uses the more program to display help text files. The following example should be stepped through: At the command line enter help. Use the arrow keys to select the file called qnx_cmds and press F7 to view this file; you will now be in the more program viewing the qnx_cmds help file. Press F1 for the option help screen and practice using the functions listed. Try finding the text "ls" by pressing '/' or 'f', then enter 'ls' as the text to find. This is the fastest way of locating a subject within a help file.

Viewing Output

If the output from a command is too big to fit on one screen, all that you will be able to see is the bottom page of information remaining on the screen. To see any of the information that has already scrolled through the screen, you need to use the more command. The command is used slightly differently, in that you are re-directing the output to a temporary file that can then be viewed. eg

    files +v /datalog/cmds | more

The pipe | is used to redirect the output to the more program. A temporary file with the name pipe followed by an extension will be created in the temporary directory. This is the file that you will be viewing and it will remain in /tmp until you exit the more program. When you exit using esc the file will disappear.

5.7i Redirecting Input and Output

As you have seen, a file can be copied to a printer, but what if we wish the output of a program to go to a printer? QNX provides a facility to redirect the input or output of a program using the > and < symbols. By default, the input is the keyboard and the output is your current console.

To output the contents of your current directory (ie ls) to a printer on the first parallel port on node 1:ls > [1]$lpt

Rather than hard copy, ie output to a printer, the output of a command can also be redirected to a file. To output a list of files (only) to a temporary file called junk:-

    files -v > /tmp/junk

The following would append (add on to) the file called junk (if it is not found, it is created):-

    files -v >> /tmp/junk

To input to a program called mytask from a file called stuff: mytask < stuff

This redirect input could redirect the input from a keyboard on a different node:eg this example will run mail on your node but accept keyboard input from node 4 console 2: mail < [4]$con2

5.7j Creating and Deleting Directories

To create a directory, use the mkdir command. The directory must have a different name to any file that already exists in the current directory. eg you are located in /user/fred and wish to create a subdirectory called text:

    mkdir text    will create /user/fred/text

The same subdirectory could still have been created if you were located in a different directory by giving the full path name:mkdir /user/fred/text If there had been a file called text in /user/fred, then the subdirectory text could not have been created. There can be directories with the same name on different drives. A directory is similar to a file in that it has permissions like a file and these permissions need to be set. These will be looked at in Section 6 of this manual.

To remove a directory use the 'rmdir' command. It is used in exactly the same way as mkdir, but the directory must be empty (of files or further subdirectories) before it can be removed.
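For example, to remove the subdirectory created above (once it is empty of files and further subdirectories):

    rmdir /user/fred/text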

5.7k Printing Files.

As stated earlier, text files can be copied to a printer by using the cp command. However, QNX provides a method of formatting the printout by using the list command. For example:

    list w=8 l=12 <filename>    where w = page width, l = page length

Screen printouts:- use the keys Ctrl Alt and Printscreen simultaneously. You have the following options:

    print screen
    save screen (enter a filename)
    calculator (F10 to exit)

5.8 More About Tasks

As mentioned earlier, the tsk command will tell us what tasks are running on a particular node; for example, to see what tasks are running on node 4:

    tsk n=4     This is the correct command to use; the tsk is run from node 1, but views the tasks on node 4.

    [4] tsk     Here, the tsk is actually being run on node 4, so the results would be the same as above. However, if the reason we were checking the tasks was because of a problem on node 4, running the tsk program on node 4 may compound the problem.

The tsk command shows the tty number of the device on which the task was started. If you wish to start a task on another tty, use the ontty command; eg to start the dau_admin program on $tty99:

    ontty 99 dau_admin &

The $tty99 is used to run programs when no output is required.

The user should be cautioned that if a background task is started from a shell within windows, the task will be terminated when that shell is closed. For this reason, if you are in windows, background tasks should be started on another tty number using the ontty command. This could be tty99 as shown above, or perhaps the tty number of a normal console screen.

Administrator Tasks

Tasks that have a higher priority than the person starting the task are called administrator tasks. As the name suggests, an administrator task administers a particular part of the system. Examples of QLOG administrators are dau_admin, the data acquisition administrator, and dbadmin, the database administrator. System administrators would include timing and password administrators. Administrator tasks need to be able to communicate with other tasks over the network, thus they register their name with the task administrator. To see the task names that are registered:

    tsk na

5.9 Stopping Programs

Normal tasks or commands can be halted by using Ctrl_c; eg dir / has a large output - to stop it part way through, use Ctrl_c.

QLOG administrators can only be stopped by using dau_kill <administrator> eg dau_kill DBadmin

Other (QLOG) programs have to be stopped by using the slay command. eg to stop a program called dead_program running:

    slay dead_program

If this program was running on another node, end the command with the node number, eg:

    slay dead_program n=2

Again, you should use the n= option, rather than running the slay command on node 2, to avoid potential problems. One likely reason for you to be slaying a program is that it has crashed and frozen up a console or terminal. Therefore, if you entered the command [2] slay dead_program, you are risking bringing the problem to your own node.

You can only slay programs that have been started by you or by a user with lower permissions. As an exercise, start the mail program on tty 99 on node 1 and then slay the mail program.
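A possible sequence for this exercise, using the ontty and slay commands shown above, would be:

    ontty 99 mail &     (start the mail program in the background on tty 99)
    tsk                 (confirm that mail is running and note its Tid)
    slay mail           (stop the program)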

When any task or program is started, it is given a unique task id (Tid), which will be displayed in the information given by tsk. A task can be referred to by this Tid, or by its name. If there is more than one task running (eg plot programs) with the same name, and you try slaying the program name, you will be prompted to give the Tid of the correct task to kill. If you know the correct Tid, the program could be slayed directly by using the i= option. eg slay i=4b0e n=2

5.10 Using Floppy Disks

To be able to use a floppy disk, there are two steps: 1. Format the disk 2. Initialize the disk

To format a disk use the fdformat command. For example to format a disk in drive 1: fdformat 1 +1.4m where 1.4m signifies 1.4 Megabytes (2880 blocks)

If you are located on node 1, and the disk is in the floppy drive on node 2, the command would be:fdformat [2]1 +1.4m

To initialize the disk: dinit 1

The disk is then ready to use, but you should check it for bad blocks before doing so:-

    dcheck 1 +m

The +m (mark) option, should any bad blocks be found, would mark the blocks, recording them in a file bad_blks in the disk's root directory. The disk can then still be used.

You will notice a file called bitmap on every disk, in the root directory, including hard drives. This file contains the sector allocation of the disk and must never be moved, deleted or tampered with.

WARNING: all data on a disk will be deleted when the disk is initialized or formatted.

The query command will show the amount of space, used and remaining, on a floppy or hard disk. eg query 1

May respond with: 0.7M free, 0.7M used, 50% full ( 1440 free, 1440 used of 2880 blocks )
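Putting these steps together, a typical sequence for preparing and checking a new disk in the floppy drive on node 1 would be:

    fdformat 1 +1.4m    (format the disk)
    dinit 1             (initialize it)
    dcheck 1 +m         (check for, and mark, any bad blocks)
    query 1             (confirm the space available on the disk)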

SECTION 6 - click here to go to main menu

QLOG and QNX APPLICATIONS

6.1    The Directory Structure
           Whole System
           The QLOG (3:/datalog) structure

6.2    Configuring a System
           System Initialization Files
           Creating User Accounts
           User Directories

6.3    Locating Text and Files

6.4    Time
           Setting Timezone Offsets
           Minor Time Corrections

6.5    File Access Rights
           Attributes and Permissions
           Changing Attributes
           Extents

6.6    Program Operations
           Scheduling Tasks with cron
           The use of ditto
           Accessing MSDOS disks
           Additional Commands and Options

6.7    Archives
           Compressing Files with zoo
           Using fbackup

6.8    Checking the File System
           Open Files
           Checking for Bad Blocks
           System Checks (using chkfsys)
           Corrupt Files (using zap)

6.9    GNU (x-y-z) Plots

6.10   Modification of Depth Database

6.11   Backing up the Databases
           Removing Records Using dbprune
           Time Database
           Depth Database with dbget

6.12   Creating Restricted QLOG menus

6.13   Changing Default Parameter Names
           Use of display, plots, edits text files
           Use of channels.txt

QLOG / QNX COMMANDS

6.1 DIRECTORY STRUCTURE

The user should already be familiar with some of the names of certain QLOG files. The user should now become familiar with the detailed directory structure of the QNX system and with program and file names, relating them to the programs run from the QLOG menu.

To help this, print out from the following commands:-

    dir / -f > $lpt           directory structure, without files, from the root directory
    dir 3:/datalog > $lpt     directory and files of the /datalog directory (ie QLOG)

Another useful aid would be to print out the help file 'manual', which details the help files available and the files contained in /datalog/cmds (ie QLOG programs).

Directory structure:-

    /config        hard drive and system configuration files, eg sys.init files for nodes/kns', cti.init, tzset.sh, sys.env
    /cmds          QNX commands - enter <command> ? for help on executing these files
    /datalog       see next section
    /drivers       hard drive and graphics controller files
    /netboot       QNX boot files, the newest of which should be entered in the boot set up file
    /dumps         files dumped here in the case of errors or corruption
    /mailboxes     created for each user after authorization
    /penpal        word processor
    /qterm         communications
    /ripcam        spreadsheet
    /lib           program libraries
    /tmp           temporary working directory
    /user          home directory created for each user after being authorized
    /windows       eg /apps - applications, /config - configuration, /drivers - mouse and video controllers

Breakdown of the QLOG menu, ie the /datalog directory:-

    /datalog/calcim_dat    to store calcimetry results
    /cbm                   for coal bed methane files
    /chrom_dat             to store chromatograms (calibrations or samples)
    /cmds                  QLOG programs
    /config                log header motifs, m200 setup, calib and set up, header, tomb, profiles, decimals and preferences
    /dbms                  depth databases (drive 3 or normally 4)
    /default               alarms, metric and imperial defaults for units and decimals
    /displays              realtime screen displays and headers
    /help                  english and french
    /menus                 dials for the QLOG menu (modify in /user/***/windows for restrictions)
    /plots                 GNUplot configurations
    /plots/data            data files for the above
    /script                plot control, script and extra files for displays and headers
    /text                  display.txt     realtime displays
                           edits.txt       extra dbase parameters
                           channels.txt    channel names, calibration
                           plots.txt       plot headers
    /trips                 trip files if required
    /windows

6.2 CONFIGURING A SYSTEM

"sys.init" Files

Configuring a QNX system can be a complicated task, but from an engineer's perspective, normally all that is required are minor changes and fine-tuning of the default initialization files. The user should be cognisant that any commands placed in the "sys.init" file are run (automatically) by the system and are not run by any particular user. When a node boots, a file called "sys.init.n" is executed, where n is the particular node number on the network. The file is stored in the directory 3:/config. If there is no network, "sys.init.0" is executed.

Basic use of sys.init files:

    sys.init.n       for each node
    sys.init.0       used if the system is not configured as a network. With our system, even one singular node is configured as a network, ie node 1
    sys.init.kns     default for KNS work stations, copy to relevant node number

Use:-

    mounting of hard and floppy drives
    consoles
    running timer, rtc at, tzset.sh (in which the time offset is set)
    programs run on ontty 99 (no output) - netboot, nettime, cron, passadmin etc
    mounting drives
    search patterns for drives and commands
    video drivers:

defaults held in 3:/windows/drivers

    qw.vga_bios oakland, 1 &       - typical for older kns
    qw.vga_bios atiwonder, 1 &     - typical for QLOG server
    qw.vga16 g=2 m=5               - typical for new kns's
    qw.vga &                       - default for any graphics

mouse drivers: defaults held in 3:/windows/drivers

    mdrv microsoft serial mouse (200dpi) dev=$mdm
    mdrv microsoft (inport) bus mouse int=3 &

set keyboard (for UK keyboards), eg kbd 102.UK

for communications - comm, stty settings for ports, eg

    comm b=38400 i=ATZ| a=ATA| +h +o +l l=/logs p=1 l=15
    stty b=9600 +hflow +iflow +oflow +split esc=0 >$mdm

init.cti.n    for multi serial port stty settings

The most common changes to be made to the sys.init files are to the mouse and video drivers and to correct port settings. The user should familiarise themselves with these commands. More detailed use and purpose of these files will be covered later in the manual, for the more advanced user.

Creating New User Accounts

The authorize command will create accounts for new users. The user creating the account must be a superuser and will have to supply his/her password to run the authorize program. A non superuser may use the program to authorize a user, but no user directory or default .login file will be created, so that that user will be unable to log in to the system.

From the authorize program you can create, edit, delete and view accounts. If you are creating a new account you will have to provide an initial password for the user; this is normally the same as the user's name. When the user then logs onto the system, he/she can change the password to their own choosing by issuing the command pswd.

Normally the default values in authorize are correct for new accounts, and the user can simply press <carriage return> through most items. Required entries are:

    User ID and Password - required for login
    User Name (enter the user's real name)
    New user group and member level (see below)
    Password Expiry - set to 0 weeks to override

Once the required entries have been made, <Grey +> on the keypad should be entered in order to save the changes. The user will then be prompted as to whether to create a user directory and a default .login file. The user should reply Yes to these prompts. This completes the authorization of a new user.

For security reasons, different users may be given different access levels to the system. This is governed by the Group and Member numbers assigned through authorize. These range from 0 to 255. Group number 255 denotes a superuser, giving complete access to the system. Member number 255 indicates the leader of a particular group. The command finger <username> will show the group/member numbers of a particular user.
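As an illustration (the user name bob here is just an example): after a superuser has run authorize and created the account, bob logs in with the initial password and could then run:

    pswd          (change the initial password to one of his own choosing)
    finger bob    (confirm the group and member numbers assigned)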

User Directories As already seen, when a new user is authorized, a user directory should be automatically created for that user: ie if Bob is authorized onto the system, the directory 3:/user/bob will be created

The following files will be automatically created in the user directory:

    .color     default colours for screen and text
    .login     batch file containing commands to execute when a user logs in

The default .login file will create a mailbox for the user when he first logs in. This enables mail to be sent to this user. When a user logs in, the command ec.login is run; ec executes the user's .login file containing the user initialization commands. This .login file can be customised for a user's own particular requirements. The user directory will also hold the user units, decimal files and alarm settings (units.cfg, user_dp.cfg and alarms.cfg). These could be copied to the user directory from the defaults on the system; alternatively, as soon as a user makes any changes to the user units or decimals in the QLOG setup menu, the files will automatically be created and saved in the user directory. The .login file has many applications for procedures to be followed when a particular user logs on to the system.

Examples: To start a particular display (eg 2) upon login display s=2

To start a screen plot and screen display automatically on login

windows wplot plotname & wdisplay &

To have displays, help files and QLOG menus in another language

setenv LANGUAGE = french

For security, to enter the QLOG menu on login, but logging you off should you quit QLOG ie giving you no access to the QNX command line

qlog bye

6.3 LOCATING TEXT AND FILES

Text

A good example of locating text within files is when trying to find help on a particular subject. For example, if we wish to find all references to "hookload" in all the help files:

    locate "hookload" /datalog/help/*

This will produce an output for every occurrence of the pattern "hookload", providing the help file name, the line number and the line in which the match was found. This example should be sent to more.

Files

If you are trying to find a file on a particular drive, a command can be made by searching the output of the files command for the occurrence of the file name using the locate command. For example, if we wish to find a file called "fortunes" on node 1 drive 3:

    files [1]3:/ -v | locate "fortunes" | more

The output of the files command becomes the input of the locate command, which becomes the input of the more command. There is a command called ff which will search for a file and which is easier to use than the multiple commands. For example:

    ff fortunes

If run from the root directory, the whole directory will be searched. The wildcard * can be used to find all files of similar names.

6.4 TIME

Setting the correct time zone

Ensuring that the correct time zone is set is very important to the operation of QLOG. Data has to be stored worldwide in a consistent format, yet displayed in regional time variations. To allow this to happen, the QLOG system records data with time set to UTC (Coordinated Universal Time), otherwise known as Greenwich Mean Time. Data can be displayed in regional times by using a timezone to offset from UTC. Setting the time zone offset has to be done in two locations, because QLOG uses two different compilers (QNX C and C86 ANSI C) which refer to timezones in two different ways.

    C compiler      Timezone offset is set by the command tzset +/- hour. This command is placed in the file /config/tzset.sh, which should then be set within the sys.init files for each node so that the offset remains correct after reboots.

    C86 compiler    Timezone offset is set in the environment by TZ=********. This is stored in /config/sys.env for each node on the network. The command fix_env_tz +/- will allow for summer/winter changes in regional areas.

Using TZ=

Once this command is set correctly for a particular timezone, it need never be changed.

Form of command:    TZ= 3 letters, number, 3 letters, number

The three letters represent the abbreviation for regional time zones (the specific letters are not important) and the number represents the offset from UTC. 2 cycles are present to allow for regions that reset clocks for summer and winter times. This number must be set the same (ie for the normal offset from UTC) in both cases.

Alberta for example:    TZ=MST7MDT7

    MST = Mountain Standard Time
    MDT = Daylight Saving Time

The normal offset from UTC is -7 hours. Note that the number in the string is positive, whereas the actual offset is UTC minus 7. The best way to think of this is: wherever you are, how many hours need to be added/subtracted to the local time to get you to UTC time. Therefore, from Alberta, to get to UTC time, we need to add 7 hours.

To switch between the two settings, a command called fix_env_tz - is used. This is also set in the file /config/tzset.sh. The command simply adjusts the current offset (-7 hours) by 1 hour (ie -6 hours). The command should therefore be commented out and ignored if we are in Mountain Standard Time.

The commands in the file tzset.sh should therefore look like the following for each case:-

    Mountain Standard Time:
        tzset -7
        fix_env_tz

    Daylight Saving Time:
        tzset -6
        fix_env_tz -

When you have to switch between the two time settings then, you should make the appropriate changes to tzset.sh and then run this command on each node of the network: ie [node number] tzset.sh

You can then run the following commands to check that you have set the offsets correctly:

    date      the displayed time should show the correct local time
    setenv    this will show you the current environment; regarding TZ:
                  in winter, it should show MST7MDT7
                  in summer, it would show MST7MDT6

Taking the UK as a second example:The normal time is UTC (or GMT) and British Summer Time is UTC + 1 hour. The TZ command would be:TZ=GMT-0GMT-0

tzset.sh should look like the following:-

    Normal Time:
        tzset +0
        fix_env_tz +

    British Summer Time:
        tzset +1
        fix_env_tz +

Running setenv, you should see the following results:

    in winter, GMT-0GMT-0
    in summer, GMT-0GMT-1

Note that once you start going East of the Greenwich Meridian, local time is ahead of UTC. The number in the TZ= string should therefore be negative, and for summer time adjustments, the command should be fix_env_tz +.
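As a further sketch following the same pattern (the three-letter abbreviation AST used here is purely illustrative, not a setting taken from a real installation): a location that is 3 hours ahead of UTC all year round, with no summer/winter change, might use

    TZ=AST-3AST-3

with tzset.sh containing

    tzset +3

and the fix_env_tz line commented out, since no seasonal adjustment is required.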

Once you have set the time zone offsets correctly, you have to ensure that the hardware clock is set correctly and, if not, reset it. To check the current time on the hardware clock, use the date command. The time displayed from this command, and on screen, should show the correct local time. This system time is taken from the hardware clock with the timezone offsets then applied, ie the hardware clock (if viewed in CMOS) would show time relative to UTC.

If the time shown is incorrect (not the correct local time), reset it using the following command formats (assume 12th Aug 1996, 5:55pm):

    date 12 8 96 5 55 pm
    rtc at +s

The rtc command will reset the hardware clock (at is the type), the +s option sets the hardware clock. The hardware clock will therefore be set to 5.55pm in this case, +/- whatever the timezone offset is.

The hardware clock will then show time relative to UTC, whereas the system or displayed time will be relative to the local timezone. The date command should, under no circumstances, be used to make actual timezone offset corrections. Only change the time with date if you are positive that the offsets have been set correctly.

Minor changes to the time

After setting the timezone offsets correctly, any minor time corrections required for the system or displayed time should be made by using the date and rtc commands as shown above. If the rtc command is omitted, the time would revert back to the original setting when the next system update was performed.

Obviously, you should be careful not to make changes to the time at wellsite if you are drilling. This would affect the databases and time calculated parameters such as ROP and lag. To make these changes, it is best to wait until rig operations allow you to shut down administrators, make the correction, and restart the system.

Using File Dates Commands such as cp, backup and qcp have an option of +newest which copies or sends only files whose dates are newer than those on the destination directory. For example if we wish to copy only the newest files from node 1 drive 3 /datalog/script to node 2 drive 3 /datalog/script: cp [1]3:/datalog/script/* [2]3:/datalog/script +n

6.5 FILE ACCESS RIGHTS

Attributes and Permissions

As was discussed earlier, each user is given a group and member level, which is a number between 0 and 255, where 255 is the highest access level. QNX implements a system to protect access to all files, directories and disks. There are two levels to this protection:

    Attributes:     these are actually part of the file, whose restrictions apply to every user.
    Permissions:    the level of access (or attribute) granted to members of the same group and to all other users.

There are 5 attributes to a file:

    READ       allows data to be read by other files or programs; allows users to view only
    WRITE      allows data to overwrite the contents of the file; ie the file can be edited
    APPEND     allows new data to be added to the end of a file
    EXECUTE    makes the file executable, ie it performs a task or tasks
    MODIFY     allows attributes to be changed

Directories have the following attributes:

    READ       allows programs to retrieve files from the directory
    CREATE     allows new files to be added to or created in the directory
    BLOCK      prevents programs from looking into the directory (ie makes files invisible)
    MODIFY     allows attributes to be changed

To see what attributes/permissions a file or directory has, use the following command:

    files +v       for files
    files +v +d    for directories

If the command files +v was run, you would see the following information:-

    Blks   X   Loc      Grp   Mem   Attr    G-Perm-O       Date        Time    Name
    76     3   016E74   255   255   meawr   e       e      14-Aug-93   12:22   batch
    21     1   014E20   145   065   m-awr   m-awr   r      01-Aug-93   17:57   test.txt
    1      1   0263B6   167   012   m-awr   r       e      12-Aug-93   15:18   report

where:

    Blks    is the file size, 1 block being equal to 512 bytes
    X       is the number of extents (see below)
    Loc     is the starting location of the file on disk
    Grp     is the group number of the file owner (who created the file)
    Mem     is the member number of the owner
    Attr    are the attributes of the file
    Perm    are the attributes permitted for other users, where
    G       are permissions for the members of the same group as the owner
    O       are permissions that apply to all other users
    Date    is the creation date or the date last modified
    Time    is the time created/modified
    Name    is the file name

The first file, called "batch", is 75 blocks in size (which is 38400 bytes), has 3 extents, is located at 016347 on the disk and was created by a superuser (255,255). The permissions are set such that anyone in the same group can execute the program, as well as anyone not in the same group, ie any user can run the program.

The second file is an example called "test.txt"; the person creating the file had an access level of (145,65). The attributes are set so that Modify, Append, Write and Read are all turned on. This file is therefore not an executable file. Members of the same group (145) have the full permissions available, whereas members of other groups have only read permission, ie they can only view the file.

The third example is the file called "report". The attributes are set so that Modify, Append, Write and Read are all turned on. Members of the same group (167) can only view the file. Other members have no permissions at all; they would have no access to this file.

For a directory example, the command files +v +d is run and the result looks like:

    Blks   X   Loc      Grp   Mem   Attr    G-Perm-O       Date        Time    Name
    1      1   00034B   255   005   m-c-r   r       r      14-Jul-93   12:36   cbm
    7      2   02CAC    255   255   m-cwr   wr      wr     1-Aug-93    15:06   chrom
    10     2   017FE    064   255   m-c-r   c-r     c-r    2-Aug-93    12:48   help

The directory called "help" has modify, create and read attributes turned on. Both group and other permissions are set to allow the creating of new files and reading files from the directory. A group leader may access any file owned by any other members of the group. A superuser has similar freedom over all groups.

Changing Attributes

The chattr command is used to change the attributes of a file. The following examples show the use of the chattr command with the previous files:

Example 1    chattr batch a = a p = e

Turns off the append attribute (no user would be able to have this permission while the attribute is turned off) and turns off execute permission to group and other users on the "batch" file (this means that only the owner of the file, in this case superuser 255,255, would be able to run this batch file).

Example 2    chattr test.txt pg = aw po = +m

Turns off the group append and write permission (no members of the same group would now be able to add to or edit the file); turns on the other modify permission (any user would now be able to change the file attributes).

Example 3    chattr report g=170 m=145 pg = r n=new_file a = m

Changes the group and member number of "report" to 170,145 (ie the owner of the file has now changed, so that file attributes would apply to this new owner, and group permissions would now apply to group 170); turns off the group read permission (ie group 170 users now have no access to the file); changes the name of the file to "new_file" and turns off the modify attribute. The last step, turning off the modify attribute, has the effect that no one, not even a superuser, can ever change the attributes or permissions of the file again.

Example 4    chattr cbm p = +c

Changes the permission of the cbm directory so that new files can be created in this directory by all users.

As an exercise, the reader should: 1. Create a text file called "test" in your home directory that has the command beep as the only text in the file. 2. View the attributes of the files and the default permissions 3. Change the attribute of the file so that it is executable. Verify this by running the "test" file.

4. Change the permissions so that group users (only) can execute the file, verify this by using the files command. 5. Change the permissions so that anyone can execute, modify, read, write or append to the file. 6. Change the attributes so that modify attribute is turned off, try to modify the file again.

The chattr command can also be used to unbusy files:

    use files +b to determine any busy files
    use chattr <filename> s = b (changes the status to unbusy)

Extents

Extents are an indication of how many pieces a file is in. An extent of 1 means that the data is continuous in one place on the hard drive. The more extents a file has, the more the file is scattered in different pieces on the drive, and the slower the access speed will be, since QNX will have to physically search through more parts of the hard drive to read the file.

Extents occur as files are appended to or grow. The depth database, dbdepth.qlog, is the prime example; it is continually growing as the well deepens. As it grows, it is likely to come to a portion of the disk where the blocks are already occupied by another file - the depth database will therefore continue its growth in another part of the disk, thereby forming an extent.

Extents can not be totally avoided, but they can be minimized. The easiest way is to simply copy the file to another directory (or even better to a floppy disk), remove the original file, then copy it back to the original directory. Simply by the act of doing this, QNX will first search for a portion of the disk large enough to hold the entire file. If this is possible, the whole of the file will be in one place, ie one extent. If this is not possible, the biggest available space will be used; whatever part of the file will not fit here will then be located in the next largest available space.
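A sketch of this procedure for the depth database (assuming the QLOG administrators have been shut down, files +b shows the file is not busy, and a floppy disk is used simply as the temporary location):

    cp /datalog/dbms/dbdepth.qlog 1:/      (copy the database off the hard drive)
    rm /datalog/dbms/dbdepth.qlog          (remove the original, fragmented copy)
    cp 1:/dbdepth.qlog /datalog/dbms       (copy it back - QNX now looks for one space large enough to hold the whole file)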

6.6 PROGRAM OPERATIONS

Scheduling tasks

Tasks can be scheduled to run at predetermined times. The program called cron must be run as a background task; cron will then check the list of programs to be run and at what time. Cron should be set to run automatically by setting it in the system initialization file. The usual way to do this is by selecting terminal type 99 for it to be run on, since this has no output. ie

    ontty 99 cron &

The list of programs to be run is contained in the file 3:/config/crontab. The list must have 6 fields: minute, hour, day, month, day of week, program name. The following example will backup only files that have changed, from node 1 to node 2, every 24 hours at 01:30:

    30 01 * * * backup [1]3:/ [2]3:/ +a +n p s=c
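Note that cron only reads this file when it starts; a sketch of picking up changes after editing 3:/config/crontab, using the slay and ontty commands covered earlier, would be:

    slay cron
    ontty 99 cron &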

If the crontab file is altered, the cron program must be stopped and restarted, as above, in order for the changes to be read by the system and take effect. The cron program is typically used for updating the realtime clock (this is set by default within the crontab file) and for operations such as backup, but the possibilities for the use of this program are endless.

Working on other network stations

The ditto command will allow the user to view another user's console and optionally use their keyboard; this is useful if you want to help a user run a command or program remotely. From within ditto, type <ctrl e> to bring up a command menu on screen. The ditto command will not work if you try to ditto a window terminal; in this case you would have to use the n option from the menu to change to a QNX console. The opposite will work, ie a user who is in windows can ditto another node from within a windows shell. If you need to reboot a remote station, you can do so from within ditto by using the r option from the command menu.

For example, if I want to ditto node 4, and I want the keyboard enabled:

    ditto n=4 +k

If you do not want the user on the other node to know that you are using ditto, you would use the +q option:

    ditto n=4 +k +q

Accessing MSDOS formatted disks The command called dfs will start the DOS file system dosfsys running which will enable you to copy files to or from MSDOS formatted disks. The format of the command is dfs start a=1

Whilst dosfsys is running, both MSDOS and QNX disks can be accessed, but the syntax is different in the two cases. To access a QNX disk, the floppy drive is referred to as 1:/, whereas to access an MSDOS disk, the floppy drive is referred to by the DOS name, ie a:/

Note that the path divider is still the QNX forward slash, not the normal DOS backslash. Any commands used whilst using an MSDOS disk will be the normal QNX commands, such as ls, cp, rm, mkdir etc.

If copying a file from the QNX drive to an MSDOS disk, you must ensure that the file name suits the DOS format, namely a maximum of 8 characters followed by a 3 character extension. Attempting to copy a file with 16 characters, which is fine in QNX, to a DOS disk will result in an error. DFS will not perform any translation on the file or disk.

The DOS file system does not enable us to format an MSDOS disk. If we are required to copy files to an MSDOS disk, the disk will have to be formatted on another computer.

Example: if we want to copy everything in 3:/datalog/reports to an MSDOS disk in floppy drive A, to a new directory called "new_dir":

    dfs start a=1                          to start dosfsys running
    mkdir a:/new_dir                       to make the sub directory
    cp 3:/datalog/reports/* a:/new_dir     to copy the files

To stop dosfsys:    dfs stop

Since both QNX and MSDOS disks can be accessed while dosfsys is running, if we are regularly copying files to MSDOS disks, there is no need to stop dosfsys after each operation.

The normal use of this process is for the transfer of data files that have been converted to LAS format (see later in the manual) to an MSDOS disk for transfer to a DOS system, or conversely to import data to our database that is provided on an MSDOS disk.

Additional Commands and Command Options

    ff        to locate a particular file, ie ff <filename>

    query     to check disk space used and remaining on a particular drive

    pswd      to change your password

    fopen     checks which files are being held open for use by another program

    slay      to stop a task; this can be used with the program name, node number, tty number or Tid number
              ie   slay program n=2
                   slay i=3bac n=2     (i = Tid number)
                   slay i=3bac t=2     (t = tty number)

backup   principally used to back up from node 1 to node 2, but may be used to backup to and from floppy disks
         eg  backup [1]3:/ [2]3:/ +a +n -p     backup all newer files to node 2
             backup 1:/ 3:/ +a                 backup from a floppy disk (assuming the files on the floppy were copied using backup)
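As a sketch of the "to floppy" direction (the directory name here is hypothetical), a user directory could be copied to a QNX formatted floppy with:

backup 3:/user/bob 1:/ +a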

Common Options

+/- p    pauses        used for example with the chkfsys command to detail each step of a multiple operation; requires user input at each pause
+/- v    verbose       used with the files command to give extra information
+n       newest        eg with the backup command, newest files only
+a       all           eg with the backup command, all the files
+/- b    busy          used with chattr to change file status
+i       interactive   eg when deleting multiple files, the user will be asked for confirmation for each file deletion
+r       recursive     continual, no user input
+r       rebuild       with chkfsys, to rebuild the bitmap
+/- f    files
+/- d    directories
s        status        with chattr, to change file status
s        steal         with qterm, to clear the port buffer
s        set           with rtc, to set the hardware clock
s=c                    clear bits during backup

6.7 ARCHIVES

Compressing Files

If a file or group of files are to be archived, the files can be compressed to save disk space, or to save transmission time if the file is being transmitted by modem. There are many different algorithms for making compressed archives of files, including zip, tar, arc and zoo. Datalog uses the zoo format for compressing and archiving files. An archived file cannot be used directly and has to be de-archived before it can be accessed or read.

The zoo command has 3 levels of help; in ascending order they are:

zoo ?
zoo h
zoo H

Command Format

zoo * <file.zoo> <filename>

where  filename   is the name of the file that is to be compressed and added to the archive file. The full directory path should normally be given as well as the filename.

       file.zoo   is the name of the archive file; the .zoo does not necessarily have to be included as it will be added automatically by the program.

       *          is the particular operation to be performed:-
                  a     add to the archive
                  h     high compression
                  l     list
                  D     delete
                  x     expand files to original directories; they will be created if they do not exist
                  x:    expand to current directory

The high compression is normally used, and should certainly be used for large files and database type files. When archiving, the full directory path of the file should be specified. This is recorded in the archive file and allows the files to be restored to the same directories at a later stage - the importance of this is evident when having to restore all the files needed to recreate a well at a later stage.

The following examples will show the use of zoo:

To zoo all the time files in 3:/datalog/dbms and archive in /tmp/time3.zoo:-

zoo ah /tmp/time3.zoo /datalog/dbms/time*.qlog

If the archive (time3.zoo) does not already exist, it will be automatically created.

To list the contents of the archive file:-

zoo l /tmp/time3.zoo

To delete a particular file (eg time960720.qlog) from the archive:-

zoo D /tmp/time3.zoo time960720.qlog

To restore the files to their original directories (individual files can be specified if necessary):-

zoo x /tmp/time3.zoo

Backing Up Large Files To Disk

If you have a file or files that you want to backup or archive to floppy disk that are larger than the disk, then the fbackup command has to be used. There are two steps to creating an fbackup archive: the disk has to be specially formatted for use by fbackup, and the files have to be copied to the disk.

Note that a QNX formatted disk is different to an fbackup formatted disk. The fbackup program requires only the first disk that will be used to be specially formatted - this disk will initialise an archive directory. When further disks are required, you will be asked for them. They need not be formatted; this will be done automatically by the program. The most essential thing before starting an fbackup archive is that you have enough disks for the size of the file. There is no compression through fbackup, so large files should be compressed beforehand with zoo.

Format and initialise the first disk:-

fbackup 1 in 20 v=___________

To add a file to the fbackup archive:-

fbackup 1 sa filename

where  in   initialise
       sa   save
       v    names the archive directory if required
       20   allows up to 20 files to be added to the archive

The archive directory on the first disk will record the number of files in the archive and the file names. Unlike the zoo command, fbackup does not require the full pathname of the file being archived; it will be recorded automatically. A disk that has been formatted with fbackup cannot be accessed by normal QNX commands, therefore special commands have to be issued in order to view or restore the contents of an fbackup archive. It is important that the disk is labelled as containing an archive created using the fbackup command. The label should also list the names of the files in the archive and their original directories.

To view all the files on the fbackup disk: fbackup 1 fi fi = files

To restore all the files to their original path: fbackup 1 re / re = restore

The root path "/" is necessary to restore the files to their original directories, since fbackup uses the path as stored on the disk; if the "/" is omitted the files will be restored starting from the current working directory. If you wanted to restore the file(s) to another directory, the format of the command is:-

fbackup 1 re <disk dir>, <archive dir>

where  disk dir     =  directory in which to restore the file
       archive dir  =  original directory of the file

eg if dbdepth.qlog was archived from /datalog/dbms and we wish to restore it to /user/datalog:

fbackup 1 re /user/datalog, /datalog/dbms

6.8 Checking the File System

The fopen command will tell the user what files are currently being held open by a program. This should be determined before running any of the following operations, ie all administrators and programs should be shut down.

The dcheck command will check a disk (whether hard or floppy) for bad blocks. If bad blocks are found, the +mark option can be used to mark the blocks as bad and stop them being used by the system. The location of these blocks will be recorded in a file called /bad_blks. There should be no open files if the +mark option is used. For example:

dcheck 3 +mark

will check drive 3; if bad blocks are found, /bad_blks will be created and the bitmap updated.

The chkfsys command will perform a consistency check of the file system on the requested drive. Chkfsys should only be used when the system is idle; there should be no open files when chkfsys is running. Chkfsys searches for errors and corrupt files and should be used at the start of a new job, at regular intervals throughout the job, and at any signs of sluggishness or problems with the system. Before running chkfsys, shut down all programs and administrators and check for open files - passadmin and task should be the only programs open.

chkfsys 3        specifies drive 3

Running the command in this way, without options, will result in any errors or corruptions being reported, and the user being prompted for the course of action. Many errors can be automatically fixed by chkfsys, so the user should reply yes when prompted as to whether to fix the problem. Other corruptions may be reported as unfixable. Here the user has to make a note of the offending file and, when chkfsys has finished (or stop it with ctrl c if many blocks are affected, since each affected block will be detailed and it could take forever!), the user should zap the file.

zap filename

Zap should only be used in this or a similar instance, to deal with a corrupt file that cannot be removed from the system in the normal manner. Zap will mark the blocks previously used by the corrupt file as being used, to prevent them being used by other files - these blocks will be lost from the disk space.

Chkfsys should be run again, until no errors are reported. At the end of the process, chkfsys will recover zapped blocks and rebuild the bitmap. To run chkfsys without pauses and with automatic fixes: chkfsys +r -p

This is good when you think the corrupt files have been eliminated, but may result in corrupt files being missed if you run this command initially. Zap can also be used to remove a directory structure and any files contained within the structure. Chkfsys should always be run after this in order to retrieve the zapped blocks.

zap <directory name>
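Putting the above together, a typical check sequence might run as follows (a sketch only - which administrators are running, and therefore what has to be shut down first, will vary from job to job, and the file name in the zap step is hypothetical):

fopen                confirm that only passadmin and task are holding files open
chkfsys 3            reply yes to fixable errors; note any files reported as unfixable
zap baddata.qlog     only if an unfixable file was reported
chkfsys 3            run again until no errors are reported and the zapped blocks are recovered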

Recovering Deleted Files

If you accidentally delete a file, use the und command to try to undelete the file. To undelete a file, it has to be recovered in a directory other than the one it was removed from. This may be on the same hard drive or on a floppy disk. Time is obviously of the essence, since the longer the period before trying to recover a file, the more likely it is that the blocks that were made free by removing the file will become occupied by another file.

und <filename> <directory>

eg  und file.txt 1:/        the file will be recovered in the root directory on the floppy disk

The und command should only be used by a superuser. If a non superuser uses the command, or if the same directory (ie the one that the file was removed from) is given, the file may well appear to be retrieved, but more often than not, it will be left busy or, even worse, corrupted. If it's busy, no problem - unbusy it with chattr, but check that it is not corrupted. If it is corrupted, you will not be able to access that file, ie to view, copy etc; the file will have to be zapped.

6.9 GNU (x-y-z) Plots

This is the software used by QLOG to produce plots of varying complexities. Examples include the directional plots, the gas ratio plot and all the plots in the engineering suite. We can also use this software to create any plot required. All that is required is a plot command file and a plot data file.

Command file:    3:/datalog/plots/<filename>.plot
Data file:       3:/datalog/plots/data/<filename>.dat

More than one data file can be used for one particular plot; each one would have to be specified in the .plot file. The files should be created in the editor and the data file, in particular, should be of the correct format for the plot to work (ie columns should be lined up correctly; there should be no blank lines or blank characters at the beginning of lines).

Format of the Plot file

set terminal windows
set output
set nokey                        plot with or without illustrated key
set grid                         creates a grid from the tic positions
set logscale x                   would plot the x axis grid as a log scale
set label 1 _____________ at 20,140 left (or right)
                                 any number of labels may be used; they can be aligned from the left or to the right
set samples                      maximum number of data points in the data file
set data/function style lines    function style - plots points only; data style - plots a curve
set tics in                      to plot inward tics on the graph
set xtics 0,10,100               positions tics every 10, beginning at 0, ending at 100
set ytics ..............
set title                        titles the whole plot
set xlabel                       labels the x-axis
set xrange [0 : 100]             scale for the x-axis
set ylabel
set yrange
set no autoscale                 no automatic scaling, so uses the scales defined above
plot /datalog/plots/data/file.dat title anyinfo
                                 gives the name of the data file and the name for the key. If more than one data file is used, the same format should be repeated with a comma separating the two strings.
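As an illustration of the format listed above, a simple ROP versus depth plot might be built from the following pair of files. This is a sketch only - the file names, title, labels, ranges and data values are hypothetical, and the quoting convention for text strings should follow whatever the existing .plot files on the system use.

Command file 3:/datalog/plots/rop.plot:

set terminal windows
set output
set nokey
set grid
set tics in
set title ROP vs Depth
set xlabel ROP (m/hr)
set ylabel Depth (m)
set xrange [0 : 50]
set yrange [1000 : 1005]
set no autoscale
set data style lines
plot /datalog/plots/data/rop.dat title ROP

Data file 3:/datalog/plots/data/rop.dat (columns lined up, no blank lines or blank characters at the start of lines):

10.5   1000.0
12.2   1001.0
9.8    1002.0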

The format of the resulting plot is default:-

ie the plot will be landscape format (horizontal) and A3 sized. For a final well report, the plot should fit on A4 or Letter sized paper, so that the vertical and horizontal scale settings in the plotter set up will have to be changed. Similarly, the plot is likely to have depth on the Y-axis, so for a final well report, the plot would probably be better in portrait (vertical) format; so, again, the scale settings in the plotter setup would have to be changed accordingly.

The colours used for the plot will also be default, ie

titles and border    pen 1    black
grid                 pen 2    magenta

Pen 3 and onward will then be used to plot the curves or data. Should you wish to change the colours for a better appearance (the grid would often be better in yellow), you should change the pen colours in the plotter setup.

Further details on how to use these plots are given in the help file. From a shell within windows:

plot3    gives the gnuplot prompt
<h>      gives the help file selection

6.10 Modification of the Depth Database

A combination of two programs, dbprune and dbprune_lst, enables you to remove unwanted records from the depth database. The changes are irreversible, therefore the user should be sure about what he/she is doing and, as a precaution, ensure that they make a backup of the database before attempting to prune. The original database will not be altered in any way, but the end step requires the modified database being copied over dbdepth.qlog, therefore it is important to keep an original copy in case the process hasn't worked for some reason. NOTE that this process cannot be performed on the hot system, ie while you are drilling and records are being written to the database. This should only be done when you are confident that you can complete the task before drilling recommences, ie during trips or at casing points.

Before starting the process, carefully check the database and make a note of which records you want to remove. You should be 100% certain of these particular records before proceeding. You are likely to be removing records for 2 reasons. Firstly, if you have had extra records created due to an incorrect setting of the padding factor, or created after a depth correction. In the case of an incorrect padding factor, it is simply a case of removing those extra records. In the case of a depth correction, you should make sure which record contains the correct data. For example, if the record for 1000.0m has already been created but you then have to make a depth correction to 998.0m, it is quite possible that instead of 1000.0m being written over, an extra record, 1000.1m, will be created. In this case, the record you want to keep is 1000.1m, so that 1000.0m should be removed. Secondly, there may have been some corruption causing bad depth numbers, where the hole depth in the appropriate column is not the same as the hole depth for that record which is shown in the record information at the top of the page. This should be checked carefully in dedit. However, if this is the only record for that depth, you should NOT remove it using this process - it cannot be replaced. We will see how to deal with this situation (using dbfixer) in the next section of the manual. Obviously, if this corruption has occurred at the end of the database, beyond the end depth of the well, then you can delete these records using this process.

Procedure

Shutdown DBdepth and DBadmin and make a backup of the database.
Restart dbadmin (remember whether drive 3 or drive 4).
dbprune_lst        this creates a datafile called dbdepth.list in the user's home directory.

This data file has the following format:-

Depth      Go / NoGo
1000.0     1
1001.0     1
1001.2     1
1002.0     1

This file can now be edited to remove the unwanted records. Normally, you will be able to use the editor (ed dbdepth.list), but if the database is large, you may find that dbdepth.list is too large for the editor's memory and it won't load. In this situation, use the big editor (bed dbdepth.list) in exactly the same way as the editor. Before editing, check the start and end depths and confirm that the listing is correct. For the records that you want to remove, change the 1 to a 0. Only the records marked by a 1 will be copied over to the new database file. When doing this, refer to the record notes you made beforehand, or simply have the database open on another console - this way, you make sure that you are removing the correct records. When the listing is edited, save the changes and exit the editor.

dbprune

this creates the modified database containing only the wanted records, ie those marked by a 1. The file is called dbdepth.qlognew and again is contained in the user's home directory.

dau_kill DBadmin

Copy the modified database back to the original database:

cp dbdepth.qlognew 4:/datalog/dbms/dbdepth.qlog

restart dbadmin (the crc and index files will be automatically updated) and check that the new database is okay. Check the start and end depths and ensure that the database is complete. Pay particular attention to the depth reference column and the top display information. The depths here must be the same and non-zero, otherwise any ensuing dbdepth or plotter work will fail. If everything is okay, the prune process is complete. You can now remove the original database backups.

Occasional problems have been experienced with the prune process when the database is located in drive 4. This is usually indicated by a meaningless dbdepth.list being created. If this situation does arise, transfer the database to drive 3 and prune using the following procedure:

shutdown DBdepth and DBadmin
cp 4:/datalog/dbms/dbdepth.qlog 3:/datalog/dbms        (leave the original in drive 4 as a backup)

dbadmin &

depth.crc and dbdepth.index will be automatically created in drive 3. They may have to be removed in order to perform dbprune_lst successfully. Firstly, try with the crc and index files in place; if dbprune_lst does not perform, or if dbdepth.list is corrupt, then remove the crc/index files and try again.

dbprune_lst

ed dbdepth.list        as previously
dbprune
dau_kill DBadmin
cp dbdepth.qlognew 3:/datalog/dbms/dbdepth.qlog
dbadmin &
check the database is okay as previously. If so:
dau_kill DBadmin
cp 3:/datalog/dbms/dbdepth.qlog 4:/datalog/dbms
dbadmin d=4:/datalog/dbms &
check the database is still okay; if so, remove dbdepth.qlog and the crc/index files from 3:/datalog/dbms

6.11 Backing up the Databases Time Database The time database consists of singular time files automatically created for each day. They are located in drive 3 eg 3:/datalog/dbms/time960723.qlog

Depending on the complexity of the job and how much data is actually stored in the database, each time file could contain up to 3000 blocks, so that the disk space used up is important. The normal procedure is to compress the time files into an archive file, then copy the archive file to a floppy disk. To save disk space, the time files can then be removed from the drive.

eg

cd 3:/datalog/dbms
zoo ah time1.zoo /datalog/dbms/time960723.qlog
zoo ah time1.zoo /datalog/dbms/time960724.qlog
zoo ah time1.zoo /datalog/dbms/time960725.qlog
etc

Remember that a floppy disk contains 2880 blocks, so when no more time files can be added to the archive without exceeding this number of blocks, copy the archived file to disk and remove the time files from the drive.

cp time1.zoo 1:/

Only the data for the particular days whose time files remain on disk will be accessed by the time database. Should you therefore need to access data for a day that has been archived and removed from disk, simply restore that particular time file to the system.
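To put the 2880 block figure in perspective: if, say, each compressed time file came out at around 700 blocks (a purely illustrative figure - actual sizes depend on the job), four such files (roughly 2800 blocks) would just fit in one archive on one disk, and the fifth day's file would have to start a new archive on a new disk.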

These procedures can be done even if the administrators are running. The only time file you will be unable to archive or remove is the current day's time file, which will be held open by dbadmin.

Depth Database

Unlike the individual time files, the depth database, dbdepth.qlog, cannot be accessed (ie copied) on a hot system when the administrators are running. The easiest form of backup is simply to copy dbdepth.qlog to a floppy disk and/or another node. However, this cannot be done while we are drilling, and having no backups during bit runs that may last several days would be undesirable.

dbget        allows you to do depth database backups even when the administrators are running, by creating a data file which is an image of the actual database.

The dbget procedure can be used for unit backups of the entire database and also for successive update backups to be sent to remote work stations. Here, we will just look at the procedure for unit backups. The user should refer to the next section in this manual for the procedure required to update, via modem, data on a remote work station.

Unit backups of the entire database

The user should be located in their user directory. dbget creates 2 files in your user directory:-

dbdepth.newlog    the data file, an image of the database
depth.crc         marks the records extracted by the current dbget operation

While dbget is running, the display will show the record numbers being read. At the end of the process, the total number of records read (together with the % of the database) will be displayed. You should ensure that this agrees with the actual number of records in the database. For a unit backup, you can simply copy dbdepth.newlog to a floppy disk.

On the next occasion you wish to make a backup, you should remove both dbdepth.newlog and depth.crc from your user directory before running dbget. After running dbget, both files will be recreated, representing the current, complete, database. The newly created dbdepth.newlog can then be copied to floppy disk, replacing the previous one.
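So a routine full backup, using only the commands described above, might look like this sketch (it assumes you are already in your user directory and that the floppy in drive 1 is QNX formatted; skip the rm steps the first time, when the image files do not yet exist):

rm dbdepth.newlog              remove the previous database image
rm depth.crc                   remove the previous record marker file
dbget                          extract a fresh image of the complete database
cp dbdepth.newlog 1:/          copy the new image to floppy, replacing the previous backup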

Should the situation arise that you lose dbdepth.qlog and need to restore the database:-

copy dbdepth.newlog from the floppy disk to your user directory
dbput        this will write the records back to dbdepth.qlog

Naturally, only the records extracted by the previous dbget will be restored, so the importance of doing this process on a regular basis is clear.

NOTE the file depth.crc marks the records extracted during the dbget process. This file is used when sending database updates to a remote. If depth.crc is left in place when running dbget, only records that have been added to the database, or changed since the previous dbget, will be extracted. This process will be considered in more detail in the next section.

6.12 Creating Restricted QLOG menus

At wellsite, if several users such as the geologist, engineer, toolpusher and drillfloor are networked to the system, it is important that their access to the system is limited to those features that they would likely require. It would obviously be unwise for other users to be able to edit the database or to have access to important realtime controls and setups. These restrictions can be applied by changing what actually appears in the QLOG menu. Default menus are already designed for the above users, but may have to be modified at wellsite. The full, default menus are stored in 3:/datalog/menus:-

realtime.dial
reports.dial
database.dial
engineer.dial
geology.dial
other.dial
setup.dial

These files list each item in the individual menus, and have the following format:menu name | program name eg in the realtime menu; Realtime Zeros | dau_zeros

An additional file in 3:/datalog/menus is button_names, which simply contains the title names of each of the QLOG menus.

Any user who logs in to the system will automatically be given these full, default menus, unless directed otherwise.

To restrict the menu for a particular user, the default dial files should be copied to that user's windows subdirectory, and then edited as required.

eg for user bob:    cp /datalog/menus/* /user/bob/windows

To remove a particular item from a menu, simply remove that line from the appropriate dial file. If a complete menu is to be removed, simply remove that menu name from the button_names file.

To change the database.dial so that the database can be viewed only:-

ed database.dial

change    Edit Databases | dedit;    to    View Databases | dedit -e;

NOTE that this should also be done for the lithology editor, otherwise other users will still be able to change any geological data:-

change    Lithology | lithed    to    Lithology | lithed -e

Windows menu options:Any program that can only be run from the windows interface is indicated as red in the QLOG menu and that program cannot be accessed from a normal console. To set this in the .dial menu file, the program name is preceded by a ~ or . symbol:ie Lithology | .lithed

Sub-menus are defined in the following way:Edit...@(Databases | dedit, Lithology | .lithed)^R;

Should the user wish to have restricted menus in another language, eg French, then the default dial files should be copied from 3:/datalog/menus/french to the user's windows sub-directory (ie /user/bob/windows) and edited in the same way as described.
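As an illustration, a cut down database.dial for a read-only user might contain just the following two entries, combining the edits shown above (a sketch only - the entry names and the exact default contents of the dial files vary, so always start by editing a copy of the defaults rather than typing a file from scratch):

View Databases | dedit -e;
Lithology | .lithed -e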

6.13 Changing the Default Parameter Names

So as to provide good, understandable realtime displays and to give good final log presentation, it is advantageous to change some of the default parameter names on the system. This may be to provide a more accurate or fitting name, or simply so that a particular name will fit better in the space allocated on logs. Examples may include:-

Pits1, Pits2 etc      rename to suction, settling, mixing pit etc
H2S1, H2S2 etc        rename to shaker H2S, flowline H2S etc
Comments1 etc         rename to Drilling Data, Survey Data etc

Changing these names is by way of editing certain text files which are held in 3:/datalog/text.

display.txt     contains the name of every measured or calculated database field, and will affect every part of the system such as displays, units, database, plots etc.

edits.txt       contains the names of extra database parameters that are input by the user, eg comments, lithology etc, and will affect the same parts of the system as display.txt except for plots.

plots.txt       contains the same names as edits.txt, and changes here will affect the names seen on any plots or logs.

channels.txt    contains the names of configurable analog and digital parameters, ie measured parameters. Changes here will affect the calibration and configuration menus together with the test mode.

Example of changing display.txt:- renaming pits 1 to 4 (suction1, suction2, settling, premix)

When you access the file using the editor, you will see the following information:-

0013  02  TripTank  TTV1  M3
0014  02  Pits      TV#   M3
0030  03  Temp_In   MTIA  DEGC

Column 1       Field Number
Column 2       Parameter Type
               eg  digital parameters   01
                   pit volumes          02
                   gasses               08
Column 3       Parameter name - note that if there are two words, they have to be joined with an under_score.
Column 4+5     Standard abbreviation and units for WITS format

In the example, note that the Pits parameter possesses fields 14 through 29, ie 16 fields - each field will be named sequentially, ie Pits1 to Pits16. To rename Pits1 to Pits4, those fields have to be separated in display.txt. The edited file should look like:-

0013  02  TripTank      TTV1  M3
0014  02  Suction1      TV1   M3
0015  02  Suction2      TV2   M3
0016  02  Settling_Pit  TV3   M3
0017  02  Premix_Pit    TV4   M3
0018  02  Pits          TV#   M3
0030  03  Temp_In       MTIA  DEGC

The remaining Pits parameter now occupies fields 18 - 29, ie 12 fields, ie Pits1 to Pits12. The total number of Tank Volumes is still 16. If this line was omitted, instead of having Pits1 to Pits12 in the unused channels, you would have Premix_Pit1 to Premix_Pit13.

Example of when and how to use channels.txt

The likely time that this will be needed is when the default channel configuration is not sufficient for the sensors required on a particular job, eg you have to change a channel that, by default, is configured as H2S, to an extra pressure sensor such as Kill Line Pressure.

Channels.txt contains two columns:-

Column 1    abbreviations that will appear in the test mode
Column 2    names that will appear in the configuration and calibration menus

Changing the names:-

Original channels.txt        After editing
H2S1    H2S_1                KLP    Kill_Press

This new name will now have to be changed in display.txt as well, remembering to change the parameter type (changing from gas to pressure), and if required, the WITS abbreviation and units.

Original display.txt

0050  10  Casing_Press  CHKP  KPA
0051  08  H2S           HSX#  PPM
0054  08  Combust       CBG#  PPM

After editing:

0050  10  Casing_Press  CHKP  KPA
0051  10  Kill_Press          KPA
0052  08  H2S           HSX#  PPM
0054  08  Combust       CBG#  PPM

(if you require a WITS abbreviation, seek advice from the programmers)

You should also check the system and user decimal places for the new type of parameter and change if necessary (in this case, system decimals would have to be changed from 4 to 1, and user decimal places changed from 0 to 1).

SECTION 7 - click here to go to main menu
ADVANCED QLOG/QNX

7.1  Corrupted Database                                 Using dbfixer

7.2  Importing and Exporting Data                       Export
                                                        LAS
                                                        Import

7.3  Network Configuration                              Connecting Machines
                                                        Configuring the Network Card
                                                        Network Commands

7.4  Advanced use of the System Initialization File

7.5  Communications                                     Terminal Types
                                                        External Modem Settings
                                                        Serial Port Settings
                                                        Comm
                                                        Qterm
                                                        Transferring Files, QCP
                                                        Relaxed Timing Option
                                                        Sending Multiple Files
                                                        Transferring Databases
                                                        Other Transmission Protocols

7.1 CORRUPTED DATABASE

In the database, the hole depth is stored in two locations:-

1) the reference depth used by dbadmin - this depth is displayed in the information, at the top of the page in dedit, for each individual record.
2) the depth displayed by the system - this depth is displayed in the column reference BL.

For the QLOG system to function correctly, these two depths MUST be the same. If, due to some corruption, they were different, programs that refer to the database, such as dbdepth and the plotting programs, would not work. Neither of these depths can be edited in normal circumstances - the reference depth cannot be accessed, and the hole depth in column BL is a locked parameter; it cannot be changed. In the event of a corruption as described, a program called dbfixer can be used to change the depths of individual records. Unlike the dbprune process, which doesn't affect the actual database, dbfixer works on the hot system - any changes made will directly change the database, therefore the user should use CAUTION. It is best performed after any necessary pruning, and it is obviously advisable to make a backup of the database before using dbfixer.

Make sure that dedit and dbdepth are not running
Make sure that DBadmin is running
Run dbfixer

You will see the following example:-

1    1000    1000 :

1 signifies the record number. The two 1000s signify the 2 depths stored - the 1st is the depth stored in column BL, the 2nd is the depth reference for dbadmin. Note that 1000 is not the actual depth stored, but the number stored. The actual depth here is 1.000m, since the system decimals, by default, are set to 3 for the hole depth.

The dbfixer program will take you, record by record, through the database. It automatically starts at the first record. Unfortunately, you are unable to specify another record if you wish to begin part way through the database. Therefore, if you have some corruption that only occurs towards the end of the database, you have to go through the entire database to get there!

If both numbers for the record are correct, simply press the space bar. You will then be taken automatically to the next record. If you need to make a correction, press the enter key. You will see the following:1 1000 1000 : >

You are then required to input the correct depth. When doing this, remember to put the depth in the same format as the depths displayed in dbfixer ie put the whole number after considering the system decimals.

For example, if you wanted to correct a particular depth to 100.5m, the number to input is 100500

Having made this correction, press enter and the database will be adjusted accordingly. You should then proceed, record by record. If you want to finish at any point, ie you don't need to go to the end of the database, press enter followed by control-C. The program will then be aborted. Any changes made to that point will have been saved.

7.2 IMPORTING AND EXPORTING DATA

Exporting Data

There are two programs that can be used to extract, or export, data from either the depth or time database. Both programs are operated in pretty much the same way, with the only difference being the output format of the data.

export    data will be in the ASCII format
las       data will be in Log ASCII Standard format

The extracted data from both methods can then be restored to QNX or MSDOS systems.

export

Basic command (depth database):-

export s=100 e=200 f=** f=** f=** o=3:/tmp/datafile

s = start depth
e = end depth
f = cell reference or field number
o = output file

f can be defined by either the database cell reference, or the field number which is given for each parameter in the first column of 3:/datalog/text/display.txt, eg

parameter    cell ref    field number
RPM          a           00
ROP          bx          75
Methane      bz          77

An index file can be used instead of having to put references for several parameters into one command. This can be created with the editor and is simply a vertical list of the required field numbers. The command would then take the form:-

export s=100 e=200 x=index.file o=output.file

(The full directory path should be given for each file)

Unless specified, the format of the output data file will be comma separation. If required, a report format can be given by using the +r option. In association with this, the number of lines per page can be specified (p) and also the width allowed for each column of data (this is given along with the reference or field number), eg

export s=100 e=200 x=index.file o=output.file +r p=54

The index file for the above parameters could be

00,10
75,10
77,10

where 10 spaces are allowed for each data column

This command would then extract the data between 100 and 200m, producing a report style output with each page being 54 lines long. There would be 4 columns (depth is automatically exported) each being 10 characters wide. By default, when data is exported from the database, it will be in the system metric units. If User Units are required, the +c option should be used. Data can also be averaged through the export program (a=average interval). A database originally in metres can also be exported and displayed in feet (+f option). For example, in the previous example, if we had desired the output data averaged over 5m intervals instead of every metre, the command would be:export s=100 e=200 a=5 x=index.file o=output.file +r p=54

A recent addition to the export (and las) program is that parameters such as Comments, Lithology Comments, Porosity, Fluorescence, Calcimetry etc can also now be exported. These parameters are those detailed within edits.txt rather than display.txt. Viewing these files, you will notice that the field numbers in edits.txt are the same as those in display.txt. However, this is just a function of the two text files. Should you wish to export these parameters, you again have the choice of specifying them by way of the column reference or the field number. However, the field number will not be as displayed in edits.txt, but the actual number of the column in the database spreadsheet (numbered left to right, with the first column, RPM, being 0). It would therefore be a lot easier to use the column references rather than the numbers. Unfortunately, due to the nature of the text columns in particular, the report format option cannot be used when exporting these parameters.

Exporting data from the Time Database

The form of the command is identical to that used with the depth database. The differences are the way that the start and end times are specified, and also that a +t option must be used.

export s=d=20-08-96 t=09:00:00 e=d=20-08-96 t=10:00:00 x=index o=output +t

Hence, both the date and time have to be given in the format shown above. If d is not specified, the current day is assumed. If t is not specified, midnight is assumed. If difficulties are experienced with this operation, for example with error messages such as "this time does not exist in the database", the error is likely to be in the incorrect setting of the timezone offsets.

LAS As already stated, the las command works in exactly the same way as the export command, using the same options. Obviously, the command las will be used instead of export, but otherwise, the only difference is the format of the outputted data. eg las s=100 e=200 a=5 x=index.file o=output.file

The default format from the export program is comma separation, eg

8.000,9.450,23.555
9.000,10.288,25.543
10.000,10.431,20.245

Note that individual columns are not separated as distinct columns, neither are they aligned if the numbers are of different lengths (ie 9.000 compared to 10.000). Note: this default format for the export program can be changed by using the +r option for a report format.

Output from the las program is different in that individual columns will be separated and aligned. As well as the data, the las output also contains information pertaining to the las software, the parameters that have been extracted and the units of measurement. Each individual column will also be headed by the parameter's abbreviation.

A typical output, after all the las information has been detailed, would look like:

DMEA.M        :1  BL  Hole Depth
ROPA.M/MIN    :2  BX  Rop
~ OTHER DATA
Produced by QLOG (c)1991-1996 Datalog Technology Inc.
QLOG/LAS $Revision: 2.3 $
~A   DMEA      ROPA
     25.000    29.345
     26.005    20.562
     27.011     9.781
     28.012    11.582

Note that the units will only be specified if they are the default system metric units. If the option +c is used to extract the data in the user units, and these units are not the default, the space in the header information will be left blank. The user can edit the file to specify the units by using the editor.

The default format for the output data will be MSDOS format. This can be changed by using the m= option, where m= qnx, dos or posix (for unix)

Obviously, if the DOS format is going to be used, the filename given for the output file must conform to the DOS system (maximum 8 characters followed by a 3 character suffix). If no suffix is specified, the file will automatically be given a .las suffix.

+/- w (wrap). This will turn the wrap mode on or off. If this is not specified either way in the command, the las program has default settings for wrap mode. If there are less than 255 characters in a line, one continuous line will result. If there are more than 255 characters, the las program will automatically use the wrap mode - then the maximum number of characters on one line will be 80. All excess characters will be wrapped. If no wrap command is given, the default is wrap off, ie allowing up to 255 characters in one line. However, if for example we had 200 characters but wanted the lines wrapped (ie 80 per line), we should use the +w option.

In practice, if we were exporting data to produce a printed report, we should use the export program rather than the las program. Las is primarily used to export to MSDOS systems using a Log ASCII Standard format.
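Tying this together with the dosfsys procedure described earlier, a typical LAS export to an MSDOS disk might look like this sketch (the depth range, index file and file names are hypothetical, and the output name is kept within the DOS 8.3 limit):

las s=100 e=200 x=3:/user/bob/index.file o=3:/tmp/well1.las
dfs start a=1
cp 3:/tmp/well1.las a:/well1.las
dfs stop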

Importing Data

This program is primarily used for importing data such as wireline or MWD data into our database. As such, this would be importing data into the user defined columns of the database (there are 8 UD columns, references JT through KA). Notice that as a default, the first 4 user defined columns are assigned particular wireline data:

JT    Sonic
JU    Resistivity
JV    Gamma Ray
JW    Bulk Density

Although these would be the columns used when importing this specific wireline data, the remaining 4 UD columns can be used, as can any other column in the database. If a datafile is presented to us in DOS format, simply run dosfsys (dfs start a=1), copy the file to the hard drive and proceed as you would with a QNX file.

Command format:-

import f=filename s=** u=*:* d=*:*

s    lines to skip - this is used to ignore lines at the top of the file. This may be in the case where there is text information (such as in the las format), or if data is only required from a certain portion of the file.

u    when data is being imported into the user defined columns. The first number is the number of the column in the datafile (the first column will always be depth and is regarded as column 0), the second number is the number of the UD column (1 to 8).

d    when data is being imported into any other column of the database. The first number is as above, the second number is the number of the column, or field reference, in the database (the number given in the first column of display.txt).

A maximum of 8 parameters can be imported at any one time. Note that the depth does not need to be specified. The depth of each record in the data file will be read, and this data will automatically be written to the correct depth record in the database.

example: the file wireline.dat contains the following data

Depth    Gamma    Sonic    Resistivity
500      48.5     63.6     0.875
501      49.2     64.5     0.921
502      47.8     62.8     0.856
etc

To import this data to the correct columns, the command would be:-

import f=wireline.dat s=1 u=1:3 u=2:1 u=3:2

If there was already data in these particular columns that we wanted to overwrite, the option +o should be used. Note that with the user defined tracks, no units are specified - they are unitless by definition. Therefore, with the data file, it does not matter what the units are, the actual value will be imported. However, if we are importing data into any other database column (ie using the d=*:* option), the units do matter. Before we import, we should ensure that the user units are the same as the units in the data file. We then import using the +c option (converts) to ensure that the correct data is imported. The +f option (feet) tells the import program that the depth data is in feet rather than metres.
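As a further sketch, importing offset ROP data into a normal database column rather than a UD column might look like this (the file name is hypothetical; field 75 is the ROP field number quoted earlier, column 1 is the ROP column in the data file, and +c and +o are the convert and overwrite options described above - check the user units before running it):

import f=offsetrop.dat s=1 d=1:75 +c +o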

7.3 NETWORK CONFIGURATION

Connecting Machines

Two QNX machines can be directly connected together, by co-axial cable, with the connection up to 2000 feet or 600m long. Connecting more than two machines requires an active hub. Each node must be connected directly back to the active hub, again with each connection up to 2000 feet. An active hub acts as an amplifier, boosting the signal, allowing longer connections. Obviously, the more nodes that are attached to the network and the longer each connection is, the slower the communication will become. An active hub can be connected to another active hub if more nodes are required than are available on one active hub (each hub contains 8 connections).

With one hub, the maximum size of the network would be 8:Node 1 -------- Active Hub -------- Nodes 2 to 8

With two hubs, the maximum size of the network would be 14:Node 1 -------- Active Hub -------- Nodes 2 to 7 -------- Active Hub ------- Nodes 8 to 14

Lights for each connection on the active hub will confirm that the system is active with communication to that particular node.

Configuring the Network Card

The purpose of a network is to allow users on other machines to access the information being processed by the main server or node 1. Node 1 would then be driving the entire network with other nodes feeding from it. Therefore, even though a user would be physically located at node 3 for example, by default any data that that user sees will be supplied by node 1. Unless the user specifically specifies a different node number, any work that that user does would also be on the hard drive of node 1. A typical network at wellsite may consist of the following:-

Node 1    The CPU or server         Hard drive
Node 2    Unit backup computer      Hard drive
Node 3    KNS (eg for geologist)    Typically no hard drive
Node 4    KNS (eg for engineer)     Typically no hard drive
Node 5    Drill floor monitor       Typically no hard drive

The network cards for each node on the network have to be configured when the computers are being booted. During the boot up sequence, press <Esc> when the screen is cleared and the following message is displayed: Node n, where n is the node number of the computer.

The following boot menu will be seen:-

N    boot from Network
D    boot from Disk

Press <Esc> to take you into the network interface configuration menu:

The following items will be displayed:-

1. Boot from network
This option selects the default boot source. Selecting No means that this particular node will access its own hard drive to boot from. This option would be selected if we have only one computer, and would also be selected for node 1 on a network. Selecting Yes will cause the computer to attempt to boot over the network. This option would be selected for each node, other than 1, on the network. These selections can be overridden by selecting N or D at the first boot menu.

2. Local Node ID
This is the local node number of a particular station on the network. In order for a network to function correctly, each station must have a unique node ID. Node numbers are normally added sequentially, being between 1 and the number of nodes licenced (max 255). If 0 is entered the network card is ignored.

3. Primary Boot Node ID
This is the node number of the computer that the local node will attempt to boot from. This is normally node 1, the main server of the network. For example, if we were setting the configuration for node 3, the primary boot node would be 1; on boot up, node 3 would then access the hard drive on node 1 in order to read the boot file that will allow it to boot.

4. Alternate Boot Node ID
The node number to boot from should booting fail from the primary boot node. This would typically be node 2.

5. Retries from Boot Node
The number of boot attempts allowed before a boot failure occurs. Typically set at 1.

6. Boot File Name
This is the actual name of the boot file on the operating system that the network will boot from. These files are located in 3:/netboot. The same boot file name should be entered in the configuration menu for each node in the network. This can be confirmed from the net command. If an incorrect file name is entered, ie it does not exist in 3:/netboot, the system will not be able to boot up. The file used (as of August 96) is called os.2.21atpb.

7. Hardware Interrupt Level
This is a setting concerning the communication of the network card with external terminals. This is determined by the EPROM setting on the network card and is normally set at 5.

8. Exit Menu & Boot
All settings are saved in the non-volatile memory when this option is selected. Once the system is booting, the green light on the back of the network card should flash steadily at about once per second, showing that the network is in the continual reconfiguration state (see the netstats command). In normal operation the green light should be on, indicating network access; the red light indicates CPU access and normally flashes sporadically.

Example: Configuration files for a network of 3, where
Node 1 is the CPU
Node 2 is a backup computer with identical hard drive
Node 3 is a KNS station with no hard drive

                               Node 1         Node 2         Node 3
1  Boot from network           N              Y              Y
2  Local Node ID               1              2              3
3  Primary Boot Node ID        1              1              1
4  Alternate Boot Node ID      2              2              2
5  Retries from Boot Node      1              1              1
6  Boot File Name              os.2.21atpb    os.2.21atpb    os.2.21atpb
7  Hardware Interrupt Level    5              5              5
8  Exit Menu & Boot

Network Commands

The network size must be capable of running a new node number. To find out the number of nodes that can be added to a network, type netsize. The netsize command is also used to increase the size of the network; insert a boot or network disk when prompted and follow the instructions. Netsize must be run on the boot server node. The alive command will show the total number of nodes allowed, and whether they are running or not.

Once the network card has been configured and the user is sure that there is network expansion size available, the "sys.init.n" file should be edited or created, where n is the node number. The netboot task has to be running for nodes to be able to boot over the network. This command should be in the boot server "sys.init" file (ie sys.init.1). The new node should now boot over the network.

The net command shows all the nodes currently alive, together with information on each node such as Operating System version, memory used and available, CPU speed, tasks running and available etc. An asterisk indicates your current node, ie the node from where the net command was issued.

The netstats command displays the following network statistics:

Min Packet Queue         The minimum number of empty packets in the outgoing message queue. If this number ever reaches 0, messages (ie data) will be lost.

Packet Queue Overruns    If the Packet Queue ever reaches 0 this number will increment. This number should always be at zero; if it ever becomes non-zero, phone for technical support.

Network Tx/Rx Packets    The number of packets sent and received by the node.

Reconfigurations         The number of times the network has been reconfigured since the node was last booted. This value will increase every time a new node enters or leaves the network. If this value increases when no nodes are entering or leaving the network, it could indicate faulty network hardware, cables or noise on the cables.

Network Tx Errors        The number of packets corrupted during transmission. Errors are acceptable since these are high speed circuits that will correct errors automatically. QNX will attempt several times to resend these packets.

Network Tx Timeouts      The number of times packets were unable to be sent to another node which has a network card but whose software is not responding.

Network Tx Aborts        The number of times QNX has given up trying to send a packet from the node.

Network Rx Errors        How many bad packets are received by the node. Whereas it is normal for some packets to become corrupted during transmission, under no circumstances should corrupted packets be accepted by another node. This is a definite no no and any Rx errors should be reported immediately to Technical Support.

Network Rx Duplicates    How many duplicate packets were received and rejected. This occurs when one node sends a packet to another and awaits a message that the packet has been received. If it doesn't receive that confirmation, it will resend the packet. If in the meantime the receiving node has accepted the packet, it will reject the duplicate packet. Excessive values could indicate faulty wiring/hardware.

If problems are being experienced with the network, start netstats +monitor; then every time netstats is run, a report will be generated with the node number, error type and the time of the error. The command nettest <node number> will check the data transmission between your local node and the given node. The number will increment rapidly, showing the communication back and forth between the 2 nodes. Generally no errors are seen, but any errors occurring will be detailed by a message. Errors may be seen, for example, due to interference from electrically noisy cables, eg if the network cable was run close to the main power cable at rig site.

7.4 ADVANCED USE OF THE SYSTEM INITIALIZATION FILES

Each node on the network has an individual system initialization file that is executed every time the node is rebooted. Any changes that are made to this file will only be read, and therefore executed, during a reboot. This file tells the operating system what hardware is mounted, what operating system programs to run, what utility programs to run etc. Typically, the sys.init files are already created on the system, and any changes required to be made by the user at wellsite are small changes. Nevertheless, the user should be familiar with what the file is doing as a whole, in case there are any operating problems with the system. The fault may be an easy fix if the user understands the sys.init file, but a major problem if they don't. The sys.init.n (where n = node number) files are located in 3:/config. When the system boots, the file for each node on the network is read and each node is booted according to the commands in the file. Sys.init.1 is obviously the most important since this is operating the network. Any command line that begins with the comment character is commented out and therefore ignored.

A typical sys.init.1 will contain the following commands. Note that many of the tasks are started on ontty 99, a terminal with no output.

back                suppresses the screen printing of the task ID number

dots on             turns on the ability to cd .., ie change up a directory level. In QNX, unlike DOS, the ^ symbol is otherwise used for this purpose.

verbose             with this enabled, all the commands that are executed during the boot up sequence (ie commands in sys.init) will be displayed

mount float         mounts the floating point software library. This allows non integer calculations for C86 compiled programs to be done rapidly by a separate processor

mount lib /drivers/glib.tvga
                    mounts the graphics library for the tvga graphics adaptor. This allows graphic displays on the normal consoles (using bar). If windows is run, a different driver (as specified below) is used

mount lib /config/sac.slib
                    mounts the sac shared library used by the sac processor time program

mount disk 4 d=3 pa=qny t=*** n=** h=* p=*
                    this command mounts the hard drive qnx 4 partition, with the options referring to such things as heads and tracks

mount cache s=48k d=3
                    mounts a disk drive cache for drive 3. A cache is a portion of the RAM allocated for specific operations, saving processing time.

mount xcache s=48k  mounts an extent cache to link large, fragmented files

mount bmcache d=3   mounts a bitmap cache on drive 3

mount console $con# mounts consoles, number #

search 3            sets the drive search order, ie drive 3. The search order would be different for other nodes on the network, typically search [1] where node 1 is the network server. This specifies that the search order detailed for node 1 is the one that should be used.

path !!/cmds/!/datalog/cmds/!/Quser/cmds/!
                    sets the system path so that all of the above directories are searched for an executable program when commanded. The 2 !!s mean that /cmds will be searched first, being the most important command directory.

cd 3:/              on entering just cd, you will be taken to the root

timer &             starts the QNX timing facility required by programs. This allows programs to sleep - they instruct the timer program to give them a wake up call after a certain time period has elapsed. When you tsk, any program waiting for the timer will be shown as blk rather than run in the state column, where the timer Tid will be displayed.

rtc at              sets the system time from the hardware clock

tzset.sh            contains the timezone offset settings applicable to the two system compilers

ontty 99 clearhouse start &
                    the network clearing house program; it administrates program names across the network and will prevent an administrator which is already running on the system from being started elsewhere.

ontty 99 dyna &     starts the dynamic link library (common routines between programs) required by programs compiled with the CII compiler

ontty 99 cii_emul_8087 &
                    mounts the floating point emulator for CII compiled programs (similar to the mount float command required by the C86 compiler)

ontty 99 envmgr /config/sys.env.1 &
                    runs the environment manager specified

passadmin &         runs the password administrator - this should only be run on node 1

cti I=15 p=30c &    initializes the cti multiple serial port card

/config/init.cti &  runs the cti port initialization file that contains communication set ups, such as baud rate, for each of the cti ports

/windows/drivers/mdrv Microsoft serial mouse (200dpi) dev=$mdm
/windows/drivers/qw.vga &
                    initializes the mouse and graphics drivers required for windows; these were detailed in Section 7 of this manual

ontty 99 netboot &  the netboot command has to be run in sys.init.1 in order for other nodes on the network to boot from this node

ontty 99 nettime &  this updates the current time across the network

ontty 99 cron &     this allows the scheduled running of programs detailed in the /config/crontab file

ontty 99 poll l=/logs/poller &
                    runs poll, which checks that all nodes on the network are functioning

ontty 99 locker &   the network file administrator, which stops any corruption of files that different users or programs are reading or writing to at the same time

ontty 99 dumper d=/dumps &
                    a program that crashes due to a memory exception error will be stored in a .dmp file in /dumps for later scrutiny by programmers

passon              turns on the password facility

nacc CPU +w         allows network access to non superusers of this node's CPU

nacc 3 +r +w        allows network access to non superusers in order to read and write to this node's drive 3

7.5 COMMUNICATIONS

Terminal Types

From a QNX perspective, the user is operating through a terminal whenever they are at a screen or console. The terminal type used to communicate with the QNX system will vary depending on the physical connection to the QNX system and the type of hardware. For example, if the user is on a full screen node then the terminal type would be qnx; if the user opens a window shell then a qnxw terminal type is used. If the terminal is not part of the network, the terminal type will vary depending on the type of terminal connected (or the terminal type being emulated). The command tset will show the current setting, ie the current terminal type. tset <terminal type> would set the terminal type for the current tty.

The following terminal types are commonly used in QLOG:

qnx     Used on consoles, this is the default setting.
qnxs    Used when the remote terminal is a qnx terminal or a machine emulating a qnx terminal (eg qt for MSDOS).
qnxw    Used by QNX windows (this is set automatically by windows).

Another type of terminal would be selected if the remote terminal accessing the QNX system is not a QNX type, for example a vt100. A list of all the terminals supported by the QNX system can be seen by entering tcap list, the tcap command manages the terminal capability database.

Remote Work Stations

A work station can not only view data from another Qlog system but can also transfer data. This allows remote log plotting and access to wellsite data as if the user were at wellsite, an obvious advantage to clients. Remote communication is achievable by way of a modem (Datalog currently uses US Robotics high speed modems) which is connected to the computer with a serial cable. The internal settings of the modem will be looked at in more detail in the Advanced System Management section of this manual. At this stage, the user should be familiar with the external settings of the modem and with the correct settings of the serial ports required for the modem to operate.

External Modem Settings

The modem has a set of 10 external dip switches. These should always be set in the following manner:-

ON      3, 5, 8
OFF     1, 2, 4, 6, 7, 9, 10

The back of the modem contains a 25 pin serial port, a power socket for its own 16 volt transformer and 2 jacks, one for the phone line and the second should a telephone be required. The front of the modem contains a series of LED displays:-

HS      High speed
AA      Auto Answer
CD      Carrier detect
OH      Off hook
RD      Received Data
SD      Send Data
TR      Terminal Ready
MR      Modem ready
RS      Request to send
CS      Clear to send
SYN     Synchronous Mode
ARQ     Error control

When the modem is attached to the computer and switched on, the following lights should be displayed: TR, MR, RS, CS. When a connection is made to a remote work station, the following lights should also be displayed: HS, AA, CD, OH - ie a total of 8 lights. All of the lights should remain on for the duration of the communication; should these four (HS, AA, CD, OH) go out, the connection has been lost. When data is being transmitted, RD or SD should also be displayed.

Serial Port Settings

The modem can use any of the serial ports ($mdm, $term, $cti) on any part of the network, but whichever port is used, the communication parameters have to be set correctly in order for the modem to function. The default settings are stored in 3:/config/init.cti for $cti1. These settings are:-

baud rate               38400
parameters required     hflow (hardware), iflow (input), oflow (output), split, esc=0

To see the settings of the serial port:-

stty < $port_name

For example, if the modem was attached to $mdm (remember the node number if the modem is attached to a node other than [1] on a network), you would issue the command:-

stty < $mdm

If the resulting information was given:-

baud 9600 +hflow +echo +edit +etabs +igate +mapcr

You would have to issue the following command to set the port correctly:-

stty b=38400 -echo -edit -etabs -igate -mapcr +oflow +iflow +split esc=0 > $mdm

In other words, settings that are not required have to be turned off with a - and settings that are required, but are not present, have to be turned on with a +. After issuing this command, you should again check the port set up (stty < $mdm) to ensure that all the changes have occurred. You may commonly find, for long command strings, that not all of the changes have taken place; you should then issue a second command to make the remaining changes. Once the port has been set correctly, the settings should be put in the appropriate sys.init file so that the port will be reset correctly should the system have to be rebooted, ie the command in the sys.init file should be:-

stty b=38400 +hflow +iflow +oflow +split esc=0 > $mdm

The baud rate is the speed of communication; it is approximately equal to the number of characters per second (cps) multiplied by 10, since each character is transmitted as roughly 10 bits. It should be set at 38400 because this is the maximum communication speed of the $cti ports. The base serial ports (ie $mdm or $term) can actually communicate at 57600 baud, but we still assign 38400 to avoid confusion. When two modems connect, they establish a communication baud rate which may well be less than 38400. It is a common misconception that the serial port baud rate should then be changed to match the modem communication baud; all this does, in effect, is limit the communication speed, thereby increasing call time and cost. If two modems established a baud of 19200 but the port was set to a baud of 9600, information could only be delivered by the port at 9600. If the port baud had been set to 38400, on the other hand, information would be delivered by the port at 38400 and transferred between the modems at 19200. The modem has a built in buffer to hold the excess data created by this speed discrepancy.
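As a rough worked example of the arithmetic: a port set to 38400 baud can deliver approximately 3840 characters per second to the modem, whereas a port throttled back to 9600 baud can only deliver approximately 960 cps. If the two modems have negotiated a 19200 baud link (roughly 1920 cps), the 38400 port keeps the modem's buffer full and the link runs at its full 19200, while the 9600 port becomes the bottleneck and limits the whole transfer to 9600.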

Comm

What we have done so far is set up the modem and serial port to enable us to make an outgoing call on the modem. To allow 2 way communication, ie enable a remote station to dial in to our local system, a task called comm has to be running. Again, there is a correct setting for the baud rate and parameters to be run with comm. The default command is stored in 3:/config/init.cti...

ontty $mdm comm b=38400 i=ATZ| a=ATA| +h -a +l l=/logs p=1 t=15 m=Datalog

i=         initialization string
a=         answer command
l=/logs    directory that keeps a record of all connections made
p=         number of rings to pick up on
t=         time out ie if no connection is made after 15 seconds, the modem will stop trying
m=         greeting message

Here, again, the baud rate is set to the maximum capability of the cti port. This is to allow a full speed range for the remote station. This command string should be set within the appropriate sys.init file so that comm will always be restarted should the system have to be rebooted. In practice, you are never likely to want to shut comm down, but if you should wish to do so, the following command should be issued:-

slay comm u=8000

Qterm

qterm is the communications software package used by the QLOG system. Before a particular user can successfully use qterm he or she must possess a telephone directory. This should be copied from the qterm directory to the user's qterm directory (the user may have to create this directory):-

mkdir 3:/user/*****/qterm
cp /qterm/phone.dbase /user/*****/qterm

To add a particular number to this directory, the user has to enter qterm. The following command should be issued:-

qterm m=$****

The correct port name should be given (remember the node number if on a network). If no port is specified, $mdm will be assumed.

At this point, the qterm software will be loaded and a message will display Qterm V1.16 (the V1.16 is simply the version number). You will see a flashing cursor showing that qterm is waiting for a command.

ctrl-a        brings up the qterm menu
option d      takes you to the telephone directory

Alternatively, the command qterm m=$**** +d would take you directly to the phone menu.

To add a new number, the cursor arrows should be used to take you to an available position in the menu. Type e to edit. You will then be taken into a submenu for that particular number, where the following information needs to be entered:-

System Name        The name of the company or person
Phone Number       The number of the remote modem
Modem Port         The serial port that the modem is connected to
Input Flow         On
Output Flow        On
Hardware Flow      On
Baud Rate          38400

Type <grey plus> to accept. You will then be taken back to the main phone directory, where the new number and the correct setups will be stored. From the same menu, you can dial the remote system:-

option C      to make the call once
option A      attack dial - the modem will keep on dialling until a connection is achieved

Once the above information is stored in phone.dbase, the number can be called directly on entering qterm by giving the system name along with the qterm command, eg:  qterm Acme_Oil m=$cti2

Once a connection has been made with the remote station, you will be given information such as the time of connection and the baud rate at which the two modems have connected. On screen, you will see a normal QNX login prompt. You must remember that this is now on the remote computer, and any work that you do after logging in will be done on that remote machine.

If you have made this connection from wellsite, you can still see what is happening at wellsite by changing consoles (ctrl_alt_enter). Only one of the consoles on your local machine will be taken up by qterm - this is where you are working on the remote station. The remaining consoles are still occupied by your local system.

Once you have finished your work on the remote station, you must log off from the system. Having done this, qterm is still running and the connection between the two modems is still open - it is still costing money! You must therefore remember to hang up after your call and exit from the communications program. This is especially true when two QLOG work stations are connected together, as it is very easy to forget that one display is actually from a remote system. After logging off:-

ctrl_a        to bring up the qterm menu
option h      hang up and quit qterm

You will then see a normal command prompt - you are back on your local system.

Transferring Files

In order for files to be transferred between computers connected by modems, a program called qcp (Quantum Communications Protocol) must be run.

To Send A File from Local to Remote (Upload)

To send a file from your local work station over the communications link to another work station, the following procedure should be followed. In this example we will send a file called junk.txt, which is located in our /tmp directory. We wish the file to go to the /tmp directory on the remote machine. The user dials and logs in to the remote system as outlined above.

1) From the command prompt on the remote system type:  qcp re
   This tells the remote system to expect to receive a file using the qcp file transfer protocol.
2) Type 'Ctrl a' to access the Qterm menu on your local work station, then type 's' for send file. This brings up a menu of file transfer protocols.
3) Select 1 (for QCP). A command bar will appear at the top of the screen with the message "file to send".
4) Type the exact file specification:  /tmp/junk.txt

After pressing enter, the file transfer will take place automatically. The user will see the percentage of the file sent and the actual speed of the transmission (cps). Once the transfer is complete, the user will be returned to a command prompt on the remote machine.

Here is a summary of the procedure, with each step marked as issued on the local or the remote machine:

After connection is achieved:
   login                (remote)
   password             (remote)
   qcp re               (remote)
   ctrl_a               (local)
   s                    (local)
   1                    (local)
   /tmp/junk.txt        (local)
After transfer completed:
   bye                  (remote)
   ctrl_a               (local)
   q                    (local)

Sending files to different directories

If we had wanted to send the file to a different directory on the remote machine, for example /user/fred, then step 4) in the above procedure would be:-

/tmp/junk.txt,/user/fred/junk.txt

A second method of doing this is to use the force (f=) option when initiating QCP on the remote station. Instead of simply qcp re, step 1) in the procedure would be:-

qcp re f=/user/fred/junk.txt

The remaining procedure would be the same. Both of these procedures can also be used if you should wish to change the file name. For example, to change the name of the file to junk.rpt, the 2 commands shown above would become:-

/tmp/junk.txt,/user/fred/junk.rpt
qcp re f=/user/fred/junk.rpt

In both of these cases, the filename will be changed on transmission and the file will be taken to the new directory.

To Receive a File From A Remote System (Download)

We wish to download a file called blurb.txt to our local machine. The file is located in the /tmp directory on the remote work station. After dialling, connecting and logging in to the remote station, the following command is issued on the remote station:-

qcp se /tmp/blurb.txt

The receiving end (your work station) will automatically start receiving data, and the file will be downloaded to the /tmp directory on our local machine.
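As a sketch of the corresponding download session, using the same example file:-

After connection is achieved:
   login / password                   (remote)
   qcp se /tmp/blurb.txt              (remote)
   the receive starts automatically and the file arrives in /tmp   (local)
After transfer completed:
   bye                                (remote)
   ctrl_a                             (local)
   q                                  (local)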

Relaxed Timing Option

If problems are experienced with the connection being broken before file transmission has been completed, QCP has a relaxed option that can be implemented. This may result in slightly slower transmission speeds, but it is more resilient to noise or interference on the line and will normally allow transmissions to be completed where failure had been experienced. The same procedure is followed as already detailed, with the following options:-

After connection is achieved:
   login                  (remote)
   password               (remote)
   qcp re +r              (remote)
   ctrl_a                 (local)
   s                      (local)
   1                      (local)
   /tmp/junk.txt +r       (local)
After transfer completed:
   bye                    (remote)
   ctrl_a                 (local)
   q                      (local)

In other words, the relaxed option is given when QCP is initiated on the remote station and when the file is named on the local station.

Notes on File Transmission

When uploading data (sending data from your computer), make sure you start the transfer procedure on the remote machine first. QCP will only start automatically when a file is sent from a remote computer to the local one; other protocols may or may not start automatically, so check first. Once the receiving computer's transfer procedure has been initiated, QCP will allow 60 seconds before a timeout occurs, and approximately 240 seconds if the relaxed timing option is used. This means you have a minimum of 60 seconds to start sending data before the remote machine gives up.

Sending Multiple Files

Multiple files can be sent using wild cards. For example, to receive all files ending in .plot located in /datalog/plots from a remote machine to our local work station:-

qcp se /datalog/plots/*.plot

The receive will start automatically on our machine. This option cannot be used for sending large subdirectories. For example, qcp se /datalog/cmds/* will not work - no files would be sent because QCP will not find any files to send. Instead, you should use an index file using the 'x' option, as described below.

The same procedure can be used to send multiple files from our local machine to the remote:-

   initiate qcp on the remote as previously detailed (qcp re)
   on your local machine, ctrl_a  s  1 as previously described
   use the wild card when naming the files to send, eg /user/datalog/*.rpt

Using an Index File

Multiple files in different subdirectories, or files that cannot be sent using wild cards, can be sent by specifying an index file that contains the list of files to send. For example, if you wish to send all of the files in /datalog/plots except for one called "temp.plot", you would first create the index file (on the local work station) containing a list of these files. This file can be created simply by using the editor, or alternatively, you could use the following procedure:-

files /datalog/plots -v >/tmp/plot.index

This creates a list of the files and any sub-directory paths and records them in /tmp/plot.index. Now we can edit plot.index and remove the unwanted file "temp.plot" by deleting the whole line. Then follow the same procedure as already outlined, but when naming the file to send on your local machine, use the following command:-

x=/tmp/plot.index

The x= option specifies that there is a list of files to send.

Updating Files

If you wish to send all of the data files in /datalog/plots/data that have been modified since the last update, or are not present on the other system, you would use the +n option, which only sends the newest files. The local system will compare file dates with the remote system and will send any files whose modification date is newer on the local system. As before, start the receive on the remote machine:-

qcp re

Initiate the send on your local machine: 'Ctrl a' to enter the command, 's' to send a file, '1' to select qcp, and then the files to send:-

/datalog/plots/data/* +n

Sending The Database

The two online databases (time and depth) cannot be copied or moved in the normal manner because they are always open for writing/reading by the database administrator. The other concern with sending the database is that the files can be extremely large, and it would take progressively longer if the whole database were sent each day. Therefore, a procedure has been developed for sending only the data that has changed since the last time the database was sent (including edited data). This procedure accesses the database through the database administrator, so the transfer can be achieved whilst the database is on line and even being updated as it is being sent.

There are three steps to send an update of the depth database:

1. Create a temporary image of changes in the depth database using dbget.
2. Transfer this temporary image to the remote machine.
3. Issue a command to place this image into the remote database using dbput.

Remember that for the time database, the commands dbget_t and dbput_t would be used and the files created are dbtime.newlog and time.crc. All procedures are otherwise the same.

The temporary image of the database is called dbdepth.newlog and is created in the user's home directory when dbget is run. The time that dbget is run is the cut off point, therefore any data written to the database after dbget is run will not be sent. You should always be logged in as the same user before running dbget.

A file called depth.crc is created in the user's home directory when dbget is run on the local machine. This file is used to determine which records were extracted during the last dbget; only new or altered records will be extracted during the current dbget. If this file is missing or deleted, the whole database would be extracted to dbdepth.newlog.

The dbdepth.newlog file is then transferred to the remote station. The file would normally be compressed, using the zoo command, prior to transfer so that the file size is kept to a minimum. The user's home directory path should be supplied if the user was not in his home directory prior to starting the Qterm communications program. The file will then be sent to the same user directory on the remote station; the user should be logged in on the remote as the same user. dbget and dbput require the dbdepth.newlog file to be in the user's home directory, so if it is transferred to a different user directory, the process will not work.

When the transfer is complete, dbdepth.newlog will have to be restored (from the zoo archive) on the remote machine, and then placed into the remote work station's depth database:-

dbput

The dbput program may take some time to place all the data, thus it is quite valid to run dbput in the background:-

dbput &

This has the disadvantage that the data transfer cannot be seen, but it allows the user to do any other work on the remote work station. It is not a good idea to log off the remote machine until dbput is finished, as any error that occurs would never be seen. When dbput has finished it will report the number of records written. The fopen command will show how far dbput is through the dbdepth.newlog file; 67/450 means that 67 records out of 450 have been transferred from dbdepth.newlog to the database. It is worth checking that all the data is present after dbput has finished by viewing the database with dedit.

Summary of the whole process, with each step marked as issued on the local or the remote machine. Assume you are logged in as datalog on your local machine:-

   dbget                                                                       (local)
   cd /user/datalog - with ls you should see dbdepth.newlog and depth.crc      (local)
   zoo ah depth.zoo dbdepth.newlog                                             (local)
   enter qterm and dial the remote station                                     (local)
   login datalog, give password - you will be located in /user/datalog on the remote   (remote)
   qcp re                                                                      (remote)
   ctrl_a  s  1  /user/datalog/depth.zoo                                       (local)
   when the file transfer is complete: ls - you should see depth.zoo           (remote)
   zoo x depth.zoo      to restore dbdepth.newlog                              (remote)
   dbput                when all records have been sent to the database, check ok   (remote)
   logoff                                                                      (remote)
   ctrl_a  q                                                                   (local)

Possible problems with database transfers

Every time you run dbget you must successfully transfer the dbdepth.newlog file and successfully run dbput on the remote machine. Every time you run dbget a new dbdepth.newlog file is created in your home directory, so if the last one has not been sent and dbput run on it, a gap will appear in the remote database.

If the lines are noisy and you keep getting cut off part way through the transfer, try using the relaxed timing option of qcp, or try the sealink protocol, especially if the transfer is via satellite (most overseas communications are via satellite). If noisy lines are a frequent problem, perform the remote send procedure often so that the dbdepth.newlog file is kept small and the chance of being cut off mid-way through a transfer is reduced.

When you run dbput, if you receive the error 'Could not open dbdepth.newlog', it means that you are logged in as the wrong user or the dbdepth.newlog file does not exist in your home directory. Both the remote and the local system have to have the database administrator running.

Other File Transmission Protocols

Sealink

Sealink is better suited to satellite transmission, where there can be long propagation delays between the sender and receiver. QCP is a QNX protocol, thus non QNX machines will not support QCP. QNX has other protocols, such as xmodem, ymodem, kermit and sealink, which are supported by non QNX machines. For example, to send a file using sealink from your QNX system to another system (this procedure may vary depending on the system you are connected to, so if in doubt ask the computer operator for the remote system): if the remote machine was a QNX based machine, the user would type:-

sealink re       to initiate the receive on the remote

To initiate the send on your local machine:-

Ctrl a           to bring up the qterm menu
s                send option
8                to select the sealink protocol
then give the file to send

QT

QT is the MSDOS communications package which will emulate a QNX terminal and perform QCP (Quantum Communications Protocol) file transfers. Normally QT is used to connect a PC with a modem to a QNX system, but the PC and QNX system could be directly connected. When the terminal connects with the QLOG system, it will normally display CONNECT and the baud rate that it connects at; for example, "CONNECT 2400" means a straightforward 2400 baud connection has been achieved.

The method of error correction and compression negotiated between the two modems will also be shown, for example "CONNECT 14400/ARQ/V32/LAPM/V42BIS". The actual connection speed can be lower than the configured rate due to line noise. Error correction and data compression standards such as LAPM (V42) and V42BIS should provide trouble free connections. If the modem did not connect, a message will be displayed; depending on the message, either retry or wait (if it was busy, for example). The remote login is the same as a local login - the user will end up at the same command line.

Example Login by Modem:

CONNECT 2400
(5 seconds delay...)

QNX Version 3.15H  Node 1  $tty5  Local Time: 11-Apr-92 2:10:18 pm
Copyright (c) Quantum Software Systems Ltd. 1983,1989

Login: (enter your login name)
Password: (enter your assigned password)

Last on 11-Apr-92 2:41:43 pm on [1]$tty30. 0 Login Failures

Communications Speed:

Modems may communicate comparatively slowly depending on the line or communications noise. Please make allowances for this if the connection is slower than expected; for example, at 1200 baud it will take approximately 5 seconds to draw a real time screen. All keystrokes are buffered, so give the system enough time to react to a keystroke before the next key is hit.

Logging Off

If possible, you should always log off the system properly; do not just hang up, as it is possible to cause problems at the remote site depending on what you were doing when you hung up. To exit properly from the QLOG system, select "exit". Always exit from the remote system first by entering bye or logoff before hanging up your local system. To hang up your line, type 'Ctrl a' and then 'h'. If you do not hang up you may rack up a large phone bill, and you will stop anybody else from dialling into your system. To exit from QT back to MSDOS, type 'Ctrl a' and then 'q' to quit.

Noise It is possible to have more line noise than the modems can self correct. This is rare, but could manifest itself by hieroglyphic characters being printed randomly around the screen, or simply by totally losing the connection. There is not much that can be done about noise apart from logging off and re-dialing to see if the line noise clears up. If a file transfer is successfully completed, the file will not have any corrupted data due to line noise as all data integrity is checked by the transfer protocol.

SECTION 8 - click here to go to main menu QLOG TROUBLESHOOTING

8a.   General Software Failure
8b.   General User Errors of Software
8c.   Hardware Faults
8d.   Depth Related Problems - Crown Depth
8e.   M200 Chromatograph
8f.   Plotters/Printers

Rev C - October 1996

8a. General Software Failure


To try and prevent failures from occurring, there are certain operations/precautions that should be performed on a regular basis.

Keep the hard drive as clear as possible, ie keep as much free disk space as possible. Therefore, on a regular basis, remove any unwanted files, in particular large ones. This is likely to include:-

   database copies, or files created by dbget or dbfixer - if these are no longer needed, remove them; just keep current versions if necessary
   time files - archive them and copy to floppy disk; just keep the last few days on the hard drive
   logs printed to file - backup and/or remove
   out of date report files - backup and/or remove
   /datalog/..qlog..

The typical directories that you are going to clean in this way will therefore be the user directories, the temporary directory and 3:/datalog/dbms. In a similar way, if you have a second node with an automatic daily backup being performed, it is very important to keep node 2's hard drive clean. It is likely to become full even faster than node 1 if you are doing daily operations such as dbgets or printing logs to file. If node 2 becomes short of disk space, the server, node 1, will obviously be affected.

Particularly on longer wells, keep a regular check on the disk space available by using the query command. For larger files, keep a regular check on the number of extents; the more fragmented a file becomes, the slower the system will become. This is done by using the files +v command.

Perform regular system checks with chkfsys as a matter of course. This should be done at the start of jobs, and during the course of a well where possible, especially on projects lasting several months. This procedure will check for, and correct, any corruption on the system. Should a file be corrupt and the system is unable to fix it, then you will have to zap that file and rerun chkfsys to recover the lost blocks and rebuild the bitmap.

When performing system shutdowns, ensure that you close down all programs and shut down all administrators. Not doing this will be inviting files/programs to be left open or busy, or even worse, to become corrupted.
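As an illustration of the routine checks described above (the directory given is just an example), a regular housekeeping pass might be:-

   query                         check the free disk space available
   files 3:/datalog/dbms +v      check the size and number of extents of the larger files
   chkfsys                       check the file system for corruption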

However, things may not always go according to plan ................

Database administrator

There may be times when you are unable to start the database administrator. This is more likely at the beginning of a job. The cause will most likely be that the dbdepth.crc and dbdepth.index files are incompatible with the depth database, or are present from a previous database. These files should be removed, or if you are unable to remove them, they should be zapped. You will then be able to start the database administrator and, on doing so, the index and crc files will be created automatically for the current database.

You should check that dbdepth.lmap, dbdepth.bmap and dbtime.bmap are present in both 4:/datalog/dbms and 3:/datalog/dbms, otherwise the system will not work. If one or more is missing from either directory, then it can be copied from the other directory. The moral of this story is that users should not be removing any files that they are unsure about. The bmap and lmap files will always be on the system by default, so this problem should never occur.

Programs hanging up

If a particular program simply hangs up and locks up either the current console or the entire node, you should always try to slay the program before resorting to a reboot. This is especially valid if the hangup is on node 1. If you do not know the actual name of the program, then perform a tsk to see which programs are currently running - it is normally easy to determine which program has hung up. If you can do this by switching console, then it is a simple matter of issuing the commands:-

tsk
slay program_name

Should this not succeed, you can try the same process but using the program's Tid number; the command would then be slay i=3b03 for example. Should the whole node be locked up, then you should try slaying the offending program from another node. The form of the commands would then be (assume node 2 is frozen):-

tsk n=2
slay program_name n=2

You should also check the State that tasks are running in. This information is given when you run the tsk command. It is normal for the first 5 programs listed (task, fsys, dev, idle and net) to be READY. Of the remaining programs listed, whether they are run by the system or by the user, the state should be REPLY or RECV. If you see one of these programs in the READY state, it will cause system hangups.

System slow and sluggish

It is quite obvious if the system starts to slow up - you will notice that certain operations take a lot longer than normal, and you may actually hear the hard drive whirring away as it completes tasks. There could be several causes, which should be investigated:-

As discussed above, large fragmented files will cause the system to slow, so you should perform a files +v to check larger files for the number of extents.

Check the size of the ..qlog.. file. This file can get very large and slow the system. If this is the case, simply remove it; a new file will automatically be created.

Hard drive corruption should be checked for by running chkfsys.

You should check that only current servers or drivers are running. This refers to plot servers and calcimeter drivers, for example. Should these programs not be exited properly, the servers/drivers may be left running; as new ones are subsequently started, you will get several running at the same time, with some of them not actually servicing a plot etc. The extra, unused ones should be slayed. If you do not know the actual Tid numbers of the plots you are currently running, you may have to slay all servers and then restart the required ones.

You can run the sac command to see how the processing time is being taken up. If you want to run another program, you can set a higher priority so that it gets precedence. You do this with the command pri=7 for example; any program you then run from that console will run at priority 7, until you change it again (the default is 8). Alternatively, if you can determine which program is taking up the processing time, you can reduce its priority by entering the following command: slay program p=9 for example. Programs with a higher priority will then take precedence.
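For example (the program name below is purely illustrative), the sequence described above would look like:-

   sac                       see how the processing time is being used
   pri=7                     programs subsequently run from this console run at priority 7 instead of the default 8
   slay plot_server p=9      reduce the priority of a task that is taking up the processing time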

Unable to access certain files

This most likely applies to files such as databases. Firstly, check that the required administrators are running. If they are, then the problem is almost certainly that the file has been left busy for some reason, notably a system crash, so that the file was not closed properly. If you are checking a particular file, you should change to its directory and issue the command:-

files +b   or   files +v      (a busy file will be shown by a capital B)

Should you just be doing a general check for busy files, you should change to the root directory and issue the files +b command. Should you have a busy file, use chattr /directory/filename s=-b to unbusy it. Similarly, fopen can be used to check which files are currently held open by running tasks.
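For example, if a report file (the filename below is purely illustrative) had been left busy after a crash:-

   cd /user/datalog
   files +b                               the busy file is listed with a capital B
   chattr /user/datalog/daily.rpt s=-b    clear the busy flag so the file can be accessed again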

Because a system crash is the most likely time for problems to occur, the user should take care when performing routine shutdowns/reboots, ie make sure that all files and programs have been properly closed and that all administrators have been shut down. It is even worth taking the time to run fopen before switching the computer off; the only open tasks should be passadmin and task.

Procedure in the event of uncontrolled shutdowns

Before rebooting, ensure that there is no communication between the computer and the chromatograph. If you have an m200 with a front control panel, either put the chromat into local mode or disconnect the serial cable; for an m200 without the control panel, simply disconnect the serial cable.

If time allows, run a chkfsys to ensure that no corruption has occurred. Check the system for busy files or open tasks.

Before starting the m200admin, ensure that a rogue m200admin isn't running. (In normal operations, two m200admins will be shown when you run tsk. It has occasionally been known for one to remain running after a crash or administrator shut down. This has to be killed before restarting m200admin; dau_kill may not be enough - a further reboot may be required to kill it.)

Identifying causes of system crashes

Check the ..qlog.. file stored in 3:/datalog. This file keeps a complete record of all events happening on the system, whether performed by the user or by the system itself. Checking the events immediately prior to the crash may yield information as to the cause. Any program that fails or crashes will automatically be copied to the 3:/dumps directory. This program should be copied to disk and forwarded to the programmers, who can then investigate the problem. NOTE that this dumping of programs will also occur if a failure is caused by user error, so be sure that this wasn't the case before reporting the program as faulty.

With all program failures, record a detailed account of the circumstances surrounding the failure: what tasks were running at the time; what exactly you were doing at the time of the crash; rig status, ie drilling/tripping/reaming/off bottom etc; what actions you took to remedy the fault. This report, together with copies of the ..qlog.. file and the contents of the 3:/dumps directory, should then be forwarded to your operational base for analysis.

System hang up on reboot

It is a good idea to have the verbose option enabled in the sys.init files so that when the system is rebooting, all of the commands in the sys.init file will be displayed on screen as they are performed. You can therefore see where the system is hanging up - it would normally be something quite simple, like not having an ampersand at the end of a command, so that the console is taken up by that particular task when it is started. In this case the system hasn't actually hung up, just the console is taken up - you can therefore switch to another console, login and seek the problem. By running tsk, you can see which program is running on the other console, stop it with slay, and correct the fault in the sys.init file. Should the hang up be genuine, then there is no problem for any other node on the network - you can simply edit the appropriate sys.init file and reboot. However, if node 1 hangs up, there is nothing you can do about it - you will have to redefine node 2 and boot from there.

Hard drive failure

Should the hard drive on node 1 fail, the only recourse is to switch hard drives or nodes (this is assuming that you have a 2nd node). The procedure is straightforward:-

   Reboot node 2
   Escape into the network configuration card
   Redefine it as node 1 and set it to boot from disk rather than the network
   Reboot

Obviously, this is only of any benefit if you have been backing up configuration files and using hotback to back up the databases. The original node 1 should then be redefined (as node 3, for example) and run from the network. Remember that, effectively, this node no longer has a hard drive. The system will then run principally the same, apart from the following changes:-

   dau_admin should be run on the original node 1, ie the CPU, since this is where the DAU is attached. If it has been redefined as node 3, then you would start the administrator as [3] dau_admin &
   You will have to redefine the ports of any peripheral equipment still attached to the CPU, ie printers, chromatograph etc
   You will no longer be able to run programs such as hotback, or scheduled back ups to node 2 if you had been doing so previously

Checking Network Communication

Should strange things be occurring on the network - slow updates, bogus data, program crashes etc that cannot be explained by any of the things already detailed - you should check that the communication on the network is normal. You should use the netstats command and pay particular attention to the following:-

Min Packet Queue        should never show 0 - if it does, data is being lost
Network Rx Errors       should always be 0 - if not, it means that corrupted data is actually being accepted by other nodes

These faults are very, very rare, but should you experience them, the cause may lie with electrical interference. You should refer these faults immediately to your operations base.

8b. General "User Errors" of Software System Decimal Settings. If data or the calculated derivation of data is incorrect by a factor of 10, then the system decimal setting is incorrect. This should not normally be a problem since these settings are default on the system and should never be changed by the user. Database - Recalc Parameters To save on processing time and to provide speedy access to the database, non essential calculated data is NOT updated automatically in the database. Therefore, when you view the parameter in the database, you will see only zeros. This is NOT an error, but simply means that you have to run the recalc option (F9) to see the values. Use a screen refresh to see the updated data. Viewing Data - User Defined Units The convert program enables realtime conversions, for any available unit, to be made. If your user preferences are in imperial units for example, yet the values displayed are in metric, it simply means that converts is not running. Should you make a change to your user units, the particular program that you are in will need to be refreshed in order for the change to take affect. For example, if you are in a realtime display, change consol and make a unit change, then return to the consol with the display, the units will not have changed. You will have to escape from the display and re-enter. Command Paths If, after typing a command or trying to access a program from the QLOG menu, a message returns that the "command not found", it may be because the directory path of where the system has to look for the command, has not been set or has been lost somehow. Before investigating this, you should ensure that you have entered the correct command, ie that you have spelt it correctly. Do this by listing the contents of either the /cmds (for QNX commands) or the /datalog/cmds (for QLOG commands) directory. If your command is correct, then you should check the path setting. The path is set within the sys.init files. If you type path ?, the current path will be displayed. A typical path may look like:path !!/cmds/!/datalog/cmds/!/Quser/cmds/!

If you create a batch file, then you have to ensure that the following are done in order for it to be run as a command:-

   Change the attributes and permissions to make it executable.
   If only you are going to use it, then the file can be located in your user directory in order to be executed. However, if you want any user on the system to be able to use it, then firstly you should ensure that the permissions are set correctly, and secondly you should copy the file to the /user/cmds directory.

Mathematical Errors

"Exponential or logarithmic function error" - This is seen occasionally after a reboot, or after adjustments have been made to QLOG configuration files. It is caused by a nonsense mathematical calculation made by dau_admin, usually as a result of incorrect hole or pipe sizes or incorrect "equipment" settings. The user should therefore be careful when making changes to these files. For example, ensure that your hole size matches the bit size in the bit database; ensure that no outside pipe diameters are greater than the hole size, etc.

Incorrect lag calculations are also usually a direct result of the user entering incorrect values in the configuration files. Therefore, carefully check the values entered in the hole and pipe profiles, and also the values entered in the pump output file. If the lag calculation is absurdly incorrect, check in the equipment table that no value has been entered in the Air Drill override facility.

Logs or plots crashing at the start usually has a similar cause. The vast majority of cases are simply caused by incorrect scales. This is most common when using log scales - never start a log scale at 0, it will normally be a factor of 10; ensure that the number of divisions entered is equal to the number of cycles defined by the scale range, eg if the scale is 0.01 to 10.0%, you should select 3 divisions. Another cause of plots crashing is the use of the text type in the chart configuration. This should only be selected if you wish to print parameter values as well as, or instead of, plotting them. You should not select this function when you have selected a comments type parameter from the database; the system knows by default that this will be text, so you should leave the chart configuration as the default linear.

Windows or mouse not working

99 times out of 100, these problems stem from the command entered in the system initialization files.

Windows - check that the correct driver has been specified for the video card:-

   qw.vga &                           the normal default for CPUs and 2nd nodes
   qw.vga_bios atiwonder, 1 &         may be a specific type for CPUs
   qw.vga_bios oakland, 1 &           KNSs may use this specific driver
   qw.vga16 g=2 m=5 &                 used by newer models

Should you be certain that the driver specified is OK, then the video card may indeed need replacing.

Mouse - again check that the correct driver has been specified, whether bus or serial, and for serial mice, check that the correct port has been defined:-

   Bus mouse       mdrv microsoft (inport) bus mouse int=3 &
                   (note that com2 in the CPU bios should be disabled if you are using a bus mouse - this will normally be done when the system is set up prior to going to the field)
   Serial mouse    mdrv microsoft serial mouse (200dpi) dev=$mdm &

If this is correct, then you may need to check that the serial port isn't jammed, ie the buffer overloaded. You can clear this by using the command:-

qterm m=$mdm +s

This command will steal the port from the mouse, and in doing so, clear the port. Check again to see if the mouse will function. A final check is to swap the mouse with another that you know is working; in this way you are checking both the port and the mouse.

NB Should windows freeze up on you, you do not have to reboot the computer. This is especially valid if this happens on node 1, or if you are only using one computer. You can escape from windows by using the following keys:-

Ctrl_Print Screen

Having done this, you should ensure that all windows related tasks, and programs that were running in windows, are shut down (eg plots, screen plots). You should use the windows down command or use slay to shut down individual programs.

8c. Hardware Faults

MOST WELLSITE FAULTS REQUIRE SIMPLE SOLUTIONS YET TEMPT USERS TO LOOK FOR COMPLEX ANSWERS. ALWAYS DOUBLE CHECK THE SIMPLE SOLUTIONS BEFORE PROCEEDING FURTHER. A SECOND PERSON'S OPINION WILL OFTEN REVEAL THE OVERLOOKED "OBVIOUS" ANSWER.

i. Hooking up the System

Proper Practices:

   Sensors should be connected at the junction box before connecting multicores.
   The DAU/Elcon should be powered down before attaching the multicore.
   Ideally, the DAU/Elcon should be powered down before any sensors are disconnected/connected at the junction box during the course of a job. Obviously, this is not practical if the rig is drilling, so you should be very careful not to cause a short whilst working on connections - ensure the wire ends do not come into contact with each other, or anything else.
   If checking the seating of, or replacing, microchips, make sure that you wear an anti-static bracelet and/or earth yourself. These chips are sensitive to static and easily damaged if touched by hands with no precautions taken.

No Signals on any channel

If you have no signals coming through to the test mode, firstly ensure that you have the DAU switched on, dau_admin running, and all cables between the DAU and CPU properly attached. If so, the problem is simply going to be that the multicore cable is not screwed in far enough; check at both the junction box and the DAU unit. The best procedure here is to disconnect all cables, then reconnect them all.

Intrinsic (Elcon) System

Should you have incorrect but identical signals (eg 300 to 400 counts) on each analog channel, then the problem is with the DAU not having reset itself. To fix this:-

   switch off the DAU
   disconnect and reconnect the co-axial cable from the DAU card on the CPU
   switch on the DAU

Depth channel continually produces counts when there is no movement, or even when the sensor is actually disconnected

The problem here is with the multiplexer chip (CD74HCT541E) to the right of the barriers. The chip may simply not be seated correctly or may be damaged. This can be investigated by swapping with the depth direction chip immediately below (they are the same). If the chip is damaged, you will now have depth ticks but no direction - this means careful vigilance and use of the invert binary sensor facility until the chip can be replaced.

Analog sensors are showing 4 to 20mA but producing no counts in test mode; low end counts are abnormally high; minimum to maximum count range is abnormally low

All of these symptoms are produced by a faulty analog multiplexer chip, which will need replacing. The chip is CD4051BCN, again located to the right of the barriers. A restricted range of analog sensor counts, as described above, may also be the result of a faulty chip on the DAU card inside the CPU. This chip, MM74C901N, will need replacing.

ii. Faults during normal operations

Sensor or Circuit Failure?

Here you should check the test mode in QLOG. If you are showing 4mA, then you know that the current loop is intact and that the fault is due to signal loss from the sensor. If you are showing 0mA, then you are dealing with a broken circuit.

Circuit Failure

The way to go here is a methodical approach and a process of elimination. Initially, try to identify whether the fault is coming from within the logging unit or external to the unit. Firstly, the barrier can be checked simply by swapping with a spare, or another barrier, if possible. If you still get no signal, then you know that the barrier is okay and the problem is external to the unit. When swapping barriers you have to be careful that you are using the correct type. This means either digital or analog for a normal DAU, but is more involved with an Elcon system where there are several different types - check the number on the barrier and the configuration sheet. Also, when swapping cards or barriers, you have to be careful about causing a short. Ideally, you should switch the DAU off before removing barriers, but obviously this will not be practical if the rig is drilling ahead and you are recording data.

If you are unable to swap barriers, you will need to test currents and voltages. Disconnect the sensor wires from the hazardous side of the barrier.

For an analog channel

Connect a spare sensor or a loop calibrator to the hazardous side of the barrier, thereby completing the current loop. You can then induce a signal from the sensor/calibrator and check the readings coming through on the test mode. If the signal is coming through, you know that the barrier is okay. At the same time, you can check that you have a voltage by connecting your voltmeter in parallel across the hazardous side terminals. If necessary, you could also connect an ammeter in series within the loop to check the current signal. Remember, you should be reading between 4 and 20mA, and 24v.

For a digital channel You have to create a switch. Simply connect 2 wires to the hazardous side of the barrier. By touching the wires, you complete the circuit and this should result in a pulse. This again, can be checked by viewing the pulses in the test mode.

Elcon 3 wire barriers (torque, temperature, flow paddle) cannot be tested in the same manner, but if the barrier fails, the signal returned will be maxed out - you will therefore see 4095 counts in the test mode.

If you have signal failure on all channels, you are going to be checking the connections of the multicores, or that power is still being supplied by the DAU - check the green LEDs and fuses.

External to the unit

Firstly, check the obvious: connections at the junction box and at the sensor. You are then going to check loops in the circuit. Check the loop between the unit and the junction box. Disconnect the wires on the unit side of the Jbox and put your loop calibrator/signal source in series to complete the circuit. You can then check in test mode whether you are receiving the signal, or you could also place an ammeter in series and take your reading from there. Check the loop between the Jbox and sensor by reconnecting the wires at the Jbox, then disconnecting the wires at the sensor. Again, place your calibrator in series to complete the loop (the loop is now sensor-unit, but should there be a fault you know it is between the sensor and Jbox because you have already eliminated the remainder of the circuit) and test for your signal as described above. Should you still be getting a signal, then the faulty circuit originates from the sensor, so you are going to be checking its internal wiring/connections.

Sensors

It is unusual to have electronic sensor failures. Sensors usually cease to function when they have been mechanically damaged or have wiring problems. 95% of failures can be associated with wiring faults, bad connections, trapped or broken wires, multicores not screwed home etc.

Internal DAU Cards

These rarely fail. They are best adjusted only by technicians at the operations base. The voltage thresholds will not normally change during normal wellsite use. If possible, replace the card or use an alternative channel configuration in the software, rather than attempting to "fix" the cards.

DAC Card

Again, this rarely fails. The only repair is unit replacement.

Computer Cards

Again, these rarely fail. If you have spares, you can make replacements to try and identify which card is faulty. For immediate action, however, you are best to swap over and use node 2 as the principal server, and only try to effect fixes at a time when logging is not required.

Hard Drives

Again, these rarely fail, but they must be recognised as mechanical devices and thus, at some point in time, may eventually die. Software backups are therefore essential. The immediate action would be to use the alternative server (ie node 2), and attempt to boot the failed unit off the new server. Try to access the "failed" hard drive and identify the problem (you may have to mount it).

ALL HARDWARE FAULTS, SUSPECTED FAULTS AND/OR THE CONSEQUENTIAL ACTIONS TAKEN AT WELLSITE MUST BE RECORDED ON THE APPROPRIATE REMEDIAL ACTION FORM AND RETURNED TO THE OPERATIONS BASE SO THAT QUALITY CONTROL CAN BE MAINTAINED.

8d. Depth Related Problems (crown sheave sensor)

The vast majority of crown sheave depth related problems are the result of incorrectly aligned targets. Be sure that the sensor is mounted in a way that ensures that the targets are counted correctly. This means that each target should be large enough to fully accommodate both sensors at the same time, and that the spaces between targets are large enough for the same reason. When installing, check that every target is producing a pulse on both sensors, and check with the wheel going in both directions. If any target/sensor proximity is marginal, either replace/reposition the target or realign the sensors. Be sure that a target has not been damaged or come adrift during drilling - this will only worsen and produce erroneous or no pulses.

Use an appropriate number of targets in order to get the best depth resolution possible. This may have to be as few as 2 or 3 targets for small sheaves, but preferably you should install at least 5 or 6. Always use the fast sheave for mounting targets and locating the sensor, because this wheel rotates more per unit of vertical movement of the blocks. The fast sheave will normally be located at one end of the sheaves, or offset from the others, and is usually a bit larger.

Ensure that the targets are being sensed correctly and producing the correct sequence of pulses. If not, the direction could well change intermittently. Also ensure that there are no spurious activations of the proximity sticks. The order of activation is:-

Prox stick 1    OFF   ON    ON    OFF   OFF   ON    ON    OFF   OFF
Prox stick 2    OFF   OFF   ON    ON    OFF   OFF   ON    ON    OFF

One complete cycle of this sequence produces one depth tick.

This sequence can be routinely checked by viewing the LED lights on the depth board, which is normally mounted with the DAU/Elcon unit - this saves trips to the crown! (In some cases, this board may be housed in the small junction box at the sensor.) A change in direction will occur when proximity sensor no.2 is activated first in the sequence.

The sensors should be about 5mm, maximum 10mm, from the target material when in use. The counts will then be seen in test mode, as will the direction changes on channel 9. The depth itself will not work properly unless the hookload is operational and calibrated; this is required for the computer to detect in/out of slips.

If depth inaccuracies are seen over a connection, ie the computer is still a few feet off bottom when we are in fact drilling, or the computer reads on bottom when we are still a few feet off bottom, then the fault will be that one target is not being sensed in both directions. Inaccuracies are often seen during tripping - sometimes the pipe is being moved so fast that the sensor simply cannot keep up with the number of pulses - and this is often worse in one direction than the other. This is rarely seen during slow tripping or normal drilling operations.

Possible problem with DAU cards (not Elcon)

Occasionally, counts and direction changes are seen in the test mode but not in the realtime display. This means that the targets are being detected, but are not producing a strong enough signal to register on the system.

i)  The sensor needs to be slightly closer to the target material.
ii) The DAU card thresholds may need to be reduced to increase their sensitivity. (This is not normally required, but could save a visit to the sheave to reduce the gap between sensor and target material.) You should connect your voltmeter across the Ground and TP1 terminals, and adjust the voltage on the P1 terminal. You should read 7 volts.

8e Chromatograph

Generally, most faults with the chromatograph are due to user error and are easily traceable. Should the chromatograph actually break, there is little that can be done in the field; the unit will normally have to be returned to base and replaced. Genuine problems with columns are often the result of particles entering and causing blockages, therefore when transporting chromatographs, ensure that all inlets are covered.

Status reads unplugged

   ensure the chromatograph is switched on
   ensure the m200 administrator is running
   ensure the correct port is defined in m200setup

Ensure the port has the correct settings, and is connected via the special null modem serial lead used by the chromatograph: stty baud=9600 par=none stop=1 bits=8 >$cti1

Filaments will not turn on

This is a problem with helium pressure - the filaments will not turn on if the pressure at the column head is less than 5 psi. Therefore, check the bottle, regulator, leaks etc to ensure that helium is reaching the chromatograph at the correct pressure. Occasionally, when first setting up the chromatograph, the filaments do not turn on; this will normally just require you to resave the method with the filament setting on.

Slow drop off of gases

Firstly check that you have good sample flow through the perchlorate filter and change it if necessary. If this is okay, the problem may be due to the sample not being pushed through the columns at a high enough pressure. Normally, you can expect the columns to clear in a couple of injections; should it be taking several minutes or longer, the pressure in the columns is too low. Should it be occurring on both channels, then the fault is probably with the helium supply - check the regulator, helium line and filter for any blockage. Should it be occurring on only one column, then the fault is internal, with a blockage in that particular column. Check the sample exhausts at the back of the chromat to confirm this: you should feel a little puff after each injection. Should you not feel this, then there is a blockage.

No gas on one channel

This is probably a more serious case of the above, where the column has become completely blocked. Another possibility is the autozero: should your base line on the chromatogram be offscale, you should check the value of the autozero in m200setup - if it is +/- 450 mV, then the column needs replacing. The worst scenario is that the injector is not working; the chromat will have to be returned to base if this is the case.

No gas on either channel

It is unlikely that both channels will become blocked, so the blockage is likely to be before the chromat - check for restrictions in the sample tubing; check that there is flow through the perchlorate filter - this may need changing or may be packed too tight to allow sufficient flow; the unlikely final possibility is that the filter inside the sample port has become plugged.

Generally spurious readings

Strange readings, abnormal extra gases, peaks moving etc can normally be put down to incorrect settings of pressure and temperature. You should check that the CHP Scale, Temp Scale and Offset are set correctly for that particular column. This should always be checked as a matter of course when first setting up the chromat.

When you run the administrator, "m200admin &", two m200admin tasks will appear on the "tsk" list. If, on using dau_kill, one task remains, you will have an ongoing fault, as the next startup of m200admin will add a further two m200admin tasks and thus you could incorrectly have 3 in total. A reboot may be the only way to remove this rogue task.

You should avoid booting the computer while the chromatograph is on line. For the chromatograph with a front control panel, put the chromat into local mode. For the chromat with no control panel, simply disconnect the serial cable before rebooting.

8f Plotters/Printers

No life in the plotter

Check the fuses at the back of the machine

Printer or Plotter not loading from memory

Check that both Intellicards are present and/or seated correctly. Swap with cards that you know are working to confirm the fault.

No communication

Check that you have the plotter and port correctly defined in Printer Controls. Check that you have the correct interface defined in the plotter's own setup (ie parallel or serial).

The printer and plotter self tests will verify that the cards and printer itself are okay.

Printer head starts jumping and plot becomes misaligned

The carriage bar has become dirty with dust and/or ink. You should clean it thoroughly and coat it with a thin layer of machine oil to allow smooth movement of the printer head.

Quality poor and/or parts not printing

The pinheads on the printer head may be damaged or need cleaning. If damaged, the head will need replacing.

Colors are incorrect

Check the ribbon alignment (printer setup option 51). The horizontal bars on the A and H should be blue and red alternately.

One line is plotted, then the plotter advances a page before plotting the next line

This is a fault with option 15 in the setup. Auto FF should be set to Abut.

Common error displays on the printer itself may be:

Buffer Overflow
The plotter is still being fed information from the computer but cannot save any more data in its memory. The plot needs to be stopped at the computer and the plotter's memory cleared by pressing the 'clear' button several times until all plots and data have been cleared.

Framing Error
When the plotter is running from a serial port, the speed of communication between the plotter and computer needs to be the same (ie baud rate 9600). A framing error means this is not set properly. To set the baud rate on the computer, type:

stty baud=9600 >$cti1 (or $mdm etc)

This needs to be edited or input into the /config/init.cti file so it is read every time the computer is booted up. To display the serial port settings on the screen, type:

stty baud <$cti1

Plotters on a serial port also need to be set to 'handshake DTR'.

Load Intellicard
This is usually due to the card not being seated correctly in its slot. It may get dislodged during transit, or, unusually, the intellicard may have died. If neither the printer nor plotter can be accessed, ensure that the cards are seated properly and swap from one plotter to another to confirm that the problem is the card.

Smartcard ports can become 'log jammed' with information and will stay that way until the computer is rebooted. The easiest way to solve this is to swap to another cti port. Remember to change baud rates and plotter starter files.

SECTION 9 - click here to go to main menu DATAUNIT PROCEDURES

9.1   Rig Up
      a. External Rig Up
      b. Internal Rig Up
      c. Channel Configurations and Calibrations
      d. Completing the System Setup
      e. Completing the Equipment Setup
      f. Set up and Calibration of the Gas System

9.2   Preparing the System

9.3   Preparing the Realtime System

9.4   Daily or Frequent Procedures while Drilling
      a. The Realtime QLOG System
      b. Logging
      c. Equipment

9.5   Reporting

9.6   Backing Up Data
      a. Time Database Backups
      b. Daily Depth Database Backups
         i. unit
         ii. remote
      c. Final Well Backup

9.1 RIG UP

This is a general guide only, intended to cover all of the necessary procedures required to prepare a unit for operation. Each individual unit will obviously vary in what exactly is required. The order in which procedures are detailed here is a sensible order but, again, is not intended to be absolute. External rig ups, in particular, are largely dependent on rig operations at the time.

9.1a External Rig Up

Rig up all of the external sensors and site the junction boxes. When positioning the sensors, take note of the points highlighted in Section 1 of this manual (eg for pit and mud sensors, H2S sensors etc). Make sure that the important sensors (eg depth, pits, gas) are rigged up first when time is short, so that, even if the rig up can't be completed before drilling commences, the essential service can still be provided.

When siting the junction boxes, make sure that the multi-core cables will reach back to the unit. DO NOT connect the multi-cores to the DAU until all Jbox wiring has been completed. There will normally be two junction boxes, both of which can usually be sited in a central position at the pits; ie Jbox 2 will take most of the pit and mud sensors; Jbox 1 will have to take the depth and the pump strokes, so that the pits are still in a central position.

Install the gas trap assembly and run the power cable and two polyflow lines back to the unit. At least one spare line should always be run so that they can be quickly swapped over in the case of mud entering the line, excess moisture, freezing etc. Make sure that you mark the lines, at the unit and at the gas trap, so that they are easily identified.

For all cables and lines run, make sure that they are run correctly and secured neatly with cable ties - along cable trays if possible; if not, certainly route them above ground level to minimise the risk of damage. This not only minimises potential problems for yourselves but also leaves a good impression with the contractor and the client. A sloppy rig up will leave a bad impression of unprofessionalism.

Connect the sensors at the Jboxes, paying attention as to whether the sensors are 2, 3, or 4 wire sensors and whether the shield is required. Make sure that the individual wires have clean contacts and are connected to the correct terminals (ie red to +, white to the numbered signal terminal, black to (-) or ground). Make sure that you connect the sensors to the correct channel numbers by following the unit configuration sheet, or, if there isn't a prepared configuration, ensure you record it as you go along.

You should leave copies of the configuration sheet in each junction box and in the unit.

Make sure all C-clamps are well greased to prevent seizure and to allow for an easy rig down. After confirming with the electrician or mechanic, get the main power cable connected and run to the unit. Run co-axial cables if you are going to be setting up a network.

9.1b Internal Rig Up

In cold climates, allow all equipment to warm up prior to switching on.

Connect the interfaces from the DAU to the CPU. This interface comprises the following:-

Ground
Sensor - the internal gas system
Power - power to the boards or barriers
Co-ax - analog signals
60 pin ribbon - digital signals, MUX board multiplexers

Set up the CPU with monitor, mouse and keyboard.

Connect the multi-core cables to the DAU unit.

With switches on, connect CPU and DAU power cables to the UPS (uninterrupted power supply). Switch on the UPS. The CPU and DAU will now power up.

NB: the UPS has the facility to run 4 power cables; these would normally be occupied by the DAU, the CPU (including monitor) and the m200 chromatograph. The 4th one could be used by the active hub or perhaps a printer.

Once the computer has booted, login and start dau_admin (dau_admin &). Make sure you can access the QLOG system. Check that windows and the mouse are working; if not, check the sys.init file for correct drivers and port. With the computer okay, you are ready to start configuring the system, calibrating sensors and hooking up the rest of the unit equipment.

9.1c Channel Configurations and Calibrations

Ensure dau_admin is still running; start the convert program (convert &). Enter DAU (non-intrinsic) or Elcon (intrinsic) in equipment. Enter Sensor Configuration and make sure that each sensor, analog and digital, is configured correctly according to the configuration sheet.

Confirm with the drilling engineer or geologists as to the units that you should be recording parameters with; enter User Unit Preferences and ensure that each recorded and calculated parameter is set accordingly.

Enter Test Mode and confirm that you have signals from the sensors.

Enter Analog Calibration and enter what calibrations you can (see Section 1 for calibration procedures):-

Mud Density         500 to 2500 kg/m3 *
Mud Temperature     0 to 100 C *
H2S                 0 to 100 ppm
Mud Conductivity    0 to 100 mS
Pump Pressure       0 to 5000 psi *
Casing Pressure     0 to 10000 psi *
Ambient Gas         0 to 5%

* These can be checked and confirmed at a later stage

Pit Levels - according to the rig values or your own measurements
Mud Flow and Torque - wait until they can actually be measured
Depth and Hookload - if possible, enter ticks/100m in the Equipment table and, after the hookload calibration, set your threshold values. Ensure that the direction is correct. If not, switch Channel 9 in Invert Binary Sensors as a short term solution. Go to the crown and turn the prox sticks around for a long term solution.

9.1d Completing the System Setup

Enter Printer Controls and ensure that the printer ports ($lpt and $lpt2 for node 1, any others that may be connected to other nodes) are defined and that a Report and Local printer are also defined. If these need to be added to the menu, use the command:

prt_ctl -l

Enter Pit Setups and define any Pit Totals within the system - you should confirm with the mud engineer or derrickman as to what system they are going to be using.

Check your text display screens and make any changes necessary in the Create Display option.

9.1e Completing the Equipment Setup

Connect the plotters to the parallel ports on the CPU. Check that they are functioning by performing the Self Tests in both printer and plotter modes. Ensure that there is communication with the computer by copying a text file to both destinations.

Set up the second node with monitor, keyboard and mouse. Ensure that it boots and that windows and mouse are working. If there are going to be no other nodes, connect node 2 to node 1 directly with network cable, reboot node 2, changing the network file and ensuring that it boots as node 2 from the network.

If other nodes are going to be used, connect all nodes to an Active Hub. Boot up each node from the network and ensure that all are functioning correctly.

9.1f Setup and Calibration of Gas System

Set up and calibrate the chromatograph (see Section 4 for full procedure):

1. Start the m200admin program
2. Enter m200setup and define the correct serial port
3. Attach metal tubing to regulator and blow out with a little helium
4. Attach He tubing to the chromatograph
5. Turn on regulator to 80 psi
6. Connect chromatograph to the UPS, switch on and connect serial cable
7. Check method and setups, especially the scales and offsets
8. Calibrate from windows

Calibrate Total Gas Sensor

1. Check that CC switch point etc are set correctly in equipment
2. Switch on the sample pump on the front of the CPU
3. View the counts in test mode; when the signal is stable, zero the CC trimpot on the front of the CPU so that you have 0095 counts (this is the 0% calibration)
4. Apply your test gas (normally 2.5% methane) to the sample port at the back of the CPU, maintaining a constant flow of 5 scfh on the flow gauge. The final, stable reading (counts) will be your high end calibration.
5. Repeat this process for the TCD detector using high end gas (normally 99% methane)
6. Check the operation of the CC detector (switch point and shut off point) when applying the high end gas

Calibrate the internal H2S sensor. For zero gas, check the number of counts in the test mode. Your maximum calibration should be 100ppm. Check this accuracy by applying your test gas at the sample port, maintaining constant pressure as above.

Plug the polyflow gas line into the back of the CPU, switch on the sample pump and check that you have good suction at the gas trap.

Check the time it takes for gas to go from the gas trap to the mudlogging unit (butane from a cigarette lighter is ideal for this, ie its cheaper than calibration gas). You should use the Total Gas Sensor for this, rather than the chromatograph, since it is a continual detection. Enter the time, in seconds, as Gas Pump Time in the equipment table.

9.2 Preparing the System

For the time being, shut down any administrator programs that may be running. Ensure that the time zone offsets and system time are set correctly (Section 6.4).

Remove any old or unwanted files from the system, eg:

3:/datalog/dbms          rm    dbdepth.qlog
                               dbdepth.crc (may have to use zap)
                               dbdepth.index (may have to use zap)
                               time96......qlog
                               survey.dat
                               bit.dbase
                               dbdepth.lmap
                               dbdepth.bmap
                               dbtime.bmap

The only files you should have here are:

4:/datalog/dbms          rm    dbdepth.qlog
                               dbdepth.crc
                               dbdepth.index

3:/tmp                   rm    any files
3:/user/....             rm    any old report files etc
3:/datalog/chrom_dat     rm    any old chromatogram files
3:/datalog/plots/data    rm    any old XYZ plot data files
3:/datalog               rm    ..qlog..
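As a minimal sketch of the clean-up for the first of these directories (the file names are examples taken from the table above; the exact list will depend on what the previous well left behind):

cd 3:/datalog/dbms
rm time96*.qlog survey.dat bit.dbase dbtime.bmap
zap dbdepth.crc          (use zap for any file that rm will not remove)
zap dbdepth.index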

Once you have done this, perform a system check:

chkfsys 3 +r
chkfsys 4 +r

If any corrupted files are found that cannot be fixed by the chkfsys program, zap the corrupted files and re-run chkfsys to recover the blocks and re-write the bitmap.

Restart dau_admin &

Create user accounts, if necessary, via the authorize command

If the geologist, engineer etc have nodes on the network, ensure that they are able to login and check the requirements for the restricted qlog menu. Adjust the dial files if necessary (see section 6.12). Also, check their requirements for unit selection and change if necessary.

Start the other main administrators:

converts &
dbadmin d=4:/datalog/dbms &     (this will create dbdepth.qlog plus the index and crc files in drive 4:/)
upd_prof &
plot_admin &
m200admin &
flowalarm &     (should the alarm activate here, yet the sample pump is off, you need to invert the signal - Switch 10 in Invert Binary Sensors)

It is not necessary, at this stage, to start dbdepth, dbtime or hotback. This can wait until you are nearly ready to start recording data.

Check to see that the administrators are running by typing:

dau_kill

Create, if not already done, or modify existing script files:

i. A Mudlog - check requirements with the geologist
ii. Any other depth databased logs that may be required, such as Drilling Parameters, Pressure Log - again, check requirements with the geologist, engineer
iii. Realtime Mud and Gas parameters plot
iv. Realtime Drilling parameters plot
v. Realtime Trip in and Trip out plot

9.3 Preparing the Realtime System

Enter all the information that you can into the Equipment table:

Depth Method
Ticks per 100
Log interval (ie depth database record interval)
Time interval (ie time database record interval)
Average stand length
Start Depth (ie depth at which database will start)
ROP average interval
Amps per Ftlb (torque conversion if required; get a table from the toolpusher or mechanic)
Gas Pump Time
RPM gear ratio (if sensor is not on rotary table - get value from driller or mechanic)
Pressure Gradient (normal for the area)
MD override (if no density sensor)
Theta Values (from mud engineer)
Surface Conn Loss (use default 0.5 initially)
Mud Motor details if required

In particular, at this stage:

Prior to drilling out:

Enter the Profiles; certainly the existing Hole and Casing profiles. As soon as you have BHA details, enter the Pipe profile.

Enter Pump Data, ie the mud volume per stroke for each pump, and the efficiency. Get the stroke length and liner details (derrickman) and use the pump output program to calculate the pump volume. Confirm with the toolpusher and the engineer as to the efficiency of the pumps.

Enter any pre-existing Bit Data (if not starting at spud) and details of the current bit as it is run into the hole.

Enter any pre-existing Survey Data (if not starting at spud).

Ensure that the current Hole Depth is set correctly by using Depth Adjustments.

Enter all the well information into Well Data - this is used for all log headings.

As they are preparing to drill:

Ensure that the new bit run is entered and started, if not already done.

If not done before, enter BHA and drillpipe details into the Pipe Profile.

Set the bit depth correctly in Depth Adjustments.

Start the dbdepth and dbtime administrators; make sure that the other administrators are still running.

Check pre-set calibrations against rig values.

Set calibrations for torque and mud flow when comparisons are available.

Set Personal alarm limits on all important parameters, ie gas, ROP for drill breaks, pits for losses and gains, mud flow, H2S, pump pressure etc.

Check the gas trap level in the header box; start the pump, total gas and chromatograph running.

Once drilling has commenced, check that the database is updating.

As drilling continues and stringweight increases, reset your hookload calibration and slip thresholds. Keep a check on the WOB, and reset if necessary.

9.4 Daily or Frequent Procedures While Drilling

9.4a. The Realtime QLOG System

Ensure that the depth tracks correctly. Particularly at the beginning of a well, but also throughout, check the depth at every kelly down. If the depth is not following rig depth correctly:

i. Check that slip threshold values are correct, and that in and out of slips is being registered correctly. If not, reset the hysteresis values.
ii. Check the calibration, ie ticks per 100m, and slightly adjust if necessary.
iii. Check the targets on the shiv to ensure that all are being sensed correctly.

Keep a constant check on alarm settings and keep them set so that they are doing the required job; ie it's pointless having pit gain/loss set at +/- 5 m3, for example.

Keep mud theta values up to date in equipment table, so that real-time and stored hydraulics are correct.

Keep the kick kill program updated every time that the driller records SCR pressures.

Set up a pressure overlay (normal compaction trend) for the well and make sure that the trend is checked regularly.

Keep overburden, fracture and formation gradient calculations up to date

Keep the depth database editing up to date.

Make sure you update the survey program every time that surveys are taken.

Keep real-time plots going 24 hours a day. One plotter should have drilling parameters (and switch to a trip plot when tripping) and another plotter monitoring mud and gas parameters.

Update Well Data as the well proceeds

Keep regular data backups to floppy and node 2

9.4b. Logging

Make sure that you use the Trip Mode and the real-time plots when tripping. Preferably, keep a manual trip sheet and monitor losses and gains extremely carefully. Monitor ALL trips, casing runs and cementing jobs carefully. All data MUST be written down clearly and kept together, using both trip sheets and the diary. Remember, a different logging crew could well be quizzed by the company rep. on losses/gains on a previous day's trip. Trip data can be sent to the /datalog/trips directory in a report form.

Perform regular lag checks; if washouts occur, use the Lag Volume Adjust in the equipment table (this is the extra hole volume created by the washout)

Print out logs regularly to check for accuracy, consistency and corrections; keeping this up as you go along will save a lot of time at the end of the well.

Keep an up to date hole and casing profile, for easy reference, with all necessary annular and string capacities, pipe and collar displacements

Keep a Unit Diary up to date and complete with as much pertinent information as possible. This should be information pertaining to the actual well itself, together with software or equipment problems/changes, troubleshooting and fixes etc.

Keep the Final Well Report as up to date as possible while drilling. Thus, not only is the report compiled while the information is fresh in your mind, but it will also save you a lot of time at the end of the well.

Keep a daily record of the IADC report and include in FWR if required

Update the Days versus Depth plot daily

9.4c. Equipment

Check the gas trap regularly for the level of mud, mud entering the line, and wet or hard calcium chloride in the drop out jar.

Keep a regular check on all filters and driers; change when required (see section 4).

Check suction at trap regularly

Activate the H2S sensors every 2 days for correct operation. Check the sensor for condensation.

Run calibration gas through the chromatograph every few days if possible and re-calibrate when necessary; the reading of the chromatograph should be accurate to within 10ppm.

Check and zero the CC and TCD gas detectors daily. Their operation will be affected by changing flowrate and temperature. With the suction pump on, but sample line off, re-zero the trim pots so that you have 95 counts in the test mode.

Do a visual check of the sensors at the start of your shift, ie check that they are in position, operating, clean, secure etc.

Clean mud sensors regularly (ie each shift) especially the ones at the shakers. Check for cuttings build up in the header box.

If using float pit sensors, check each shift that the floats are free.

During bit trips, do a thorough clean of all sensors. As well as the cleaning detailed above, make sure that all C-clamps are well greased; check that the pressure, hookload, torque etc sensors are not becoming embedded in mud, oil etc.

Keep the logging unit clean and dust free; computers should not be exposed to a dirty, dusty environment. This will also reflect a professional, well controlled service image to the Client.

Maintain the printers; clean out any pieces of paper that may have collected inside; clean and oil the printer head bar; carefully clean the printer head; check condition of ribbon and the ribbon alignment.

9.5 Reporting

The reporting requirements will vary between regions, as will the reporting at the wellsite and the contents of the final well report, depending on the type of operation and the client's requirements. The following is intended as a guide only.

Unit Operation

Equipment Failure Report
Operational Status Form - to be signed by the Company Rep when the unit becomes operational and when it is released.
Sample Dispatch Manifest - for all samples dispatched from the rigsite, to be signed by the Company Rep or Geologist.
Corrective Action Report - eg requests/suggestions/failures/improvements etc.
Goods Received and Requested - keep copies so all crew members know what's been ordered, what's en route to the rig and what's arrived.
Bug Report - should any be found! Report to operations base.
Inventory - keep an ongoing list of supplies needed.
Calibration Record

Morning Report

Determine the requirements of the Company Rep and Geologist and draw up a Report Form accordingly. Determine whether a hydraulics report is required.

Final Well Report

Keep this up to date as much as possible while drilling. Use the example report in the Dataunit manual as a basic format, but again, include any components that are specifically requested by the client. Try to include as many plots, tables and diagrams as possible. This format is generally preferred to a lot of text.

Suggested Contents:-

General Well Information - Operator, location, spud and TD date, objectives, hole and casing depths etc
Days vs Depth Plot - Prognosed against Actual
Mudlogging Services - Outline of the operation, equipment etc
Geological Prognosis
Engineering Report - Perhaps by hole section, or for individual bit runs; explanation of bits/BHA used, deviation, hole problems etc
Formation Tops
Geological Report - For each Formation, full description and depths
Core Report
Gas Report - Probably best done for each Formation; include background levels, shows, produced gas etc. Tabulate figures.
Pressure Analysis - Include XYZ plots of pressure profiles, parameters against depth (eg temp, cond, bulk/shale density). Include Leak Off Tests and any other measurements.
Deviation Report - Include survey listing and plots
IADC Time Analysis - ie breakdown of daily operations, % for each operation; do for each hole section and for the well as a whole
Bit Record - Use the one from the database or create one yourself
BHAs - Probably more relevant for deviated wells
Casing & Cementing reports - Breakdown of each operation, losses incurred etc
Leak Off Tests - Maybe plots of pressure testing; use the engineering program to provide plots

The Unit Manager is ultimately responsible for ensuring that the Final Well Report is factually accurate and satisfactorily completed; that all logs are neat and edited; that all forms have been completed. If any data is incomplete, the Unit Manager will be required in the office to complete the job at the end of the Well.

9.6 Backing up Data

The time and depth databases will automatically be written to node 2 if the hotback program is running. However, additional backups should be completed as a matter of course.

a. Time Database Backup

All time data is saved in 3:/datalog/dbms. A new file is created daily (at 23:59) and has a date identifier, eg:

time961023.qlog
time961024.qlog
time961025.qlog

Once the day is complete you can back up the data by compressing the time file into an archived (zoo) file. Any time file in this directory with the '.qlog' extension will appear in the time database; once it is zoo'd up and removed, it cannot be viewed in the database until it is extracted. So, leave a few days' data, in hand, in the directory.

The reason for compressing and removing is not only for backup. Each time file can take up to 3000 blocks of disk space, so a lot of time files will take up a large proportion of the disk space. Any resulting problems are avoided by just keeping the last few days' time files on disk.

To compress a file:

zoo ah time1.zoo /datalog/dbms/time961023.qlog
zoo ah time1.zoo /datalog/dbms/time961024.qlog
etc

Use the query command to keep a check on the size of the zoo'd file. The maximum a floppy disk can take is 2880 blocks, so when the file reaches 2500-2600 blocks, you should not add any more files to the archive. time1.zoo should then be backed up to floppy:

cp time1.zoo 1:/

The original time files can be removed from the hard drive to save disk space, and another archive file begun:

zoo ah time2.zoo /datalog/dbms/time961025.qlog
zoo ah time2.zoo /datalog/dbms/time961026.qlog
etc

If you should have to restore a particular day's data in order to view it in the database:

First, find the correct zoo file by listing the zoo file contents (although the days contained in the file should also be recorded on the floppy disk label):

zoo l time2.zoo

Copy the file to the temporary directory:

cp 1:/time2.zoo /tmp

Extract the relevant file:

zoo x /tmp/time2.zoo time961026.qlog

b. Daily Depth Database Backup

The depth file 'dbdepth.qlog' cannot be copied directly whilst drilling and the depth administrators are running; therefore it is backed up daily by making a 'temporary image' of it with the 'dbget' command.

i. Daily 'unit' Backup Procedure

Ensure that you login as the same user each time. This is so that the files created are always in the same user directory.

Type 'dbget' at the prompt. The computer will read all of the records in the database and return a message giving the total number of records read when the operation is completed.

Two new files will have been created in the user directory:

dbdepth.newlog - the image of the database
depth.crc - an index of the records read

Copy the dbdepth.newlog file to a floppy disk or a different hard drive as the backup. Remove the depth.crc file (before you run dbget the next time).

If you ever need to restore a depth database, copy dbdepth.newlog to the user directory and type 'dbput' at the prompt. The data will then be automatically restored to the database.
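As a minimal sketch of that restore, assuming the backup image was copied to a floppy in drive 1:/ and that you are logged in as the usual backup user (3:/user/<NAME> stands for that user directory, as used elsewhere in this section):

cp 1:/dbdepth.newlog 3:/user/<NAME>        (copy the image back into the user directory)
dbput                                      (restores the records to the depth database)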

ii. Depth Database Backup for Remote Transmission

The difference in this procedure is that, for preparing a backup file for transmission, we need to keep the file size to a minimum. This is done by using the depth.crc file when dbget is run - only newly created or changed records will be extracted. These can then be added to the remote database rather than recreating the whole thing. It's very important that you login as the same user each time, because the correct depth.crc file has to be accessed from your user directory.

Type 'dbget'.

Zoo up the created dbdepth.newlog file and send it, via Qterm, to the remote computer.

Once the zoo'd file has landed at its destination, extract the dbdepth.newlog file and make sure that it is located in the user directory.

Type 'dbput'.

Go to dedit and check that the database is there.

The next time that dbget is run at the wellsite, DO NOT REMOVE the depth.crc file in the user directory, as this ensures that only updated records to the database will be extracted. If sending data remotely and keeping unit backups, use one user directory (keeping the depth.crc file) for the remote dbgets, and use a different user directory (removing the depth.crc file) for wellsite backups.
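A minimal command sketch of one remote transmission cycle, using only the commands described above (the archive name depth1.zoo is an example):

At the wellsite:
dbget                                      (with depth.crc kept, only new or changed records are read)
zoo ah depth1.zoo dbdepth.newlog           (compress the image ready for sending via Qterm)

At the remote computer, with depth1.zoo received into the user directory:
zoo x depth1.zoo dbdepth.newlog            (extract the image)
dbput                                      (adds the new records to the remote database)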

c. Complete Final Well Backup

At the completion of a well, all data and files should be backed up. This may be required by the client, but it also allows the well to be re-created at a later date to produce follow-up data for the client. You have to be certain, therefore, that you include all the required files.

Final logs, ie mudlog, pressure log, composite logs, drilling logs etc, should all be plotted to files once they have been edited and are complete. This allows for easy reproduction - the file can simply be copied to a printer, rather than having to recreate the database, units, plot files etc.

The plotting of logs to a file is very easy:

Go to Printer Controls and set the file output file name (eg /tmp/mudlog_plot).
Go to the starter file (Plotter Setup) for the correct control file and select this output file as you would select a printer.
Start the plot in the usual way, ie from Plotinfo or by using the 'plotter' command.

Once the file has been created, it should be backed up to floppy disk. You should obviously zoo it first. A printed copy can be produced by copying this file to a plotter, ie 'cp mudlog_plot $lpt &'.
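As a minimal sketch of backing up and reprinting a plot file, assuming /tmp/mudlog_plot is the output file set in Printer Controls and logs1.zoo is an example archive name:

zoo ah logs1.zoo /tmp/mudlog_plot          (compress the plot file)
cp logs1.zoo 1:/                           (back the archive up to floppy)
cp /tmp/mudlog_plot $lpt &                 (produce a printed copy on the plotter)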

At the end of the well, it is a good idea for the Unit Manager to complete a general report which should include:

All QLOG bugs encountered
Details on ALL computer crashes/freeze ups
Any hardware problems
Details of the Final Well Report and Logs
Report on personnel and evaluation of the job/service
Any other useful information

For a complete software wellsite backup, the following files are needed (shut down all administrators first). All of the files should be added to a single archive file which should be suitably named for the well or client.

4:/datalog/dbms
    dbdepth.qlog
    dbdepth.index
    dbdepth.crc
    dbdepth.bmap
    dbdepth.lmap

3:/datalog/dbms
    time*.qlog (backup separately as already described)
    survey.dat
    target.dat
    bit.dbase
    tvd.cfg
    header.dat
    tomb.dat
    dp.cfg
    <filename>.hpgl

3:/datalog/config
    equip.cfg
    m200admn.cfg
    m200admn.gas
    m200gas.cfg
    m200admin.gas
    hole.pro
    pipe.pro
    case.pro
    pumps.cfg
    calibs.cfg
    analog.cfg
    digital.cfg
    ratios.cfg
    units.cfg

3:/datalog/chrom_dat
    any chromatograms saved

3:/datalog/script
    ...any script, control and extra files used in well logs

3:/datalog/plots/data
    data files for any XYZ plots used

3:/datalog/plots and 3:/datalog/text
    display.txt, edits.txt, plots.txt, channels.txt

3:/user/<NAME>
    units.cfg
    user_dp.cfg

3:/datalog/trips
    any trip files saved

3:/datalog/cbm
    morning reports (if applicable)

3:/user/..../
    any well data files or reports

The best way to store well data is by compressing the files first and then by backing up the data to floppy disks.
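As a minimal sketch of building and backing up the single archive (wellname.zoo is an example name; repeat the zoo ah line for every file in the list above):

zoo ah wellname.zoo 4:/datalog/dbms/dbdepth.qlog
zoo ah wellname.zoo 3:/datalog/config/equip.cfg
...
cp wellname.zoo 1:/                        (keep an eye on the 2880 block floppy limit, as for the time files)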

At the end of the well, you will therefore have the following sets of disks:

Set 1 - Archived time files
Set 2 - One archive file containing all the well files (as above)
Set 3 - One archived file containing the mudlog
Set 4 etc - Any other logs printed to file
