
SeisSpace System Administration

Release Notes

May 2005


Configuring SeisSpace
Overview
Once you have logged in as an administrative user, you can start the
Administrator and select the Administrative host.
In a cluster environment with manager nodes, login nodes, and processing
nodes, you may need to start additional workmanagers.
For example, if your cluster is configured so that you intend to log in to the
manager node and run the Navigator from there, then at this point you only
need the workmanager on the manager node. If you plan on having users
log in to one or two nodes designated as login nodes, where the GUIs and
interactive jobs will run, then you need to start a workmanager on those
login nodes as well. The goal is for the administrative host list to be the list
of hosts where you intend to log in to run the GUIs and interactive jobs.
To start the workmanagers on the login nodes, run the
...../apps/bin/workmanager script on those nodes. You can use your
"parallel shell" command to do this, or you can log in to each node and
type the ".../apps/bin/workmanager start" command on the command line.
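For example, here is a minimal sketch of starting them from the manager node, assuming the installation lives under /apps and the login nodes are named login01 and login02 (these names and the path are placeholders; substitute your own):

# start a workmanager on each login node; rsh can be replaced by your "parallel shell" command
for node in login01 login02; do
    rsh $node /apps/bin/workmanager start
done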


Selecting the Administrative Host


1. Click the Administrator icon.

The Administrator window appears:

2. Click the host list and select your Administrative host. The list displays
all machines with currently active workmanagers.
3. Select the host where you plan to log in and where you want to run
local interactive jobs.
4. Use the following Wizards (from left to right) to configure SeisSpace:
Cluster Wizard
Data Home Wizard
User Wizard
The wizards add information to the netdir.xml file located in
$PROWESS_LOGDIR and create some project (or data) directories.
Note that the Admin Tool also provides all the functionality of the
Wizards. See Administrator Tool for more detail.

General Wizard Use


You can navigate through the wizards using the following buttons:


Back returns you to the previous wizard panel.


Next performs the current action and goes to the next panel.
Finish performs the current action and exits the wizard.
Cancel exits from the wizard without performing an action.

Cluster Wizard
The Cluster Wizard leads you through the configuration of a Linux cluster
and defines the Virtual File System (VFS) disk resources for SeisSpace
datasets. The text at the top of each panel gives a brief description of the
wizard and its use. Additional help is available from the pulldown menus
on each window.
1. Click Cluster Wizard.
Click Next > in the Cluster Wizard introduction screen.
2. Select Yes, Add Nodes to build a list of node names on your cluster.
You can later group these nodes into a variety of different subsets that
define which nodes of the cluster jobs are submitted to.
Click Next >.


3. To populate the list of host names, use the Numeric Pattern and
Numeric Range options, manually add them, or import the list from an
ASCII file.

Select ASCII Import to import a file containing a list of node names.


You can use a file created from ASCII Export. You can also use ASCII
Export to transfer this information to an ASCII file to use the next time
you need this information.
The format of node names is a list, such as:
host01
host02
host03
host04
Select Automatically pad with zeros to ensure that node names have the
same number of digits when there are more than 9 nodes. In the above
example, we padded with one zero digit to allow up to 99 nodes. Enter
a pad number and toggle Automatically pad with zeros on and off.
You should see your change in the Available Hosts field.


Click Add >>> to transfer the selected hosts from Available Hosts to
Hosts in Cluster and to the database.
Click Next >.
4. Select Yes, Add Job Clusters to group subsets of your cluster as a
single Job Cluster. (Job Clusters are groups of compute nodes (hosts)
on your cluster that are used for submitting jobs.)
For example, if you expect to submit a lot of jobs to 4-node subsets of
your cluster and you have a 16-node cluster, you can alias node1, node2,
node3, and node4 to a name like nodes1-4.
Click Next >.
5. Enter a Cluster Name (such as supcl1-4) and select a foreman node.
The foreman node runs any computation or operation that cannot be
distributed. Choose a node other than the sitemanager node; that is, a
node that is less busy. You will generally want to exclude the manager
node and the login nodes from cluster aliases.
Select the cluster nodes from the Available Cluster Hosts and add
them to the Host in Job Clusters column.
Select all by clicking on supcl1, pressing Ctrl A, and clicking Add >>>.
Select individual entries with Ctrl MB1 or a range with Shift MB1.


Click Next >.

Select Yes to add more Cluster Names (or aliases) or No when finished.
Click Next >.
6. Select Yes, Add Virtual File Systems to specify VFS directories
where your SeisSpace data will reside. Later, you will associate the
VFS directories with each SeisSpace Data Home.
7. Select a Centralized VFS (a single large data server on your network)
or a Distributed VFS (a series of disks or file systems on a large RAID
array or disks local to your compute nodes).
Click Next >.


8. Select VFS Basenames and Paths. In our example we made a VFS that
uses the /export/d01 file system on each of the four nodes for storage.
To do this we:
Add all four Available Hosts to the Basename Hosts column by
selecting them and clicking Add (the one under the Available Hosts
column). Make sure the Basenames specified include your network
paths.
In the VFS basename text field, enter /network/<host>/export/d01
(using the <host> button to insert the characters <host> in the pathname).
Enter a VFS ID and label. The ID is the name of the data directory.
The Label is the name of the directory created under the VFS ID. You
should use a project name that relates to the data. Click Add (the one
under VFS Label).
VFS Label ties distributed disks together. For example, Alaska or
Testing. The directories listed under VFS Paths are the directories that
are built.


Click Next >.


Select Yes to add more VFS Names for Cluster or No when finished.
Click Next >.
9. Select Add Scratch Directory to Network Directory (NetDir) to add
a scratch directory name to your network directory (NetDir). Landmark
recommends creating a scratch directory as a local path. This allows
you to use the scratchAdmin.sh script to create similar scratch
directories on all your nodes.
Toggle Make script file? and enter the location for the script file.
Scripts are automatically generated when you click Next>.
Run the scratchAdmin.sh script that we provide to create the
scratch directories on all of the selected nodes. Remember, root
needs rsh trusted privileges (rsh without a password); a quick check
is shown after step 11 below.


10. Click Next >.


11. Click Finish.
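Before you run the scratchAdmin.sh script from step 9, you can confirm that root has passwordless rsh access to the compute nodes. This is only a quick sketch; node001 is a placeholder for one of your own node names:

# should print the remote date without prompting for a password
rsh node001 date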

Data Home Wizard


This wizard leads you through a series of panels that add a project database
to the Network Directory. You will add SeisSpace projects or Data Homes
and ProMAX projects or Data Homes to the Network Directory.
1. Select Data Home Wizard from the Administrator icon.
Click Next >.
2. Select the type of project you are working with from the following
choices:
Add a SeisSpace Data Home to Network Directory
Add a ProMAX Data Home to Network Directory
Click Next >.


3. The default takes you to Add a SeisSpace Data Home to Network
Directory. Enter or select a pathname to the SeisSpace project you are
adding. This path should be accessible from all nodes in the cluster.
The pathname is analogous to ProMAX primary storage, and the Data
Home directories use a project/subproject hierarchy that is analogous to
the ProMAX Area/Line hierarchy in PROMAX_DATA_HOME. Note:
DO NOT specify the same directory as your ProMAX Data Home.
Click Next >.
4. Associate Scratch with project database.
Select a Project Database from the pulldown.
Select one or more scratch directories from the Available Scratch list.
Jobs run for this project will use the scratch directories specified for
temporary storage.
Click Add >>>.

Click Next >.


5. Associate VFS with Project Database. This is similar to specifying
the secondary storage partitions in ProMAX, but in SeisSpace a single
VFS directory may represent multiple directories on multiple disks.
Select a Project Database from the pulldown menu.
Select a VFS directory from Available VFS.


Click Add>>>. This moves the directory to Associated VFS. To
remove the directory, click Remove.
Select an Associated VFS directory from the list to associate to the
selected Project Database.
You can repeat the process to add more associated VFS directories.
However, if you add more than one directory you are prompted to
select a directory whenever you make a dataset. Landmark suggests
selecting one VFS to use as the default.

Click Next >.


6. Do you wish to add more Data Homes adds more project databases.
Select either:
No, finished adding Data Homes.
Yes, add more Data Homes.
Select Yes, add more Data Homes.
Click Next >.
7. What kind of Data Home do you want to add to the network
directory?
Select Add ProMAX Data Home to Network Directory to add a
ProMAX project database.
Click Next >.


8. Enter or select a PROMAX_DATA_HOME from the pulldown. This
path should be accessible from all nodes in the cluster.
Click Next >.
9. Associate Environment Variables with ProMAX Data Home adds
or changes ProMAX environmental variables, such as your
PROMAX_ETC_HOME and PROMAX_SCRATCH_HOME
(generally the minimum required entries).
Hint: PROMAX_MAP_COMPRESSION is often used and is required
if the Map files are compressed. You may also enter extended scratch
variables here.
Click New to add a new environmental variable.
Click Delete to remove the selected environmental variable.
Click Edit to change the selected environmental variable.

Select PROMAX_ETC_CONFIG_FILE_HOME.
Click Edit>.


10. Enter the network pathname for PROMAX_ETC_CONFIG_FILE_HOME,
or for PROMAX_ETC_HOME and PROMAX_SCRATCH_HOME.
11. Add any additional variables to define the ProMAX runtime
environment. Common additions might include:
PROMAX_ETC_HOME, PROMAX_SCRATCH_HOME,
PROMAX_SCRATCHX#_HOME (for extended scratch), and
PROMAX_MAP_COMPRESSION. If you plan to access data from a
tape catalog you will need to set PROMAX_TOPCAT_HOME. You are
not required to add PROMAX_HOME and LM_LICENSE_FILE here.
Click OK>.
Click Next>.
Click No, finished adding Data Homes when you are through adding
projects.
Click Next>.
Click Finish in the Wizard window.

Verifying Projects in the Navigator


In the Navigator, click on the hosts folder and then click on the Refresh
icon, second icon from the left. Scroll down the host tree to see the
ProMAX and SeisSpace projects listed under the host. Within ProMAX
projects you can see all the AREAS, LINES, Flows, Tables and Datasets.


User Wizard
This wizard allows you to add, edit, or remove users, passwords, and
privileges.
A new user must be a valid user on the host where the sitemanager and
workmanagers are running, and the user must exist in the /etc/passwd
file. However, the SeisSpace password does not have to be the same as
the Unix password.

Adding users
Click Next > in the wizard splash screen to add, edit, or remove users,
passwords, and privileges.
To remove a user, select a username from the Edit/Remove User
pulldown and click Remove.
To edit a password, select a username from the Edit/Remove User
pulldown and click Set Password.
To edit privileges, select a username from the Edit/Remove User
pulldown and click Change Actions.
The initial login user, useradmin, should only be used the first time you
use the User Wizard.
All users must be valid Unix/Linux users.
Any user with Administrator privileges is allowed to add new users, and set
up clusters and projects.

Adding the System Administrator


To add the SeisSpace system administrator, enter a username such as
ssadmin, and a password. Verify the password, select the Administrator
checkbox, and click Add. NOTE: this user must be a valid user on the
system with entries in the /etc/passwd and shadow files.
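A quick way to confirm that the account exists on the system is to check the password database. This is only a sketch; ssadmin is the example username used above, so substitute your own:

grep '^ssadmin:' /etc/passwd
id ssadmin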


Adding other users


Add another user, such as prouser2, a password, and uncheck the
Administrator checkbox. Continue adding your SeisSpace users. When
finished, use the Edit/Remove User pulldown to see a list of the current
users.
Click Next> and Finish when you have completed adding users.
Select File>Exit to logoff.

Administrator Tool
The Administrator tool allows you to add, delete, or modify the system
configuration information and verify the entries made in the previous
wizards.


Administrator Tabs
The following tabs allow you to select the type of information to work
with:
Host/User
Cluster
VFS
Scratch
SQL
SeisSpace Data Homes
ProMAX Data Homes
Queues
Tree


Hosts/User

The Host/User tab is used to add additional hosts (or nodes) to the existing
configuration, copy the entire configuration from one administrative host to
another, and perform user administration.
The host copy function is used when you add a second cluster with a
separate manager node and you clone the configuration from one cluster to
another.


Cluster

The Cluster tab allows you to manage the cluster alias lists generated in the
Cluster wizard. You can review the existing cluster alias definitions, add
more, or delete unneeded ones.


VFS

The VFS tab allows you to manage the Virtual File System lists generated
in the Cluster wizard. You can review the existing VFSs, add more, or
delete unneeded ones. Use this tab to remove existing VFS directories or
delete the contents of the VFS from disk.
Note: It is easier to use the Cluster wizard to create new VFSs than to use
the VFS tab of Administrator tool.
The Info option shows which file systems are incorporated into the VFS
and how much disk space is available.


Scratch

The Scratch tab allows you to manage the scratch directory lists generated
in the Cluster wizard. You can review the existing scratch directory
definitions, add more, or delete unneeded ones.
Note: This tab only affects the contents of the netdir file. It does not add or
remove directories from disk.


SQL

The SQL tab allows you to manage the SQL databases for flow replication.
In this dialog, you can create and delete databases. The options dump,
restore and convert are not implemented.


SeisSpace Data Home

The SeisSpace Data Home tab allows you to manage the SeisSpace Data
Home definitions generated in the Data Home Wizard. You can review the
existing SeisSpace Data Home settings and add new definitions or delete
unneeded ones.
This tab can also be used to associate new VFSs to existing Data Homes,
disassociate unneeded VFS, and manage the scratch directories assigned to
SeisSpace Data Homes. It is also used to associate a SQL database to the
project for flow replication.
Note: After you make changes to the SQL, VFS and Scratch settings, you
must click the corresponding Associate buttons to save the changes for
each Data Home.


ProMAX Data Home

The ProMAX Data Home tab allows you to manage the ProMAX Data
Home settings and environment variable definitions generated in the Data
Home wizard. You can review the existing ProMAX Data Homes and their
associated environment variable lists, add more Data Homes, or delete
unneeded ones.
This tab is also used to associate a SQL database to the Data Home for flow
replication.
Note: After you make changes to the SQL settings, you must click the
Associate button to save the changes for each Data Home.


Queue

The Queue tab allows you to manage the list of Queue Directives; there is
no wizard to manage the Queue directives. You can review the existing
Queue directive definitions, add more, or delete unneeded ones.

Queue Directives
The SeisSpace Navigator/Flowbuilder only understands and supports the
PBS/Torque queues. The standard Queue directive settings used are:
#PBS -S /bin/sh
#PBS -N qflowname
#PBS -l nodes=1:ppn=1 (as entered in the job submit GUI)
#PBS -o PROMAX_DATA_HOME/AREA/LINE.FLOW/exec.#.log.out
#PBS -e PROMAX_DATA_HOME/AREA/LINE/FLOW/exec.#.log.err


To add a Queue directive called PPN1 that specifically requests 1 CPU on
one node, enter:
#PBS -l nodes=1:ppn=1

To alter the nice value, set up a Queue directive called NICE5, for example:
#PBS -l nice=5

Segregating clusters
To segregate a cluster so that SeisSpace jobs run on one set of nodes and
ProMAX jobs run on another set of nodes, follow these steps:
1. In the /usr/spool/PBS/server_priv/nodes file, add a property to each
node on which PBS can spawn a ProMAX job, and a different property
to each node reserved for SeisSpace jobs. Here is an example nodes file:
n26 ss np=2
n27 ss np=2
n28 ss np=2
n29 pmx np=2
n30 pmx np=2
n31 pmx np=2
n32 pmx np=2

Nodes n26-n28 have the property ss and nodes n29-n32 have the
property pmx.
2. As root, restart the PBS server and scheduler.
service pbs restart

3. Add a Queue directive called ProMAXQ:
#PBS -l nodes=1:pmx:ppn=1

4. Add a Queue directive called SeisSpaceQ:
#PBS -l nodes=1:ss:ppn=1
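
To confirm that the node properties were picked up after the restart, you can list the nodes as the PBS/Torque server sees them. This is only a quick check; pbsnodes is installed with the other PBS commands (by default in /usr/local/bin):

/usr/local/bin/pbsnodes -a
# each node entry should show a line such as "properties = ss" or "properties = pmx"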

Tree
The Tree tab allows you to review the configurations for all Data Homes
and administrative hosts that have been defined in the netdir file. This is a
summary view without editing capabilities. You can also use a text editor to
review the netdir.xml file.


Navigate the tree by selecting the Host folder and then opening the
database, data directory, and scratch directory entries. Once opened, these
folders should show the paths to all the directories you configured in the
Wizards.
Click File > Disconnect to exit from the Administrator Tool.
Click File > Exit to exit from the Administrator.


Managing Batch Jobs using PBS queues


Managing batch jobs via queues provides the following benefits:
sequential release of serially dependent jobs
parallel release of groups of independent jobs
optimized system performance by controlling resource allocation
centralized management of system workload

Introduction to Batch Job Queues


PBS is the only queueing system that is supported by SeisSpace:
PBS is the Portable Batch System. It is a flexible batch processing
and resource management system for networked, multi-platform
Linux and UNIX environments. PBS is a public license queuing system. We tested against version OpenPBS_2_3_16. You can download
this version from www.openpbs.com or www.openpbs.org after registering on the website and receiving email confirmation. There is no
charge for OpenPBS.
Note: There are two versions of PBS: OpenPBS and PBSPro. At this
time we only use OpenPBS. There is also a very similar and more
robust queuing system called TORQUE that is based on OpenPBS.
See below for more information.
Portable Batch System queueing software (OpenPBS) was developed
originally for NASA as a replacement for NQE. Landmark does not
distribute OpenPBS. You can read more information about OpenPBS and
download it from www.openpbs.org. We include installation and
configuration instructions in this document.
We are aware of another queuing system called TORQUE. We have done
limited testing using TORQUE. This package can be downloaded from
http://www.supercluster.org/projects/torque. TORQUE is built upon
OpenPBS. It is a very active OpenSource program, with frequent updates,
bug fixes and active user support. Reportedly, it is easier to install,
especially for 64-bit systems and has more robust features and capabilities
than OpenPBS. The look and feel of installing and configuring TORQUE is
similar to, but easier than, OpenPBS. Landmark suggests using Torque
instead of PBS on 64-bit Linux installations.


Configuring PBS/Torque queues


PBS/Torque queues are the only queues supported on the Linux platform.
On Solaris, SeisSpace likewise works only with PBS/Torque.

Tips for OpenPBS / Torque queue


PBS queues poll the server once a minute by looking at the CPU utilization as reported by cat /proc/loadavg. If many jobs are input at one
time into the queue when the machine is idle, the queue can release
too many jobs before that node refuses any more work.
We suggest the following configuration so that too many jobs are not
released at the same time. You specify the number of available nodes
and CPUs per node in the /usr/spool/PBS/server_priv/nodes file.
Each job is submitted to the queue with a request for a number of
CPU units. The default for ProMAX jobs is 1 node and 1 CPU or 1
CPU unit. That is, to release a job, there must be at least one node that
has 1 CPU unallocated.
There can be instances when jobs do not quickly release from the
queue although resources are available. It can take a few minutes for
the jobs to release. You can change the scheduler_iteration setting
with qmgr (see the example after these tips). The default is 600 seconds
(10 minutes); we suggest a value of 30 seconds. Even with this, we have
seen dead time of up to 2 minutes, because it can take some time for the
loadavg to fall after the machine has been loaded.
By default, PBS installs itself into the /usr/spool/PBS, /usr/local/bin
and /usr/local/sbin directories. Always address the PBS qmgr by its
full name of /usr/local/bin/qmgr. The directory path /usr/local/bin is
added to the PATH statement inside the PBS queue management
scripts by setting the PBS_BIN environment variable. If you are
going to alter the PBS makefiles and have PBS installed in a location
other than /usr/local, make sure you change the PBS_BIN environment setting in the ProMAX sys/exe/pbs/* and PBS_HOME in the
SeisSpace port/bin/workmanager scripts.
Run the xpbs and xpbsmon programs, located in the /usr/local/bin
directory, to monitor how jobs are being released and how the CPUs
are monitored for availability. Black boxes in the xpbsmon gui indicate that the "mom" is down on that node. It is normal for nodes to
show as different colored boxes in the xpbsmon display. This means
that the nodes are busy and not accepting any work. You can also
modify the automatic update time in the xpbsmon display. However,
testing has shown that the automatic updating of the xpbs display
may not be functioning.
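
As a sketch of the scheduler_iteration change mentioned in the tips above, the setting can also be applied non-interactively with qmgr (using the full pathname, as recommended above):

/usr/local/bin/qmgr -c "set server scheduler_iteration = 30"
/usr/local/bin/qmgr -c "print server"     # verify the new value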

LP and NQS have an option that allows any user to bring the queues
up and down via the ProMAX Queues window. This is not available
for PBS unless the user that is running ProMAX is listed as a manager of the queues via the qmgr command.
Landmark suggests that you read the documentation for OpenPBS
registered users available at www.openpbs.org. This document
includes more information about the system and ways to customize
the configuration.
PBS requires that you have the hostnames and IP addresses of all the
nodes in the hosts files of all the nodes.
Note: hostname is the name of your machine; hostname.domainname
can be found in /etc/hosts and commonly ends with .com:
ip address hostname.domain.com hostname

For DHCP users, ensure that all of the processing and manager nodes
always get the same ip address.
Landmark presents one option of many that can be used to install and
configure OpenPBS Job Queues. For a successful installation the following
must exist:
Make PBS the queuing capability for a cluster environment. That is,
work is automatically distributed over the cluster as nodes are available.
Install PBS on all nodes of the cluster. The installation can be done
on each machine independently, or you can use a file system mounted
from the manager node, which may be easier.
Install all components including the server and scheduler on one
node. This is known as the server node and serves the other main processing nodes. Normally this will be the cluster manager node.
The following files must be the same on all installations on all
machines:
/usr/spool/PBS/server_name
/usr/spool/PBS/mom_priv/config
These files are only used by the server and scheduler on the manager
machine:
/usr/spool/PBS/server_priv/nodes
/usr/spool/PBS/sched_priv/sched_config


Downloading and Configuring OpenPBS for 32 bit installations


Landmark does not distribute PBS software due to PBS licensing
restrictions. However, you can download the software package from
www.openpbs.org and follow the downloading instructions. If the
complete/custom Linux installation was completed as recommended in the
release notes then you should have everything else that you need.
Landmark installed and tested OpenPBS_2_3_16.tar.gz. We suggest that
you download the files on a machine that will host the queue server and can
be seen by all of the other nodes of the cluster. (PBS will need to be built
and configured on all nodes of the cluster but with different build flags).
In this example, there is a filesystem on the queue server of the cluster
called /export/d01. The rest of the cluster has node names node001 through
node0xx incrementing by 1. These nodes can see the disk on the manager
by an automount pathname similar to /network/manager/export/d01.
1. cd <some directory of your choice> (/export/d01 in our example)
2. mkdir PBS
3. cd PBS
4. Download the OpenPBS_2_3_16.tar.gz file into this directory. Again,
you can find this file on the Internet using a search engine. DO NOT
ATTEMPT TO USE AN OpenPBS RPM file since these files are
incomplete.
5. gunzip -c OpenPBS_2_3_16.tar.gz | tar xpvf -
This command uncompresses and untars the file and creates the
OpenPBS_2_3_16 directory.

Red Hat Enterprise WS 3.0 U1 U2 U3 or U4 32 bit Pre-install Configuration


For the Red Hat Enterprise WS 3.0 32 bit and associated update releases
you need to perform some customization steps before proceeding with
setting up the queue server. These steps generate include files which the
PBS configure script requires.
The include files come from tcl-devel-8.3.5-92.2.i386.rpm and
tk-devel-8.3.5-92.2.i386.rpm, which are generated from
tcltk-8.3.5-92.2.src.rpm. The *devel* rpms can be used when building
the PBS mom on the compute nodes of the cluster.
Login to the queue server as root and type the following commands in a
shell window. In this case, root is running ksh or bash. All italic entries are
values unique to your system or names.


1. Download the tcltk-8.3.5-92.2.src.rpm from your favorite mirror site, or
install it from the source CD. A suggested mirror site is:
http://linuxsoft.cern.ch/cern/cel3/SRPMS/
2. rpm -Uhv tcltk-8.3.5-92.2.src.rpm
3. rpmbuild --rebuild tcltk-8.3.5-92.2.src.rpm
4. cd /usr/src/redhat/RPMS/i386
5. rpm -Uvh tcl-devel-8.3.5-92.2.i386.rpm
6. rpm -Uvh tk-devel-8.3.5-92.2.i386.rpm
7. cd /usr/lib; ln -s ./libtkx8.3.so ./libtkx.so
Login to the queue server (as root) and type the following commands in a
shell window. In our case, root is running ksh or bash. All italic entries are
values unique to your system or names.
8. cd <path>/OpenPBS_2_3_16/buildutils
Note: If you are installing onto a Red Hat Enterprise WS 3.0 or
associated update system, you need to perform the following items
before running the configure step.
Replace the .../buildutils/exclude_script file by copying the
exclude_script from $PROMAX_HOME/port/misc/pbs. This script
contains the following additional lines:
+/ \<built-in>//d
+/ \<command//d

9. cd <path>/OpenPBS_2_3_16
10. Set the PROMAX_HOME environment variable: export
PROMAX_HOME=<path to ProMAX installation>

11. patch -p1 < $PROMAX_HOME/port/misc/pbs/pbs_update

Red Hat Enterprise WS 3.0 U4 64bit Pre-install Configuration


For the Red Hat Enterprise WS 3.0 64 bit and associated update releases
you need to perform some customization steps before proceeding with
setting up the queue server. This is the one instance where Landmark
currently recommends using Torque instead of PBS. There are still some
changes that need to be made to the base Torque installation before you try
to configure and make the torque/pbs queue executables. There are no
differences in the results between Torque and PBS; they both write exactly
the same files to the same places by default.
Download the torque tar file from the web. If you have trouble locating the
file please contact ProMAX support and they can help you locate it. The


file you need is torque-1.2.0p2.tar.gz. Perform the same gunzip and tar
steps as you would have done for PBS.
1. gunzip -c torque-1.2.0p2.tar.gz | tar xpvf -
2. cd to the torque-1.2.0p2 directory
3. On 64-bit systems you will need to edit the configure script to point
to the lib64 library directories instead of lib, and to point to the
correct version of tcl and tk loaded on the machine. Here is a diff
output from the machine that was used for testing to enable the
configure and make steps. A patch has been posted on the Torque web
site to take care of making these changes.
[root@h1 torque-1.2.0p2]# diff configure.orig configure
995c995
<   count=`/bin/ls ${tcl_dir}/lib/libtk* 2> /dev/null | wc -l`
---
>   count=`/bin/ls ${tcl_dir}/lib64/libtk* 2> /dev/null | wc -l`
1042c1042
<   count=`/bin/ls -d $TCL_DIR/lib/libtcl${TCL_LIB_VER}.* 2> /dev/null | wc -l`
---
>   count=`/bin/ls -d $TCL_DIR/lib64/libtcl${TCL_LIB_VER}.* 2> /dev/null | wc -l`
1045c1045
<   count=`/bin/ls $TCL_DIR/lib/libtcl${TCL_LIB_VER}.* | wc -l`
---
>   count=`/bin/ls $TCL_DIR/lib64/libtcl${TCL_LIB_VER}.* | wc -l`
1083c1083
<   count=`/bin/ls $TCL_DIR/lib/libtk${TK_LIB_VER}.* 2> /dev/null | wc -l`
---
>   count=`/bin/ls $TCL_DIR/lib64/libtk${TK_LIB_VER}.* 2> /dev/null | wc -l`
1086c1086
<   count=`/bin/ls $TCL_DIR/lib/libtk${TK_LIB_VER}.* | wc -l`
---
>   count=`/bin/ls $TCL_DIR/lib64/libtk${TK_LIB_VER}.* | wc -l`
1100c1100
<   TCL_LIBS="$TCL_LIBS -L$(TCLX_DIR)/lib"
---
>   TCL_LIBS="$TCL_LIBS -L$(TCLX_DIR)/lib64"
1102c1102
<   TCL_LIBS="$TCL_LIBS -ltclx -ltkx"
---
>   TCL_LIBS="$TCL_LIBS -ltclx8.3 -ltkx8.3"


1109c1109
<   TCL_LIBS="$TCL_LIBS -L$(TCL_DIR)/lib"
---
>   TCL_LIBS="$TCL_LIBS -L$(TCL_DIR)/lib64"
1281c1281
<   count=`/bin/ls $d/lib/libX11* 2> /dev/null | wc -l`
---
>   count=`/bin/ls $d/lib64/libX11* 2> /dev/null | wc -l`
1283c1283
<   X11LIB="-L$d/lib"
---
>   X11LIB="-L$d/lib64"

Solaris Preinstall Configuration


In testing we used Torque for Solaris. The torque-1.2.0p2.tar.gz file was
downloaded from the Torque web site. In the Landmark test environment the
target machine did not have the gcc compilers loaded to run the configure
and make steps. These steps were run on the development server that did
have the compilers installed, but we had to make sure we pointed to the
right version of the compilers. After the configure and make were run on
the development server, the "make install" was run on the target test
machine. In this test installation we also did not have tcl/tk available, so we
did not attempt to make the GUIs.
On the development server:
1. cd <some directory of your choice> (/export/d01 in our example)
2. mkdir PBS
3. cd PBS
4. Download the torque-1.2.0p2.tar.gz file into this directory. Again, you
can find this file on the internet using a good search engine.
5. gunzip -c torque-1.2.0p2.tar.gz | tar xpvf -
This command uncompresses and untars the file and creates the
torque-1.2.0p2 directory.
6. Make a temporary build script called "doit", for example:
#!/bin/ksh -xv
CC=/opt/SUNWspro/bin/cc ./configure --enable-docs --enable-server --enable-mom


NOTE that this example does not include the building of the xpbsmon or
xpbs guis.


7. Run this script in this directory to run the configure step.
8. Run "make" to build the executables.
9. Make a tar file of this directory.
On the target test server:
1. cd <some directory of your choice> (/export/d01 in our example)
2. mkdir -p PBS/torque
3. cd PBS/torque
4. Copy the tar file from the server where the configure and make were
run, and untar it:
5. tar -xvf xxxxx.tar
6. Run "make install" (this is step 5 below).
Continue with step 6 below to complete the queue setup.

All installations:
A typical cluster installation will need two different configurations built:
the first for the queue server, which will consist of the queue server and
scheduler, and the second for the processing nodes, which will need only
the "mom" component of the queue. The steps are very similar for each
build, but the configure command is different.
1. mkdir server
2. cd server
3. ../configure --enable-docs --enable-server --disable-mom --with-tclx=/usr --enable-gui

If the manger node will also be a compute node, or if this is a single


workstation installation, use the --enable-mom option. That is,
../configure --enable-docs --enable-server --enable-mom --with-tclx=/usr
--enable-gui

4. make
This builds the executables.
5. make install
This distributes the executables into the requested installation
directory which by default is /usr/local/bin and /usr/local/sbin. You
can change the location by editing the makefile and the ProMAX
sys/exe/pbs and the SeisSpace workmanager scripts.


6. cd /usr/spool/PBS
7. vi server_name

The server node name is the name of the machine where the scheduler and
server will be running. We gave it the name manager. (The configure and
make steps should put the local machine name in this file and in general
you should not have to change it.)
8. vi server_priv/nodes
manager ntype=cluster np=2
node001 ntype=cluster np=2
...
node0xx ntype=cluster np=2

Note: If the server node will also be a compute node, include it in the
list; if not, just list the processing nodes.
The np=2 setting says that each node has 2 CPUs. If you are
configuring a low-performance machine such as a laptop with a single
CPU, set np=1.
You can also optionally set queue properties in this file. For example,
you can add a name in the line to direct jobs of a specific type to
specific nodes. In this case, seisspace and promax.
node001 ntype=cluster np=2 promax:seisspace
node002 ntype=cluster np=2 promax
node003 ntype=cluster np=2 promax
node004 ntype=cluster np=2 seisspace
node005 ntype=cluster np=2 seisspace

In this example, ProMAX jobs can be set to run only on nodes 1, 2, and
3, and SeisSpace jobs can be set to run on nodes 1, 4, and 5.
You define which property to assign a job when you queue the job up,
by adding the property in the queue submit menu that you get after
you select the queue name from the ProMAX UI.
9. vi mom_priv/config
# log all but debug events
$logevent 255
$max_load 3
$ideal_load 2
$clienthost manager


Landmark suggests these settings for the max_load and ideal_load
values. You are free to adjust these as necessary for your installation.
10. cd /usr/local/sbin
11. ./pbs_mom (optional on manager)
This starts the mom if you used the --enable-mom option. If the mom is
not enabled, do not do this step. You will need to do this step for single
workstation installations.
12. ./pbs_server -t create
This starts the server.
Caution: The -t create flag deletes everything that is currently in the
configuration. If this is your first time through the installation, or you
need to start over, use this option.
13. ./pbs_sched
This starts the scheduler.
14. cd /usr/local/bin
15. ./qmgr
16. Type the commands as shown after the Qmgr: prompt. The names
interactive, single, parallel, and short are suggestions for the queue
names. You may choose fewer or more queues, or use names like
serial or multiple.
Qmgr: create queue interactive queue_type=execution
Qmgr: c q single queue_type=e
Qmgr: c q parallel queue_type=e
Qmgr: c q short queue_type=e
Qmgr: set queue interactive enabled=true, started=true
Qmgr: s q single enabled=true, started=true,max_running=1
Qmgr: s q parallel enabled=true, started=true
Qmgr: s q short resources_max.cput=59,enabled=true,started=true
Qmgr: set server scheduling=true
Qmgr: s s default_queue=parallel
Qmgr: s s managers="user_name@*" for example (Fred@*)
Qmgr: s s acl_hosts="*.xxxxx.com"
Qmgr: s s node_pack=false
Qmgr: s s query_other_jobs=true
Qmgr: s s scheduler_iteration=30
Qmgr: s s max_user_run=the number of jobs to allow per user, for example:
max_user_run=10


17. The following Qmgr commands are used to QC the contents of the
queue database:
Qmgr: print server
#
# Create queues and set their attributes.
#
#
# Create and define queue single
#
create queue single
set queue single queue_type = Execution
set queue single max_running = 1
set queue single enabled = True
set queue single started = True
#
# Create and define queue parallel
#
create queue parallel
set queue parallel queue_type = Execution
set queue parallel enabled = True
set queue parallel started = True
#
# Create and define queue short
#
create queue short
set queue short queue_type = Execution
set queue short resources_max.cput = 00:00:59
set queue short enabled = True
set queue short started = True
#
# Set server attributes.
#
set server scheduling = True
set server acl_hosts = *.denver.lgc.com
set server managers = user1@*
set server default_queue = parallel
set server log_events = 511
set server mail_from = adm
set server query_other_jobs = True
set server scheduler_iteration = 30
set server node_ping_rate = 300
set server node_check_rate = 600
set server tcp_timeout = 6
set server node_pack = False
set server job_stat_rate = 30
Qmgr:
Qmgr: list node h1,a1,e1
Node h1
state = state-unknown,down


np = 2
ntype = cluster
Node a1
state = state-unknown,down
np = 2
ntype = cluster
Node e1
state = state-unknown,down
np = 2
ntype = cluster
Qmgr:

If you do not get something similar to the above listing:
Check for each node in the /etc/hosts file.
Check that the host id and ip address of each node is included.
Check the nodes in the /usr/spool/PBS/server_priv/nodes file to
make sure that all the compute nodes are included in the list.
Kill and rerun /usr/local/sbin/pbs_mom, pbs_server, and pbs_sched.
Qmgr: q or quit

The following commands stop and then restart the server with the new
configuration. You can also change properties of the server using qmgr
without having to restart the server.
Caution: Jobs running from the queue fail if you arbitrarily restart the
server.
18. Stop the pbs_server. On Linux you can use the "killall pbs_server"
command. On Solaris, do a "ps -ef | grep pbs", find the process id,
and use the kill command to stop the process (see the sketch after
step 19).
19. ./pbs_server
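
For the Solaris case in step 18, here is a minimal sketch of finding and stopping the server process:

ps -ef | grep pbs_server | grep -v grep
kill `ps -ef | grep pbs_server | grep -v grep | awk '{print $2}'`
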
Setting up the compute nodes
The following steps set up the base directory where the compute nodes are
installed. In our example, you are only building the moms. This step is not
necessary if you are doing a single workstation setup where all three
components will run on the single workstation and were configured in the
previous steps
On the queue server:


1. cd /export/d01/PBS/OpenPBS_2_3_16
2. mkdir client
3. cd client
4. ../configure --enable-docs --disable-server --enable-mom --with-tclx=/usr
5. make

On the processing nodes (node001-node0xx), install, edit the configuration
files, and start the mom daemon using the following steps:
1. cd <path>/PBS/OpenPBS_2_3_16/client
2. make install
3. cd /usr/spool/PBS
4. vi mom_priv/config (as for manager)
The mom_priv/config files must be identical on all machines.
5. vi server_name
Enter the name of the queue server node (manager).
6. /usr/local/sbin/pbs_mom
7. Confirm that the queue is working. Set your DISPLAY environment
variable, set PROMAX_HOME and PROMAX_ETC_HOME (if
applicable), and set PATH to include PROMAX_HOME/port/bin and
PROMAX_HOME/sys/bin (see the sketch below).
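
Here is a minimal sketch of the environment settings for step 7; the installation path /apps/promax and the display host are placeholders for your own values:

export DISPLAY=yourworkstation:0
export PROMAX_HOME=/apps/promax
export PROMAX_ETC_HOME=/apps/promax_etc    # if applicable
export PATH=$PROMAX_HOME/port/bin:$PROMAX_HOME/sys/bin:$PATH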


Starting PBS at boot on Linux


To start PBS daemons when the machines boot up, use one of the following
two pbs script(s):
Starting pbs_server, pbs_sched, and pbs_mom for Linux
Starting pbs_mom Only
Starting pbs_server, pbs_sched, and pbs_mom for Linux
The following /etc/init.d/pbs script starts pbs_server, pbs_sched, and
pbs_mom for Linux:
#!/bin/sh
#
# pbs          This script will start and stop the PBS daemons
#
# chkconfig: 345 85 85
# description: PBS is a versatile batch system for SMPs and clusters
#
# Source the library functions
. /etc/rc.d/init.d/functions

# let see how we were called


case "$1" in
start)
echo "Starting PBS daemons: "
if [ -x /usr/local/sbin/pbs_server ] ; then
chmod 755 /usr/spool/PBS/spool
echo -n "Starting pbs_server: "
daemon /usr/local/sbin/pbs_server
chmod 777 /usr/spool/PBS/spool
echo
fi
if [ -x /usr/local/sbin/pbs_sched ] ; then
chmod 755 /usr/spool/PBS/spool
echo -n "Starging pbs_sched: "
daemon /usr/local/sbin/pbs_sched
chmod 777 /usr/spool/PBS/spool
echo
fi
if [ -x /usr/local/sbin/pbs_mom ] ; then
chmod 755 /usr/spool/PBS/spool
echo -n "Starting pbs_mom: "
daemon /usr/local/sbin/pbs_mom
chmod 777 /usr/spool/PBS/spool
echo
fi
;;
stop)


echo "Shutting down PBS: "


if [ -x /usr/local/sbin/pbs_server ] ; then
echo -n "Stopping pbs_server: "
killproc pbs_server
echo
fi
if [ -x /usr/local/sbin/pbs_mom ] ; then
echo -n "Stopping pbs_mom: "
killproc pbs_mom
echo
fi
if [ -x /usr/local/sbin/pbs_sched ] ; then
echo -n "Stopping pbs_sched: "
killproc pbs_sched
echo
fi
;;
status)
status pbs_server
status pbs_mom
status pbs_sched
;;
restart)
echo "Restarting PBS"
$0 stop
$0 start
echo "done."
;;
*)
echo "Usage: pbs {start|stop|restart|status}"
exit 1
esac

Use chkconfig to start the selected daemons for initialization states 2, 3, 4,
and 5. As root, type:
chkconfig pbs on
chkconfig --list | grep pbs

The following confirmation appears:

pbs    0:off 1:off 2:on 3:on 4:on 5:on 6:off

Starting pbs_mom Only


For the processing nodes where only the mom will be running, use the
following /etc/init.d/pbs file:
#!/bin/sh
#


# pbs          This script will start and stop the PBS daemons
#
# chkconfig: 345 85 85
# description: PBS is a versatile batch system for SMPs and clusters
#
# Source the library functions
. /etc/rc.d/init.d/functions
chmod 755 /usr/spool/PBS/spool
# let see how we were called
case "$1" in
start)
echo "Starting PBS daemons: "
if [ -x /usr/local/sbin/pbs_mom ] ; then
echo -n "Starting pbs_mom: "
daemon /usr/local/sbin/pbs_mom
echo
fi
;;
stop)
echo "Shutting down PBS: "
if [ -x /usr/local/sbin/pbs_mom ] ; then
echo -n "Stopping pbs_mom: "
killproc pbs_mom
echo
fi
;;
status)
status pbs_mom
;;
restart)
echo "Restarting PBS"
$0 stop
$0 start
echo "done."
;;
*)
echo "Usage: pbs {start|stop|restart|status}"
exit 1
esac
chmod 777 /usr/spool/PBS/spool


Starting PBS at boot on Solaris


Solaris uses init states or run levels to determine which processes to start at
boot time. All scripts for starting and stopping processes and daemons are
located in /etc/init.d, and they are linked to the appropriate /etc/rcX.d
directory, where X has a value of S or the numbers 0 through 6.
Become root, and then create the following file in the /etc/init.d directory.
The name of the file is arbitrary, but probably should include the letters
pbs.
vi pbs
#!/bin/sh
#
# pbs          This script will start and stop the PBS daemons
#
# description: PBS is a versatile batch system for SMPs and clusters
#
# let see how we were called
case "$1" in
start)
echo "Starting PBS daemons: "
if [ -x /usr/local/sbin/pbs_server ] ; then
chmod 755 /usr/spool/PBS/spool
echo -n "Starting pbs_server: "
daemon /usr/local/sbin/pbs_server
chmod 777 /usr/spool/PBS/spool
echo
fi
if [ -x /usr/local/sbin/pbs_sched ] ; then
chmod 755 /usr/spool/PBS/spool
echo -n "Starging pbs_sched: "
daemon /usr/local/sbin/pbs_sched
chmod 777 /usr/spool/PBS/spool
echo
fi
if [ -x /usr/local/sbin/pbs_mom ] ; then
chmod 755 /usr/spool/PBS/spool
echo -n "Starting pbs_mom: "
daemon /usr/local/sbin/pbs_mom
chmod 777 /usr/spool/PBS/spool
echo
fi
;;
stop)
echo "Shutting down PBS: "
if [ -x /usr/local/sbin/pbs_server ] ; then
echo -n "Stopping pbs_server: "
killproc pbs_server
echo
fi


if [ -x /usr/local/sbin/pbs_mom ] ; then
echo -n "Stopping pbs_mom: "
killproc pbs_mom
echo
fi
if [ -x /usr/local/sbin/pbs_sched ] ; then
echo -n "Stopping pbs_sched: "
killproc pbs_sched
echo
fi
;;
status)
status pbs_server
status pbs_mom
status pbs_sched
;;
restart)
echo "Restarting PBS"
$0 stop
$0 start
echo "done."
;;
*)
echo "Usage: pbs {start|stop|restart|status}"
exit 1
esac

Run-level 3 is the default multi-user run level for Solaris, so change
directories to /etc/rc3.d, and then, assuming the above script is named
pbs, create the following hard link:
# ln /etc/init.d/pbs S96pbs

This link will cause pbs to be started when run-level 3 is entered. The
number 96 in the hard link is somewhat arbitrary, but it should be in the
range of 80 to 99, so that all necessary processes will have been started by
the time you attempt to start pbs.
You want to kill pbs if you change to run-levels S, 0, or 1. So in the 3
directories, /etc/rcS.d, /etc/rc0.d, and /etc/rc1.d, add the following link:
# ln /etc/init.d/pbs K21pbs

When a run-level of S, 0, or 1 is entered, pbs will be terminated. Again, the
number 21 is arbitrary, but it should be in the range of 20 to 30, so that pbs
is stopped before file systems start to be unmounted.


Setting Up the MySQL Database


Flow replication values are stored in an SQL relational database. If you
have not previously set up MySQL for ProMANAGER, you will need to
install and configure MySQL.
If you already have MySQL setup for ProMANAGER you can proceed to
Managing the SQL databases.

Installing and configuring MySQL


We have included the MySQL files for Solaris on the ProMAX installation
CD, and they are automatically copied into your installation directory into
the $PROMAX_HOME/sys/bin/MySQL directory. MySQL for Linux is
available on the RedHat CDs or downloadable from the WEB. We suggest
you refer to the Release Notes for possible updated information.
MySQL uses a master database to control which users from which hosts
can connect to which working database. User databases will be added to
store the tables used for the flow replication data storage.
The data stored in the database are used in building flows, job status and
statistics, and environment definition information. This data requires no
more security than your ordinary data. It is the system administrator's
choice regarding what access control is implemented for MySQL. If you
wish to use higher levels of security, we refer you to the MySQL
documentation found at www.MySQL.com.
In this document we will describe the simplest configuration which
provides for general access to the information in the databases for all users.
The setup of the working database(s) depends on your production
environment. Production environments vary from single machine with a
single user to multiple process servers with multiple users at multiple
desktops. However, we only describe the case for one mysql database
server for any configuration.
The size of the MySQL database will depend on how much you use flow
replication. Small installations may need considerably less than one
gigabyte of disk space. Large production shops may require 5-10 gigabytes
or more. Consider your anticipated use of flow replication when deciding
whether to use the default database locations or to select a larger disk
partition for the database. The default database locations are called
/usr/local/mysql/data for UNIX and /var/lib/mysql for Linux. Check the
available space on those partitions to decide whether you should install the
database in the default location.
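
A quick way to check the free space is with df; the two paths below are the default locations mentioned above:

df -k /var/lib/mysql            # Linux default location
df -k /usr/local/mysql/data     # UNIX default location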


If disk space is a concern, the easiest solution is to move the MySQL
database directory to a larger partition, and then create a link from the
default location to the actual database directory location.
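
Here is a minimal sketch of that relocation on Linux, assuming the default /var/lib/mysql location and a larger /export/d01 partition (placeholders for your own paths); stop the server first if it is already running:

service mysqld stop
mv /var/lib/mysql /export/d01/mysql
ln -s /export/d01/mysql /var/lib/mysql
service mysqld start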

Setting up MySQL on LINUX


Verify whether the necessary packages are already installed by using the
following command:
rpm -qa | grep -i mysql

The following "rpm" packages are presumed to be in place for the


implementation of MySQL as described here.
libdbi-dbd-mysql-0.6.5.5
mod_auth_mysql-20030510-1.ent
mysql-3.23.58-1
mysql-bench-3.23.58-1
mysql-devel-3.23.58-1
mysql-server-3.23.58-1
php-mysql-4.3.2-11

These package versions are available on the RedHat WS 3.0 CD disc3. If
they are not installed, get them from your CDs or from the internet and
install via the standard rpm method. Higher version numbers probably will
work.
NOTE: mysql-server-3.23.58-1 is NOT on the RedHat CD set. You must
get it from the internet. Use a good search engine and look for:
mysql-server-3.23.58-1.i386.rpm for 32 bit operating systems, or
mysql-server-3.23.58-1.x86_64.rpm for 64 bit operating systems.
Download the server rpm and then install it with an rpm -i command.
If you cannot get access to the internet, contact ProMAX support and they
will help you locate the rpm files.
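
For example, installing the downloaded server package and re-checking the installed packages is the standard rpm sequence (the 32 bit file name from above is shown; use the x86_64 file on 64 bit systems):

rpm -ivh mysql-server-3.23.58-1.i386.rpm
rpm -qa | grep -i mysql
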
If you choose to install MySQL executables somewhere other than the
default location of /usr/bin, you will need to install from a "source
distribution" rather than from the "rpm". You can find source distribution
packages at www.MySQL.com and elsewhere on the internet.


Initializing the MySQL Database on Linux


1. Initialize the database with the following command (by
default, this is under the /usr/bin directory):
./mysql_install_db

This will return a number of messages as shown below:


[root@mysqlserver bin]# ./mysql_install_db
Preparing db table
Preparing host table
Preparing user table
Preparing func table
Preparing tables_priv table
Preparing columns_priv table
Installing all prepared tables
date time /usr/libexec/mysqld: Shutdown Complete
To start mysqld at boot time you have to copy support-files/mysql.server to the right place for your system
PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL
root USER !
This is done with:
/usr/bin/mysqladmin -u root password new-password
/usr/bin/mysqladmin -u root -h mysqlserver password new-password
See the manual for more instructions.
You can start the MySQL daemon with:
cd /usr ; /usr/bin/safe_mysqld &
You can test the MySQL daemon with the benchmarks in the sql-bench directory:
cd sql-bench ; run-all-tests
Please report any problems with the /usr/bin/mysqlbug script!
The latest information about MySQL is available on the web at
http://www.mysql.com
Support MySQL by buying support/licenses at
https://order.mysql.com

If you wish to specify the location for the database files
(the default location is /var/lib/mysql), run the command as follows:
./mysql_install_db --ldata=/your/preferred/location


The mysql_install_db script generates new MySQL privilege tables and a
database entry called "test". This command does not affect other data that
might already exist, and it does nothing if the privilege tables are already
installed.
2. Start the MySQL server daemon with the following
command:
service mysqld start

3. Ensure that mysqld will start on boot by setting the following:
chkconfig mysqld on

4. Verify that the service is set to start at boot time:


chkconfig --list | grep mysqld

This should return a line of text that shows the service to be on for run
levels of 3, 4 or 5:
[root@mysqlserver root]# chkconfig --list | grep mysqld
mysqld    0:off 1:off 2:off 3:on 4:on 5:on 6:off

5. Check the contents of the database using SQL commands.
Note that all SQL commands must end in a semicolon (";").
First, connect to the mysql database that you just created:
/usr/bin/mysql -u root mysql

Show the list of tables in the mysql database using the show tables
command.
mysql> show tables;
+-----------------+
| Tables_in_mysql |
+-----------------+
| columns_priv    |
| db              |
| func            |
| host            |
| tables_priv     |
| user            |
+-----------------+
6 rows in set (0.02 sec)

Display the contents of the individual tables using the select command.
mysql> select * from columns_priv;
Empty set (0.00 sec)


mysql> select * from db;
+------+---------+------+-------------+-------------+-------------+---------
| Host | Db      | User | Select_priv | Insert_priv | Update_priv | etc...
+------+---------+------+-------------+-------------+-------------+---------
| %    | test    |      | Y           | Y           | Y           | Y etc...
| %    | test\_% |      | Y           | Y           | Y           | Y etc...
+------+---------+------+-------------+-------------+-------------+---------
2 rows in set (0.00 sec)

mysql> select * from func;


Empty set (0.00 sec)

mysql> select * from host;


Empty set (0.00 sec)

mysql> select * from tables_priv;


Empty set (0.01 sec)

mysql> select * from user;
+-----------+------+----------+-------------+-------------+-------------+---
| Host      | User | Password | Select_priv | Insert_priv | Update_priv | etc...
+-----------+------+----------+-------------+-------------+-------------+---
| localhost | root |          | Y           | Y           | Y           | etc...
| dangmpro  | root |          | Y           | Y           | Y           | etc...
| localhost |      |          | N           | N           | N           | etc...
| dangmpro  |      |          | N           | N           | N           | etc...
+-----------+------+----------+-------------+-------------+-------------+---
4 rows in set (0.00 sec)

Quit from the mysql session


mysql> quit
Bye

This user table will need some editing and additions.


6. cd to the bin directory (/usr/bin for Linux) if you are using the default
locations.
7. Set a MySQL root password using the following command:
./mysqladmin -u root password <new password>

Note: This is the root user of MySQL. Do not confuse this with the
system root user. You are strongly advised to use different passwords
for MySQL root and system root.
For example:


./mysqladmin -u root password rootpw

This has worked if no message is returned.


8. Add the root password for the hostname entry in the user
table. Add a database access user using the "grant"
command and create a database.
At the command prompt, type the following command to connect to
the database:
./mysql -u root -p<password> mysql

Note there is no space between the -p and the password.


For example:
[root@mysqlserver bin]# mysql -u root -prootpw mysql
Welcome to the MySQL monitor. Commands end with ; or \g
Your MySQL connection id is x to server version: 3.23.58
Type help; or \h for help. Type \c to clear the buffer.
mysql>

(Type each of the following commands as a single line, and replace
"mysqlserver" with your mysql server hostname.)
mysql> set password for root@mysqlserver=PASSWORD('rootpw');
Query OK, 0 rows affected (0.00 sec)

Add the user access privileges using the grant command.
We recommend making the database access user a generic user whose
name is not the same as any of the individual unix/linux users. For
example, we suggest a database access user of dbuser with an access
password of dbuserpw. This reduces confusion about who has access
to the database.
The first two lines set up access from the local machine, and the third
one, using the "%" wildcard, sets up access from other machines. The
fourth line adds root access via the long hostname, which is sometimes
required in the admin tool to set the SQL server host.
mysql> GRANT ALL PRIVILEGES ON *.* TO dbuser@mysqlserver
IDENTIFIED BY 'dbuserpw' WITH GRANT OPTION;
mysql> GRANT ALL PRIVILEGES ON *.* TO
dbuser@'mysqlserver.domain.com' IDENTIFIED BY 'dbuserpw'
WITH GRANT OPTION;


mysql> GRANT ALL PRIVILEGES ON *.* TO dbuser@"%"


IDENTIFIED BY dbuserpw WITH GRANT OPTION;
mysql> GRANT ALL PRIVILEGES ON *.* TO
root@mysqlserver.domain.com IDENTIFIED BY rootpw WITH
GRANT OPTION;

If you plan to use the ProMAX/ProMANAGER MySQLAdminTool later to
enhance the default permissions, then add the following:
mysql> GRANT ALL PRIVILEGES ON *.* TO
root@'localhost.localdomain' IDENTIFIED BY 'rootpw' WITH
GRANT OPTION;

If you plan to log into a different machine to do SQL administration on
another server, you will need to either add root privileges for all hosts
using root@"%" or for a specific host using root@anotherhost. For
example, to make the database open to root administration from any
other host, use the following:
mysql> GRANT ALL PRIVILEGES ON *.* TO root@"%" IDENTIFIED
BY 'rootpw' WITH GRANT OPTION;

To make the database open to root administration from a specific
alternate host (loginnode1, for example), use the following:
mysql> GRANT ALL PRIVILEGES ON *.* TO root@loginnode1
IDENTIFIED BY 'rootpw' WITH GRANT OPTION;

All grant commands should show the following response:


Query OK, 0 rows affected (0.00 sec)
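If you prefer to issue the four grants shown above in one pass instead of
typing them interactively, the following shell sketch is one way to do it. It
assumes the example names used above (mysqlserver, rootpw, dbuser, and
dbuserpw); substitute your own server name and passwords.

# Sketch only: run the example grants as a single batch from the shell.
/usr/bin/mysql -u root -prootpw mysql <<'EOF'
GRANT ALL PRIVILEGES ON *.* TO dbuser@mysqlserver IDENTIFIED BY 'dbuserpw' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON *.* TO dbuser@'mysqlserver.domain.com' IDENTIFIED BY 'dbuserpw' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON *.* TO dbuser@'%' IDENTIFIED BY 'dbuserpw' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON *.* TO root@'mysqlserver.domain.com' IDENTIFIED BY 'rootpw' WITH GRANT OPTION;
FLUSH PRIVILEGES;
EOF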

QC the contents of the user table in the mysql database. You should see
a listing similar to that shown below, except that your server name and
password encryption entries may be different.
mysql> use mysql;

should respond with:

Database changed
mysql> select * from user;
This is a subset of the entire listing; there are many more columns than
shown below.
+---------------------------+--------+------------------+-------------+
| Host                      | User   | Password         | Select_priv |
+---------------------------+--------+------------------+-------------+
| localhost                 | root   | 19b32459742bbba8 | Y           |
| colfaxamd1                | root   | 19b32459742bbba8 | Y           |
| localhost                 |        |                  | N           |
| colfaxamd1                |        |                  | N           |
| colfaxamd1                | dbuser | 63630b126dc91b8f | Y           |
| colfaxamd1.denver.lgc.com | dbuser | 63630b126dc91b8f | Y           |
| %                         | dbuser | 63630b126dc91b8f | Y           |
| localhost.localdomain     | root   | 19b32459742bbba8 | Y           |
| colfaxamd1.denver.lgc.com | root   | 19b32459742bbba8 | Y           |
| %                         | root   | 19b32459742bbba8 | Y           |
+---------------------------+--------+------------------+-------------+
10 rows in set (0.00 sec)

9. Exit from the mysql administration session.


mysql> quit
Bye
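Before restarting the daemon, you can optionally verify that the new
database access user can connect over the network interface. This is a
sketch only, using the example mysqlserver host name and the
dbuser/dbuserpw account granted above:

/usr/bin/mysql -u dbuser -pdbuserpw -h mysqlserver -e "show databases;"

This should list the mysql and test databases. The -h option is used so that
the connection matches the dbuser@mysqlserver grant rather than the
anonymous localhost entry.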

10. Restart the mysqld daemon.

service mysqld restart
NOTE: Cases have been reported where you have to edit the
/etc/init.d/mysqld file to replace the UNKNOWN user with root. If you get a
timeout error when starting the mysql service, edit the file as shown below,
replacing UNKNOWN_MYSQL_USER with root and the root user
password:
# If you've removed anonymous users, this line must be changed to
# use a user that is allowed to ping mysqld.
#ping="/usr/bin/mysqladmin -uUNKNOWN_MYSQL_USER ping"
ping="/usr/bin/mysqladmin -u root -prootpw ping"

You may want more than one database, depending on how you want to split
up your work. Some sites have separate databases for each user; others have
separate databases for different projects. We recommend that at this time
you only add one database. After you complete the next steps of building the
data model, populating the database with the example data, and setting up
the run-time environments, the easiest way to make more databases is to go
to the directory where the databases are stored, copy the database
directories, and then restart the mysql server.
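For reference, a minimal sketch of that copy step on Linux is shown below.
It assumes the RPM's default data directory of /var/lib/mysql and
hypothetical database names seisdb1 (existing) and seisdb2 (new); use your
own data directory and database names, and adjust ownership to whichever
system user runs mysqld at your site.

service mysqld stop
cp -rp /var/lib/mysql/seisdb1 /var/lib/mysql/seisdb2
service mysqld start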

Proceed to the "Managing the SQL databases using the Admin Tool"
section.


Setting up MySQL on Solaris


It is recommended to do this installation as root.
The following compressed file is located in
$PROMAX_HOME/sys/bin/MySQL:
mysql-3.23.44-sun-solaris2.8-sparc.tar.Z
Copy, uncompress, and untar the file compatible with your system into
your /usr/local directory, which is the default installation point for MySQL.
NOTE: you must use gtar instead of tar for this file. This is a
documented problem with tar on Solaris and is referenced in the MySQL
documentation.
For example:
cp $PROMAX_HOME/sys/bin/MySQL/mysql-3.23.44-sun-solaris2.8-sparc.tar.Z /usr/local/.
cd /usr/local
uncompress mysql-3.23.44-sun-solaris2.8-sparc.tar.Z
gtar -xvf mysql-3.23.44-sun-solaris2.8-sparc.tar
ln -s mysql-3.23.44-sun-solaris2.8-sparc mysql
cd mysql
chmod +x ./support-files/mysql.server

Edit (using vi or another editor) the script ./support-files/mysql.server to add
"--user=root" to line number 107 as shown here:
$bindir/safe_mysqld --datadir=$datadir --pid-file=$pid_file --user=root &

As an alternative to this change, you may choose to add a system username
called mysql. Refer to the MySQL documentation at www.MySQL.com
for details.
Note: The binary MySQL distributions assume /usr/local as the parent
directory. If you install under /usr/local, you are finished with this step and
can proceed to "Initializing the MySQL Database".
However, if you choose to install MySQL binary files or database files in a
different location, you must edit the following two scripts to match your
chosen locations:
bin/safe_mysqld
support-files/mysql.server


In the bin/safe_mysqld script, after the ELSE statement, edit the following
three lines, replacing your MySQL installation path for /usr/local (these are
line numbers 76-78):
MY_BASEDIR_VERSION=
ledir=
DATADIR=

In the support-files/mysql.server script, alter the following three lines,


replacing your MySQL installation path for /usr/local (these are line
numbers 24-26):
basedir=
datadir=
pid_file=
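For illustration only, if MySQL were installed under a hypothetical
/opt/mysql directory instead of /usr/local, the edited lines might look
something like the following; the exact values depend on where you place
the binaries and the data directory.

In bin/safe_mysqld:
MY_BASEDIR_VERSION=/opt/mysql
ledir=/opt/mysql/bin
DATADIR=/opt/mysql/data

In support-files/mysql.server:
basedir=/opt/mysql
datadir=/opt/mysql/data
pid_file=/opt/mysql/data/mysqld.pid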

Proceed to "Initializing the MySQL Database on Solaris"

Initializing the MySQL Database on Solaris


Initialize the database with the following command:
./scripts/mysql_install_db

This will return a number of messages, as shown below:

[root@mysqlserver bin]# ./mysql_install_db
Preparing db table
Preparing host table
Preparing user table
Preparing func table
Preparing tables_priv table
Preparing columns_priv table
Installing all prepared tables
date time /usr/libexec/mysqld: Shutdown Complete

To start mysqld at boot time you have to copy
support-files/mysql.server to the right place for your system

PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL root USER !
This is done with:
/usr/bin/mysqladmin -u root password new-password
/usr/bin/mysqladmin -u root -h mysqlserver password new-password
See the manual for more instructions.

You can start the MySQL daemon with:
cd /usr ; /usr/bin/safe_mysqld &

You can test the MySQL daemon with the benchmarks in the
sql-bench directory:
cd sql-bench ; run-all-tests

Please report any problems with the /usr/bin/mysqlbug script!

The latest information about MySQL is available on the web at
http://www.mysql.com
Support MySQL by buying support/licenses at https://order.mysql.com

If you wish to specify the location for the database files (the default
location is /usr/local/mysql/data), run the command as follows:
./scripts/mysql_install_db --ldata=/your/preferred/location

The mysql_install_db script generates new MySQL privilege tables and a
database entry called "test". This command does not affect other data
that might already exist, and it does nothing if the privilege tables are
already installed.
11. Start the MySQL server daemon with the following command:
./support-files/mysql.server start

You may need to hit Enter to get the command prompt back.
12. To start MySQL when your workstation boots:
Copy support-files/mysql.server into the /etc/init.d directory. In the
/etc/rc2.d directory, create the following link:
ln -s ../init.d/mysql.server S96mysql

In the /etc/rc0.d directory create the following link:


ln -s ../init.d/mysql.server K05mysql
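Taken together, and assuming the default /usr/local/mysql installation
point, the boot-time setup amounts to the following sketch:

cp /usr/local/mysql/support-files/mysql.server /etc/init.d/mysql.server
cd /etc/rc2.d; ln -s ../init.d/mysql.server S96mysql
cd /etc/rc0.d; ln -s ../init.d/mysql.server K05mysql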

13. Check the contents of the database using SQL commands.

Note that all SQL commands must end with a semicolon (";").
First, connect to the mysql SQL database that you just created:
/usr/local/mysql/bin/mysql -u root mysql

Show the list of tables in the mysql database using the show tables
command.
mysql> show tables;
+-----------------+
| Tables_in_mysql |
+-----------------+
| columns_priv    |
| db              |
| func            |
| host            |
| tables_priv     |
| user            |
+-----------------+
6 rows in set (0.02 sec)

Display the contents of the individual tables using the select command.
mysql> select * from columns_priv;
Empty set (0.00 sec)
mysql> select * from db;
+------+---------+------+-------------+-------------+-------------+----
| Host | Db      | User | Select_priv | Insert_priv | Update_priv | ...
+------+---------+------+-------------+-------------+-------------+----
| %    | test    |      | Y           | Y           | Y           | ...
| %    | test\_% |      | Y           | Y           | Y           | ...
+------+---------+------+-------------+-------------+-------------+----
2 rows in set (0.00 sec)

mysql> select * from func;
Empty set (0.00 sec)

mysql> select * from host;
Empty set (0.00 sec)

mysql> select * from tables_priv;
Empty set (0.01 sec)

mysql> select * from user;
+-----------+------+----------+-------------+-------------+-------------+----
| Host      | User | Password | Select_priv | Insert_priv | Update_priv | ...
+-----------+------+----------+-------------+-------------+-------------+----
| localhost | root |          | Y           | Y           | Y           | ...
| dangmpro  | root |          | Y           | Y           | Y           | ...
| localhost |      |          | N           | N           | N           | ...
| dangmpro  |      |          | N           | N           | N           | ...
+-----------+------+----------+-------------+-------------+-------------+----
4 rows in set (0.00 sec)

Quit from the mysql session


mysql> quit
Bye

This user table will need some editing and additions.



14. cd to the bin directory (/usr/local/mysql/bin for Solaris if you are
using the default locations).
15. Set a MySQL root password using the following command:
./mysqladmin -u root password <new password>

Note: This is the root user of MySQL. Do not confuse this


with the system root user. You are strongly advised to use
different passwords for MySQL root and system root.
For example:
./mysqladmin -u root password rootpw

The command has succeeded if no message is returned.


16. Add the root password for the hostname entry in the user
table. Add a database access user using the "grant"
command and create a database.
At the command prompt, type the following command to
connect to the database:
./mysql -u root -p<password> mysql

Note that there is no space between -p and the password.


For example:
[root@mysqlserver bin]# mysql -u root -prootpw mysql
Welcome to the MySQL monitor. Commands end with ; or \g
Your MySQL connection id is x to server version: 3.23.58
Type help; or \h for help. Type \c to clear the buffer.
mysql>

(Type each of the following commands as a single line, and
replace "mysqlserver" with your MySQL server hostname.)
mysql> set password for root@mysqlserver=PASSWORD('rootpw');
Query OK, 0 rows affected (0.00 sec)

Add the user access privileges using the grant command.


It is recommended to make the database access user a generic user that
does not have the same name as any of the individual unix/linux users.
For example, we suggest a database access user of dbuser with an access
password of dbuserpw. This reduces confusion about who has access to
the database.
The first two lines set up access from the local machine, and the third
one, using the "%" wildcard, sets up access from other machines. The
fourth line adds root access via the long hostname, which is sometimes
required in the admin tool to set the SQL server host.
mysql> GRANT ALL PRIVILEGES ON *.* TO dbuser@mysqlserver
IDENTIFIED BY 'dbuserpw' WITH GRANT OPTION;
mysql> GRANT ALL PRIVILEGES ON *.* TO
dbuser@'mysqlserver.domain.com' IDENTIFIED BY 'dbuserpw'
WITH GRANT OPTION;
mysql> GRANT ALL PRIVILEGES ON *.* TO dbuser@"%"
IDENTIFIED BY 'dbuserpw' WITH GRANT OPTION;
mysql> GRANT ALL PRIVILEGES ON *.* TO
root@'mysqlserver.domain.com' IDENTIFIED BY 'rootpw' WITH
GRANT OPTION;

If you plan to use the ProMAX/ProMANAGER MySQLAdminTool later to
enhance the default permissions, then add the following:
mysql> GRANT ALL PRIVILEGES ON *.* TO
root@'localhost.localdomain' IDENTIFIED BY 'rootpw' WITH
GRANT OPTION;

If you plan to log into a different machine to do SQL administration on
another server, you will need to either add root privileges for all hosts
using root@"%" or for a specific host using root@anotherhost. For
example, to make the database open to root administration from any
other host, use the following:
mysql> GRANT ALL PRIVILEGES ON *.* TO root@"%" IDENTIFIED
BY 'rootpw' WITH GRANT OPTION;

To make the database open to root administration from a specific
alternate host (loginnode1, for example), use the following:
mysql> GRANT ALL PRIVILEGES ON *.* TO root@loginnode1
IDENTIFIED BY 'rootpw' WITH GRANT OPTION;

All grant commands should show the following response:


Query OK, 0 rows affected (0.00 sec)

QC the contents of the user table in the mysql database. You should see
a listing similar to that shown below, except that your server name and
password encryption entries may be different.
mysql> use mysql;

should respond with:

Database changed
mysql> select * from user;


This is a subset of the entire listing; there are many more columns than
shown below.
+---------------------------+--------+------------------+-------------+
| Host                      | User   | Password         | Select_priv |
+---------------------------+--------+------------------+-------------+
| localhost                 | root   | 19b32459742bbba8 | Y           |
| colfaxamd1                | root   | 19b32459742bbba8 | Y           |
| localhost                 |        |                  | N           |
| colfaxamd1                |        |                  | N           |
| colfaxamd1                | dbuser | 63630b126dc91b8f | Y           |
| colfaxamd1.denver.lgc.com | dbuser | 63630b126dc91b8f | Y           |
| %                         | dbuser | 63630b126dc91b8f | Y           |
| localhost.localdomain     | root   | 19b32459742bbba8 | Y           |
| colfaxamd1.denver.lgc.com | root   | 19b32459742bbba8 | Y           |
| %                         | root   | 19b32459742bbba8 | Y           |
+---------------------------+--------+------------------+-------------+
10 rows in set (0.00 sec)

17. Exit from the mysql administration session.


mysql> quit

Bye
18. Restart the mysqld daemon.
cd /usr/local/mysql
./support-files/mysql.server stop
./support-files/mysql.server start

You may want more than one database, depending on how you want to split
up your work. Some sites have separate databases for each user; others have
separate databases for different projects. We recommend that at this time
you only add one database. After you complete the next steps of building the
data model, populating the database with the example data, and setting up
the run-time environments, the easiest way to make more databases is to go
to the directory where the databases are stored, copy the database
directories, and then restart the mysql server.


Managing the SQL databases using the Admin Tool


With MySQL running, the test database in place, and the users
configured, you can use the Administration Tool to manage the user
databases for flow replication and associate these databases with
SeisSpace and ProMAX projects.
All Areas/Lines in a ProMAX project will be associated with the same user
database, and all subprojects under a SeisSpace project will be associated
with the same user database. You can have as many user databases as you
wish.
When you start the Administrator Tool you can go to the SQL tab to
manage the list of user databases.

(Screenshot of the SQL tab: select the host and password, enter a new
database name, and execute to create it.)

Enter the SQL server name and the SQL root password (not the system root
password) and then click Test Connection to Add. The server name and a
list of existing databases appear in the lower part of the window. This list
will only show the test database if this is a new MySQL setup. If you are
using an existing MySQL instance set up for ProMANAGER, then the
ProMANAGER database appears in the list as well.
You CANNOT use an existing ProMANAGER database directly, since the
data model has changed between ProMANAGER and the SeisSpace
Flow Builder/Replicator. Database migration and translation routines will
be available in the future.


You need to add one (or more) user database(s) by entering a name in the
SQL Database dialog, selecting the create option and then executing the
action.
1. Select Create and Execute Action.
This will generate some printout in the console from which the
SeisSpace Client was started. It will list the tables that were created and
indicate that database creation is complete.
[ssuser@dangmpro seisspace_class]$ Creating table = [About].
Database revision = [$Revision: 1.6 $]
Creating table = [Project].
Creating table = [Project_Parm].
Creating table = [Replica].
Creating table = [Replica_Parm].
Creating table = [Replica_Float].
Creating table = [Replica_Integer].
Creating table = [Replica_String].
Creating table = [Job].
Create database tables complete
Create database complete
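As an optional command-line check (a sketch only, using a hypothetical
database name of seisdb1, the example mysqlserver host, and the
dbuser/dbuserpw account from the MySQL setup, and assuming the mysql
client is on your PATH), you can confirm that the tables were created:

mysql -u dbuser -pdbuserpw -h mysqlserver -e "show tables;" seisdb1

The listing should match the tables named in the console output above
(About, Project, Project_Parm, Replica, Replica_Parm, Replica_Float,
Replica_Integer, Replica_String, and Job).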


2. Associate the user database with the ProMAX and SeisSpace projects.
Switch to the SeisSpace Data Home and ProMAX Data Home tabs.

Select the Project Database, the SQL server, and the SQL database.
Click Associate SQL Database with Project Database.


3. Associate the user database with the ProMAX projects by changing to
the ProMAX Data Home tab and performing the same operation as
described above.

4. In the Navigator, select the ProMAX project, use MB3 on the
project name, and select the Properties option. (Flow replication is not
supported in SeisSpace projects.) Click on the SQL tab and then enter
the (database) user and password for access to the database in the
properties of the project. In our example this was "dbuser" with a
password of "dbuserpw".

Select the SQL tab and enter the user (database) name and password for
access to the database. This will need to be done for each user
independently for security purposes.
