Administration
Release Notes
May 2005
Configuring SeisSpace
Overview
Once you have logged in as an administrative user, you can start the
Administrator and select the Administrative host.
In a cluster environment where you have manager nodes, login nodes, and
processing nodes, you may need to start additional workmanagers.
For example, if your cluster is configured so that you intend to log in to the
manager node and run the Navigator from there, then at this point you only
need the workmanager on the manager node. If you plan to have users log
in to one or two of the nodes as login nodes, where these nodes will run the
GUIs and interactive jobs, then you will need to start a workmanager on
those login nodes as well. The goal is for the administrative host list to be
the list of hosts where you intend to log in to run the GUIs and interactive
jobs.
To start the workmanagers on the login nodes, run the
...../apps/bin/workmanager script on those nodes. You can use your
"parallel shell" command to do this, or you can log in to each node and
type the ".../apps/bin/workmanager start" command on the command
line.
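As a sketch, the per-node start commands can be generated with a short shell loop; the node names and the /apps/bin path below are placeholders for your site's values, and the generated commands assume ssh access to each node:

```shell
#!/bin/sh
# Print the command needed to start a workmanager on each login node.
# NODES and the workmanager path are placeholders; substitute your own.
NODES="login1 login2"
for node in $NODES; do
  echo "ssh $node /apps/bin/workmanager start"
done
```

Pipe the output to sh, or feed it to your parallel shell, once the names and the path match your installation.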
Other Docs
Known Problems
2. Click the host list to choose your Administrative host. This window
displays all machines with currently active workmanagers.
3. Select the host where you plan to log in and where you want to run
local interactive jobs.
4. Use the following Wizards (from left to right) to configure SeisSpace:
Cluster Wizard
Data Home Wizard
User Wizard
The wizards add information to the netdir.xml file located in
$PROWESS_LOGDIR and create some project (or data) directories.
Note that the Admin Tool also provides all the functionality of the
Wizards. See Administrator Tool for more detail.
Cluster Wizard
The Cluster Wizard leads you through the configuration of a Linux cluster
and defines the Virtual File System (VFS) disk resources for SeisSpace
datasets. The text at the top of each panel gives a brief description of the
wizard and its use. Additional help is available from the pulldown menus
on each window.
1. Click Cluster Wizard.
Click Next > in the Cluster Wizard introduction screen.
2. Select Yes Add Nodes to build a list of node names on your cluster.
You can later group these nodes into a variety of different subsets to
define which nodes of the cluster jobs are submitted to.
Click Next >.
3. To populate the list of host names, use the Numeric Pattern and
Numeric Range options, manually add them, or import the list from an
ASCII file.
Click Add >>> to transfer the selected hosts from Available Hosts to
Hosts in Cluster and to the database.
Click Next >.
4. Select Yes, Add Job Clusters to group subsets of your cluster as a
single Job Cluster. (Job Clusters are groups of compute nodes (hosts)
on your cluster that are used for submitting jobs.)
For example, if you have a 16 node cluster and expect to submit many
jobs to 4 node subsets of it, you can alias node1, node2, node3, and
node4 to a name like nodes1-4.
Click Next >.
5. Enter a Cluster Name (such as supcl1-4) and select a foreman node.
The foreman node runs any computation or operation that cannot be
distributed. Choose a node other than the sitemanager node; that is, a
node that is less busy. You will generally want to exclude the manager
node and the login nodes from cluster aliases.
Select the cluster nodes from the Available Cluster Hosts and add
them to the Hosts in Job Clusters column.
Select all by clicking supcl1, pressing Ctrl+A, and clicking Add >>>.
Select individual entries or a range with Ctrl+MB1 and Shift+MB1.
8. Select VFS Basenames and Paths. In our example we made a VFS that
uses the /export/d01 file system on each of the four nodes for storage.
To do this we:
Add all four Available Hosts to the Basename Hosts column by
selecting them and clicking Add (the one under the Available Hosts
column). Make sure the Basenames specified include your network
paths.
In the VFS basename text field, enter /network/<host>/export/d01
(use the <host> button to insert the characters <host> into the
pathname).
Enter a VFS ID and Label. The ID is the name of the data directory.
The Label is the name of the directory created under the VFS ID; you
should use a project name that relates to the data, for example Alaska
or Testing. Click Add (the one under VFS Label).
The VFS Label ties the distributed disks together. The directories
listed under VFS Paths are the directories that are built.
Select PROMAX_ETC_CONFIG_FILE_HOME.
Click Edit>.
User Wizard
This wizard allows you to add, edit, or remove users, passwords, and
privileges.
A new user must be a valid user on the host where the site and
workmanagers are running, and the user must exist in the /etc/passwd
file. However, the SeisSpace password does not have to be the same as
the system password.
Adding users
Click Next > in the wizard splash screen to add, edit, or remove users,
passwords, and privileges.
To remove a user, select a username from the Edit/Remove User
pulldown and click Remove.
To edit a password, select a username from the Edit/Remove User
pulldown and click Set Password.
To edit privileges, select a username from the Edit/Remove User
pulldown and click Change Actions.
The initial login user, useradmin, should only be used the first time you
use the User Wizard.
All users must be valid Unix/Linux users.
Any user with Administrator privileges is allowed to add new users, and set
up clusters and projects.
Administrator Tool
The Administrator tool allows you to add, delete, or modify the system
configuration information and verify the entries made in the previous
wizards.
Administrator Tabs
The following tabs allow you to select the type of information to work
with:
Host/User
Cluster
VFS
Scratch
SQL
SeisSpace Data Homes
ProMAX Data Homes
Queues
Tree
Hosts/User
The Host/User tab is used to add additional hosts (or nodes) to the existing
configuration, copy the entire configuration from one administrative host to
another, and perform user administration.
The host copy function is used when you add a second cluster with a
separate manager node and want to clone the configuration from one
cluster to the other.
Cluster
The Cluster tab allows you to manage the cluster alias lists generated in the
Cluster wizard. You can review the existing cluster alias definitions, add
more, or delete unneeded ones.
VFS
The VFS tab allows you to manage the Virtual File System lists generated
in the Cluster wizard. You can review the existing VFSs, add more, or
delete unneeded ones. Use this tab to remove existing VFS directories or
delete the contents of a VFS from disk.
Note: It is easier to use the Cluster wizard to create new VFSs than to use
the VFS tab of Administrator tool.
The Info option shows which file systems are incorporated into the VFS
and how much disk space is available.
Scratch
The Scratch tab allows you to manage the scratch directory lists generated
in the Cluster wizard. You can review the existing scratch directory
definitions, add more, or delete unneeded ones.
Note: This tab only affects the contents of the netdir file. It does not add or
remove directories from disk.
SQL
The SQL tab allows you to manage the SQL databases for flow replication.
In this dialog you can create and delete databases. The dump, restore,
and convert options are not implemented.
The SeisSpace Data Home tab allows you to manage the SeisSpace Data
Home definitions generated in the Data Home Wizard. You can review the
existing SeisSpace Data Home settings and add new definitions or delete
unneeded ones.
This tab can also be used to associate new VFSs with existing Data Homes,
disassociate unneeded VFSs, and manage the scratch directories assigned to
SeisSpace Data Homes. It is also used to associate a SQL database with the
project for flow replication.
Note: After you make changes to the SQL, VFS, and Scratch settings, you
must click the corresponding Associate buttons to save the changes for
each Data Home.
The ProMAX Data Home tab allows you to manage the ProMAX Data
Home settings and environment variable definitions generated in the Data
Home wizard. You can review the existing ProMAX Data Homes and their
associated environment variable lists, add more Data Homes, or delete
unneeded ones.
This tab is also used to associate a SQL database to the Data Home for flow
replication.
Note: After you make changes to the SQL settings, you must click the
Associate button to save the changes for each Data Home.
Queue
The Queue tab allows you to manage the list of Queue Directives; there is
no wizard to manage the Queue directives. You can review the existing
Queue directive definitions, add more, or delete unneeded ones.
Queue Directives
The SeisSpace Navigator/Flowbuilder only understands and supports
PBS/Torque queues. The standard Queue directive settings used are:
#PBS -S /bin/sh
#PBS -N qflowname
#PBS -l nodes=1:ppn=1 (as entered in the job submit GUI)
#PBS -o PROMAX_DATA_HOME/AREA/LINE/FLOW/exec.#.log.out
#PBS -e PROMAX_DATA_HOME/AREA/LINE/FLOW/exec.#.log.err
To alter the nice value, set up a Queue directive called NICE5, for example:
#PBS -l nice=5
Segregating clusters
To segregate a cluster so that SeisSpace jobs run on one set of nodes and
ProMAX jobs run on another set of nodes, follow these steps:
1. In the /usr/spool/PBS/server_priv/nodes file, add a property to each
node on which PBS can spawn a ProMAX job, and a property to each
node for SeisSpace jobs.
Here is an example nodes file:
n26 ss np=2
n27 ss np=2
n28 ss np=2
n29 pmx np=2
n30 pmx np=2
n31 pmx np=2
n32 pmx np=2
Nodes n26-n28 have the property ss and nodes n29-n32 have the
property pmx.
2. As root, restart the PBS server and scheduler.
service pbs restart
Tree
The Tree tab allows you to review the configurations for all Data Homes
and administrative hosts that have been defined in the netdir file. This is a
summary view without editing capabilities. You can also use a text editor
to review the netdir.xml file.
Navigate the tree by selecting the Host folder and then opening the
database, data directory, and scratch directory entries. Once opened, these
folders should show the paths to all the directories you configured in the
Wizards.
Click File > Disconnect to exit from the Administrator Tool.
Click File > Exit to exit from the Administrator.
LP and NQS have an option that allows any user to bring the queues
up and down via the ProMAX Queues window. This is not available
for PBS unless the user running ProMAX is listed as a manager of the
queues via the qmgr command.
Landmark suggests that you read the OpenPBS documentation,
available to registered users at www.openpbs.org. It includes more
information about the system and ways to customize the
configuration.
PBS requires that the hostnames and IP addresses of all the nodes be
present in the hosts files of all the nodes.
Note: hostname is the name of your machine; hostname.domainname
can be found in /etc/hosts, and commonly ends with .com:
ip_address hostname.domain.com hostname
For DHCP users, ensure that all of the processing and manager nodes
always get the same IP address.
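For illustration, an /etc/hosts file on each node might contain entries like the following; the addresses, hostnames, and domain are placeholders, not values from this installation:

```
192.168.0.1    manager.example.com   manager
192.168.0.11   node001.example.com   node001
192.168.0.12   node002.example.com   node002
```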
Landmark presents one option of many that can be used to install and
configure OpenPBS Job Queues. For a successful installation, the following
must hold:
Make PBS the queuing capability for a cluster environment. That is,
work is automatically distributed over the cluster as nodes become
available.
Install PBS on all nodes of the cluster. The installation can be done
on each machine independently, or you can use a file system mounted
from the manager node, which may be easier.
Install all components, including the server and scheduler, on one
node. This node is known as the server node and serves the other main
processing nodes. Normally this will be the cluster manager node.
The following files must be the same on all installations on all
machines:
/usr/spool/PBS/server_name
/usr/spool/PBS/mom_priv/config
These files are only used by the server and scheduler on the manager
machine:
/usr/spool/PBS/server_priv/nodes
/usr/spool/PBS/sched_priv/sched_config
9. cd <path>/OpenPBS_2_3_16
10. Set the PROMAX_HOME environment variable:
export PROMAX_HOME=<path to ProMAX installation>
The file you need is torque-1.2.0p2.tar.gz. Perform the same gunzip and tar
steps as you would have done for PBS.
1. gunzip -c torque-1.2.0p2.tar.gz | tar xpvf -
2. cd to the torque-1.2.0p2 directory
3. On 64 bit systems you will need to edit the configure script to point
to the lib64 (instead of lib) library directories, as well as to the
correct versions of Tcl and Tk loaded on the machine. A patch has
been posted on the Torque web site to make these changes. Here is
diff output from the machine that was used for testing, to enable the
configure and make steps:
[root@h1 torque-1.2.0p2]# diff configure.orig configure
995c995
<   count=`/bin/ls ${tcl_dir}/lib/libtk* 2> /dev/null | wc -l`
---
>   count=`/bin/ls ${tcl_dir}/lib64/libtk* 2> /dev/null | wc -l`
1042c1042
<   count=`/bin/ls -d $TCL_DIR/lib/libtcl${TCL_LIB_VER}.* 2> /dev/null | wc -l`
---
>   count=`/bin/ls -d $TCL_DIR/lib64/libtcl${TCL_LIB_VER}.* 2> /dev/null | wc -l`
1045c1045
<   count=`/bin/ls $TCL_DIR/lib/libtcl${TCL_LIB_VER}.* | wc -l`
---
>   count=`/bin/ls $TCL_DIR/lib64/libtcl${TCL_LIB_VER}.* | wc -l`
1083c1083
<   count=`/bin/ls $TCL_DIR/lib/libtk${TK_LIB_VER}.* 2> /dev/null | wc -l`
---
>   count=`/bin/ls $TCL_DIR/lib64/libtk${TK_LIB_VER}.* 2> /dev/null | wc -l`
1086c1086
<   count=`/bin/ls $TCL_DIR/lib/libtk${TK_LIB_VER}.* | wc -l`
---
>   count=`/bin/ls $TCL_DIR/lib64/libtk${TK_LIB_VER}.* | wc -l`
1100c1100
<   TCL_LIBS="$TCL_LIBS -L$(TCLX_DIR)/lib"
---
>   TCL_LIBS="$TCL_LIBS -L$(TCLX_DIR)/lib64"
1102c1102
<   TCL_LIBS="$TCL_LIBS -ltclx -ltkx"
---
>   TCL_LIBS="$TCL_LIBS -ltclx8.3 -ltkx8.3"
1109c1109
<   TCL_LIBS="$TCL_LIBS -L$(TCL_DIR)/lib"
---
>   TCL_LIBS="$TCL_LIBS -L$(TCL_DIR)/lib64"
1281c1281
<   count=`/bin/ls $d/lib/libX11* 2> /dev/null | wc -l`
---
>   count=`/bin/ls $d/lib64/libX11* 2> /dev/null | wc -l`
1283c1283
<   X11LIB="-L$d/lib"
---
>   X11LIB="-L$d/lib64"
All installations:
A typical cluster installation needs two different configurations built:
the first for the queue server, which consists of the queue server and
scheduler, and the second for the processing nodes, which need only the
"mom" component of the queue system. The steps are very similar for
each build, but the configure command is different.
1. mkdir server
2. cd server
3. ../configure --enable-docs --enable-server --disable-mom --with-tclx=/usr
--enable-gui
4. make
This builds the executables.
5. make install
This distributes the executables into the requested installation
directory, which by default is /usr/local/bin and /usr/local/sbin. You
can change the location by editing the makefile, the ProMAX
sys/exe/pbs script, and the SeisSpace workmanager scripts.
6. cd /usr/spool/PBS
7. vi server_name
The server node name is the name of the machine where the scheduler and
server will be running. We gave it the name manager. (The configure and
make steps should put the local machine name in this file and in general
you should not have to change it.)
8. vi server_priv/nodes
manager ntype=cluster np=2
node001 ntype=cluster np=2
...
node0xx ntype=cluster np=2
Note: If the server node will also be a compute node, include it in the
list; if not, list only the processing nodes.
The np=2 setting says that each node has 2 CPUs. If you are
configuring a machine with a single CPU, for example a laptop,
set np=1.
You can also optionally set queue properties in this file. For example,
you can add a name to a line to direct jobs of a specific type to
specific nodes; in this case, seisspace and promax.
node001 ntype=cluster np=2 promax:seisspace
node002 ntype=cluster np=2 promax
node003 ntype=cluster np=2 promax
node004 ntype=cluster np=2 seisspace
node005 ntype=cluster np=2 seisspace
In this example, ProMAX jobs can be set to run only on nodes 1, 2, and
3, and SeisSpace jobs only on nodes 1, 4, and 5.
You define which property to assign a job to when you queue the job
up; add the property in the queue submit menu that appears after you
select the queue name in the ProMAX UI.
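As a sketch, a job can be directed at nodes carrying one of these properties with a resource request like the following; the exact form depends on your PBS/Torque version, and the property name must match your nodes file:

```
#PBS -l nodes=1:ppn=2:promax
```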
9. vi mom_priv/config
# log all but debug events
$logevent 255
$max_load 3
$ideal_load 2
$clienthost manager
np = 2
ntype = cluster
Node a1
state = state-unknown,down
np = 2
ntype = cluster
Node e1
state = state-unknown,down
np = 2
ntype = cluster
Qmgr:
The following commands stop and then restart the server with the new
configuration. You can also change properties of the server using qmgr
without having to restart the server.
Caution: Jobs running from the queue fail if you arbitrarily restart the
server.
18. Stop the pbs_server. On Linux you can use the "killall
pbs_server" command. On Solaris, do a "ps -ef | grep pbs",
find the process ID, and use the kill command to stop the
process.
19. ./pbs_server
Setting up the compute nodes
The following steps set up the base directory where the compute nodes are
installed. In our example, you are only building the moms. This step is not
necessary if you are doing a single workstation setup, where all three
components run on the single workstation and were configured in the
previous steps.
On the queue server:
1. cd /export/d01/PBS/OpenPBS_2_3_16
2. mkdir client
3. cd client
4. ../configure --enable-docs --disable-server --enable-mom --with-tclx=/usr
5. make
Use chkconfig to start the selected daemons for initialization states 2, 3, 4,
and 5. As root, type:
chkconfig pbs on
chkconfig --list | grep pbs
pbs   3:on   4:on   5:on   6:off
# pbs
# This script will start and stop the PBS daemons
#
# chkconfig: 345 85 85
# description: PBS is a versatile batch system for SMPs and clusters
#
# Source the library functions
. /etc/rc.d/init.d/functions
chmod 755 /usr/spool/PBS/spool
# let's see how we were called
case "$1" in
start)
echo "Starting PBS daemons: "
if [ -x /usr/local/sbin/pbs_mom ] ; then
echo -n "Starting pbs_mom: "
daemon /usr/local/sbin/pbs_mom
echo
fi
;;
stop)
echo "Shutting down PBS: "
if [ -x /usr/local/sbin/pbs_mom ] ; then
echo -n "Stopping pbs_mom: "
killproc pbs_mom
echo
fi
;;
status)
status pbs_mom
;;
restart)
echo "Restarting PBS"
$0 stop
$0 start
echo "done."
;;
*)
echo "Usage: pbs {start|stop|restart|status}"
exit 1
esac
chmod 777 /usr/spool/PBS/spool
if [ -x /usr/local/sbin/pbs_mom ] ; then
echo -n "Stopping pbs_mom: "
killproc pbs_mom
echo
fi
if [ -x /usr/local/sbin/pbs_sched ] ; then
echo -n "Stopping pbs_sched: "
killproc pbs_sched
echo
fi
;;
status)
status pbs_server
status pbs_mom
status pbs_sched
;;
restart)
echo "Restarting PBS"
$0 stop
$0 start
echo "done."
;;
*)
echo "Usage: pbs {start|stop|restart|status}"
exit 1
esac
This link will cause pbs to be started when run level 3 is entered. The
number 96 in the hard link is somewhat arbitrary, but it should be in the
range of 80 to 99, so that all necessary processes will have been started by
the time you attempt to start lmgrd.
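For instance, using the 96 discussed above, the start link in /etc/rc3.d would be created like this (the S96pbs name is inferred from the text; adjust the number to your system):

```
# cd /etc/rc3.d
# ln /etc/init.d/pbs S96pbs
```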
You want to kill pbs if you change to run levels S, 0, or 1. So in the three
directories, /etc/rcS.d, /etc/rc0.d, and /etc/rc1.d, add the following link:
# ln /etc/init.d/pbs K21pbs
This should return a line of text that shows the service to be on for run
levels 3, 4, and 5:
[root@mysqlserver root]# chkconfig --list | grep mysqld
mysqld   3:on   4:on   5:on   6:off
Show the list of tables in the mysql database using the show tables
command.
mysql> show tables;
+-----------------+
| Tables_in_mysql |
+-----------------+
| columns_priv    |
| db              |
| func            |
| host            |
| tables_priv     |
| user            |
+-----------------+
6 rows in set (0.02 sec)
Display the contents of the individual tables using the select command.
mysql> select * from columns_priv;
Empty set (0.00 sec)
Note: This is the root user of MySQL. Do not confuse this with the
system root user. You are strongly advised to use different passwords
for MySQL root and system root.
For example:
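One common way to set the MySQL root password in this generation of MySQL is the mysqladmin client; the password shown here is a placeholder, not a value from this installation:

```
mysqladmin -u root password 'new_mysql_root_password'
```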
QC the contents of the user table in the mysql database. You should see
a listing similar to the one shown below, except that your server name and
password encryption entries may be different.
mysql> use mysql;
Database changed
mysql> select * from user;
This is a subset of the entire listing; there are many more columns than
shown below.
+---------------------------+--------+------------------+-------------+
| Host                      | User   | Password         | Select_priv |
+---------------------------+--------+------------------+-------------+
| localhost                 | root   | 19b32459742bbba8 | Y           |
| colfaxamd1                | root   | 19b32459742bbba8 | Y           |
| localhost                 |        |                  | N           |
| colfaxamd1                |        |                  | N           |
| colfaxamd1                | dbuser | 63630b126dc91b8f | Y           |
| colfaxamd1.denver.lgc.com | dbuser | 63630b126dc91b8f | Y           |
| %                         | dbuser | 63630b126dc91b8f | Y           |
| localhost.localdomain     | root   | 19b32459742bbba8 | Y           |
| colfaxamd1.denver.lgc.com | root   | 19b32459742bbba8 | Y           |
| %                         | root   | 19b32459742bbba8 | Y           |
+---------------------------+--------+------------------+-------------+
You may want more than one database, depending on how you want to split
up your work. Some sites have separate databases for each user; others have
separate databases for different projects. We recommend that at this time
you only add one database. After you complete the next steps of building the
data model, populating the database with the example data, and setting up
the run time environments, the easiest way to make more databases is to go
to the directory where the databases are stored, copy the database
directories, and then restart the mysql server.
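The copy step above can be sketched as follows; a scratch directory stands in for the real MySQL data directory (e.g. /usr/local/mysql/data), and the database names are placeholders. With the real directory, stop the mysql server before copying and restart it afterwards.

```shell
#!/bin/sh
# Demonstrate cloning a database by copying its directory.
# DATADIR is a scratch stand-in for the MySQL data directory.
DATADIR=$(mktemp -d)
mkdir -p "$DATADIR/seisspace1"                      # stands in for the existing database
cp -rp "$DATADIR/seisspace1" "$DATADIR/seisspace2"  # the copy becomes a new database
ls "$DATADIR"
```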
In the bin/safe_mysqld script, after the ELSE statement, edit the following
three lines, replacing /usr/local with your MySQL installation path (these
are line numbers 76-78):
MY_BASEDIR_VERSION=
ledir=
DATADIR=
If you wish to specify the location for the database files (the default
location is /usr/local/mysql/data), run the command as follows:
./scripts/mysql_install_db --ldata=/your/preferred/location
You may need to hit Enter to get the command prompt back.
12. To start MySQL when your workstation boots:
Copy support-files/mysql.server into the /etc/init.d directory. In the
/etc/rc2.d directory, create the following link:
ln -s ../init.d/mysql.server S96mysql
Show the list of tables in the mysql database using the show tables
command.
mysql> show tables;
+-----------------+
| Tables_in_mysql |
+-----------------+
| columns_priv    |
| db              |
| func            |
| host            |
| tables_priv     |
| user            |
+-----------------+
6 rows in set (0.02 sec)
Display the contents of the individual tables using the select command.
mysql> select * from columns_priv;
Empty set (0.00 sec)
mysql> select * from db;
+------+---------+------+-------------+-------------+-------------+----
| Host | Db      | User | Select_priv | Insert_priv | Update_priv | ...
+------+---------+------+-------------+-------------+-------------+----
| %    | test    |      | Y           | Y           | Y           | Y etc...
| %    | test\_% |      | Y           | Y           | Y           | Y etc...
+------+---------+------+-------------+-------------+-------------+----
2 rows in set (0.00 sec)
QC the contents of the user table in the mysql database. You should see
a listing similar to the one shown below, except that your server name and
password encryption entries may be different.
mysql> use mysql;
Database changed
mysql> select * from user;
This is a subset of the entire listing; there are many more columns than
shown below.
+---------------------------+--------+------------------+-------------+
| Host                      | User   | Password         | Select_priv |
+---------------------------+--------+------------------+-------------+
| localhost                 | root   | 19b32459742bbba8 | Y           |
| colfaxamd1                | root   | 19b32459742bbba8 | Y           |
| localhost                 |        |                  | N           |
| colfaxamd1                |        |                  | N           |
| colfaxamd1                | dbuser | 63630b126dc91b8f | Y           |
| colfaxamd1.denver.lgc.com | dbuser | 63630b126dc91b8f | Y           |
| %                         | dbuser | 63630b126dc91b8f | Y           |
| localhost.localdomain     | root   | 19b32459742bbba8 | Y           |
| colfaxamd1.denver.lgc.com | root   | 19b32459742bbba8 | Y           |
| %                         | root   | 19b32459742bbba8 | Y           |
+---------------------------+--------+------------------+-------------+
Bye
18. Restart the mysqld daemon:
cd /usr/local/mysql
./support-files/mysql.server stop
./support-files/mysql.server start
You may want more than one database, depending on how you want to split
up your work. Some sites have separate databases for each user; others have
separate databases for different projects. We recommend that at this time
you only add one database. After you complete the next steps of building the
data model, populating the database with the example data, and setting up
the run time environments, the easiest way to make more databases is to go
to the directory where the databases are stored, copy the database
directories, and then restart the mysql server.
Enter the SQL server name and the SQL root password (not the system root
password), and then click Test Connection to Add. The server name and a
list of existing databases appear in the lower part of the window. This list
will only contain test if this is a new MySQL setup. If you are using an
existing MySQL instance set up for ProMANAGER, then the ProMANAGER
database appears in the list as well.
You CANNOT use an existing ProMANAGER database directly, since the
data model has changed between ProMANAGER and the SeisSpace
Flow Builder/Replicator. Database migration and translation routines will
be available in the future.
You need to add one (or more) user database(s) by entering a name in the
SQL Database dialog, selecting the Create option, and then executing the
action.
1. Select Create and Execute Action.
This generates some printout in the console from which the
SeisSpace Client was started. It lists the tables that were created and
indicates that database creation is complete.
[ssuser@dangmpro seisspace_class]$ Creating table = [About].
Database revision = [$Revision: 1.6 $]
Creating table = [Project].
Creating table = [Project_Parm].
Creating table = [Replica].
Creating table = [Replica_Parm].
Creating table = [Replica_Float].
Creating table = [Replica_Integer].
Creating table = [Replica_String].
Creating table = [Job].
Create database tables complete
Create database complete
2. Associate the user database with the ProMAX and SeisSpace projects.
Switch to the SeisSpace Data Home and ProMAX Data Home tabs.
Select the Project Database, the SQL server, and the SQL database.
Click Associate SQL Database with Project Database.
4. In the Navigator, select the ProMAX project, click MB3 on the
project name, and select the Properties option. (Flow replication is
not supported in SeisSpace projects.) Select the SQL tab and enter
the database user name and password for access to the database.
This must be done by each user independently, for security purposes.