
IBM DB2 Information Integrator

Data Mapper Guide for Classic Federation and Classic Event Publishing

Version 8.2

SC18-9163-02
Before using this information and the product it supports, be sure to read the general information under “Notices” on page 105.

This document contains proprietary information of IBM. It is provided under a license agreement and is
protected by copyright law. The information contained in this publication does not include any product
warranties, and any statements provided in this manual should not be interpreted as such.
You can order IBM publications online or through your local IBM representative:
v To order publications online, go to the IBM Publications Center at www.ibm.com/shop/publications/order
v To find your local IBM representative, go to the IBM Directory of Worldwide Contacts at
www.ibm.com/planetwide
When you send information to IBM, you grant IBM a nonexclusive right to use or distribute the information in any
way it believes appropriate without incurring any obligation to you.
© Copyright International Business Machines Corporation 2003, 2004. All rights reserved.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract
with IBM Corp.
© CrossAccess Corporation 1993, 2003.
Contents

Chapter 1. Introduction to Data Mapper  1

Chapter 2. User Reference  3
  Introduction to the Data Mapper User Reference  3
  Data Mapper buttons  3
  Data Mapper menus  4
    File Menu  5
    Edit Menu  6
    Window  7
    Help Menu  9

Chapter 3. CA-Datacom tutorial  11
  Introduction to the CA-Datacom tutorial  11
    How this CA-Datacom tutorial works  11
    Getting started mapping CA-Datacom data  11
    Mapping CA-Datacom data  11
    Exercise 1: Creating a CA-Datacom repository  11
    Exercise 2: Adding owners to a CA-Datacom repository (optional)  12
    Exercise 3: Creating a CA-Datacom data catalog  13
    Exercise 4: Creating a CA-Datacom table  13
    Exercise 5: Creating CA-Datacom columns (optional)  14
    Exercise 6: Importing a Copybook for CA-Datacom tables  16
    Exercise 7: Defining a CA-Datacom record array (Optional)  19
    Exercise 8: Generating CA-Datacom metadata grammar  21
    Exercise 9: Creating a relational view for CA-Datacom data  23

Chapter 4. CA-IDMS tutorial  25
  Introduction to CA-IDMS tutorial  25
    How this CA-IDMS tutorial works  25
    Getting started mapping CA-IDMS data  25
    Mapping CA-IDMS data  25
    Exercise 1: Creating a CA-IDMS repository  25
    Exercise 2: Adding owners to a CA-IDMS repository (optional)  26
    Exercise 3: Creating a CA-IDMS data catalog  27
    Exercise 4: Loading CA-IDMS schema for reference  27
    Exercise 5: Creating a CA-IDMS table  29
    Exercise 6: Creating CA-IDMS columns (optional)  30
    Exercise 7: Importing a schema Copybook for CA-IDMS tables  31
    Exercise 8: Generating CA-IDMS metadata grammar  34
    Exercise 9: Creating a relational view of CA-IDMS data  36

Chapter 5. IMS tutorial  37
  Introduction to IMS tutorial  37
    How this IMS tutorial works  37
    Getting started mapping IMS data  37
    Mapping IMS data  37
    Exercise 1: Creating an IMS repository  37
    Exercise 2: Adding owners to an IMS repository (optional)  38
    Exercise 3: Creating an IMS data catalog  39
    Exercise 4: Loading DL/I DBDs for reference  39
    Exercise 5: Creating an IMS table  41
    Exercise 6: Creating IMS columns (optional)  42
    Exercise 7: Importing a Copybook for IMS tables  45
    Exercise 8: Creating, updating, or deleting an IMS Index (optional)  49
    Exercise 9: Defining an IMS record array (Optional)  51
    Exercise 10: Generating IMS metadata grammar  52
    Exercise 11: Creating a relational view of IMS data  55

Chapter 6. Sequential tutorial  57
  Introduction to the Sequential tutorial  57
    How this Sequential tutorial works  57
    Getting started mapping Sequential data  57
    Mapping Sequential data  57
    Exercise 1: Creating a Sequential repository  57
    Exercise 2: Adding owners to a Sequential repository (optional)  58
    Exercise 3: Creating a Sequential data catalog  59
    Exercise 4: Creating a Sequential table  59
    Exercise 5: Creating Sequential columns (optional)  60
    Exercise 6: Importing a Copybook for Sequential tables  62
    Exercise 7: Defining a Sequential record array (Optional)  66
    Exercise 8: Generating Sequential metadata grammar  67
    Exercise 9: Creating a relational view for Sequential data  69

Chapter 7. VSAM tutorial  71
  Introduction to the VSAM tutorial  71
    How this VSAM tutorial works  71
    Getting Started mapping VSAM data  71
    Mapping VSAM Data  71
    Exercise 1: Creating a VSAM repository  71
    Exercise 2: Adding owners to a VSAM repository (optional)  72
    Exercise 3: Creating a VSAM data catalog  73
    Exercise 4: Creating a VSAM table  73
    Exercise 5: Creating VSAM columns (optional)  75
    Exercise 6: Importing a Copybook for a VSAM table  77
    Exercise 7: Creating, updating, and deleting a VSAM index (optional)  80
    Exercise 8: Defining a VSAM record array (Optional)  82
    Exercise 9: Generating VSAM metadata grammar  83
    Exercise 10: Creating a relational view for VSAM data  86

Appendix. Metadata grammar reference  87
  Overview  87
  USE TABLE statement syntax  87
  Supported data types  88

DB2 Information Integrator documentation  95
  Accessing DB2 Information Integrator documentation  95
  Documentation about replication function on z/OS  97
  Documentation about event publishing function for DB2 Universal Database on z/OS  98
  Documentation about event publishing function for IMS and VSAM on z/OS  98
  Documentation about event publishing and replication function on Linux, UNIX, and Windows  99
  Documentation about federated function on z/OS  100
  Documentation about federated function on Linux, UNIX, and Windows  100
  Documentation about enterprise search function on Linux, UNIX, and Windows  102
  Release notes and installation requirements  102

Notices  105
  Trademarks  107

Index  109

Contacting IBM  111
  Product information  111
  Comments on the documentation  111
Chapter 1. Introduction to Data Mapper
The Data Mapper is a Microsoft® Windows® application that automates many of
the tasks required to create a typical relational table (an IBM® DB2 Universal
Database™ for z/OS® table for example) from nonrelational data structures. It
accomplishes this by creating metadata grammar from existing nonrelational data
definitions (COBOL copybooks, CA-IDMS schema and subschema definitions, and
IMS DBDs). The metadata grammar is used as input to the metadata utility to
create metadata catalogs that define how the nonrelational data structure is
mapped to an equivalent logical table. The metadata catalogs are used by a data
server to facilitate translation of the data from the nonrelational data structure into
relational columns.

The Data Mapper import utilities create initial logical tables from COBOL
copybooks, IMS™ DBD source, CA-IDMS schemas, and CA-IDMS subschemas. You
then use the graphical user interface to refine these initial logical tables to create as
many views of your nonrelational data as your facility requires.
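
For example, the following COBOL copybook fragment (a hypothetical structure, shown only to illustrate the kind of definition the import utilities read) describes a record that could be imported as the starting point for a logical table:

   01  CUSTOMER-RECORD.
       05  CUST-ID          PIC X(8).
       05  CUST-NAME        PIC X(30).
       05  CUST-BALANCE     PIC S9(5)V99 COMP-3.

Importing this structure would typically yield one column per elementary data item, for example a CHAR(8) column for CUST-ID, a CHAR(30) column for CUST-NAME, and a decimal column for CUST-BALANCE, which you can then refine in the graphical user interface.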

The Data Mapper functions include:


v Creating data catalogs. A data catalog is a collection of tables for a particular
nonrelational database type, such as VSAM or IMS DB.
v Creating a table. You create a relational table by mapping one or more data
structures from a nonrelational data structure into a single DB2® Information
Integrator Classic Federation for z/OS or DB2 Information Integrator Classic
Event Publisher table, referred to as a logical table.
v Creating a column (optional). A column can represent one or more data items in
the corresponding nonrelational data structure. You can use the following
methods to define columns:
– Import COBOL copybooks
– For CA-IDMS, import schema and subschema definitions
– Create columns manually
v Importing a copybook. A copybook refers to a COBOL copybook that is
transferred from the mainframe to the workstation for the Data Mapper to use.
Importing COBOL copybooks automatically creates column definitions.
v Loading DBDs for reference (IMS only). DBD refers to database definitions that
are transferred from the mainframe to the workstation for the Data Mapper to
use. This allows the Data Mapper to use the information as a reference when
creating relational data. The Data Mapper does not store IMS DBD source, so
you must reload the source each time you open a repository.
v Loading schemas and subschemas for reference (CA–IDMS only). A schema or
subschema refers to database definitions that are transferred from the mainframe
to the workstation for the Data Mapper to use. This allows the Data Mapper to
use the information as a reference when creating relational data. The Data
Mapper does not store CA–IDMS source, so you must reload the source each
time you open a repository.
v Generating metadata grammar. The Data Mapper generates metadata grammar,
also known as USE statements or USE grammar, for all of the tables in a specific
data catalog. After metadata grammar has been created, it must subsequently be
transferred from the workstation to the mainframe. The metadata grammar is
supplied as input to the metadata utility that runs on the mainframe. The
metadata utility uses the contents of the metadata grammar to create logical
tables. Client applications use logical tables, which are nonrelational-to-relational mappings, for SQL access to nonrelational data.
v File transfer of data from the workstation to the mainframe. The Data Mapper
facilitates file transfers from the workstation to the mainframe through its
built-in FTP facility. You can use the built-in FTP to transfer copybooks or DBD
source and generated metadata grammar.
v Creating relational views. By transferring the metadata grammar to the host
system and running the metadata utility with this input, you create a relational
view from nonrelational data. This relational view is ultimately used by a server
to enable SQL access to the mapped nonrelational data structure.
v Creating indexes. A logical index is an SQL index definition that maps to an
existing physical index on a target database or file system, such as VSAM or IMS DB.

The tutorials in this guide include step-by-step information on how to use the Data
Mapper to map nonrelational data to a relational view.

Chapter 2. User Reference
Introduction to the Data Mapper User Reference
This chapter describes all of the Data Mapper icons, windows, and menus. For
information on how to use the Data Mapper, see the tutorial chapters. For more
information on specific menus or window, see the online help, available from the
Data Mapper Help menu.

To start the Data Mapper from the Windows Start menu, click Programs – IBM
DB2 Information Integrator Classic Tools – Data Mapper.

Data Mapper buttons


The Data Mapper toolbar contains the following buttons.

Note: The status bar at the bottom of the window gives a one-line description of
the button or menu when you move the cursor over it.

The Open Repository button opens repositories.

The Close Repository button closes repositories. This button is available
from a data catalog window with any data catalog open.

The Import External File button is used to import external files into a table.
This button is only available from the Tables for Data Catalog window.

The Generate USE Statements for a Data Catalog button is used to
generate metadata input for the data catalog. The metadata input is used with the
mainframe Metadata Utility. This button is available from a data catalog window
with any data catalog open.

The Exit Data Mapper icon is used to exit the Data Mapper utility. This
button is available from any window.

The Create New button is available from the data catalog, table, column,
index, or owner windows. When you click this button while on one of these
windows, a corresponding window is presented to create a new data catalog, table,
column, index, or owner. For example, if you click the Create a New... button from
the data catalog window, a Create Data Catalog window displays.

The Delete Selected button is available from the data catalog, table, column,
index, or owner windows. When you click this button while on one of these
windows, a corresponding window is presented to delete the selected data catalog,
table, column, or owner. For example, if you click the Delete the Selected...
button from the data catalog window, a Delete Data Catalog window displays.

The Move a Table Column Up button is available from the Columns for
Table window only. You click the button to move a column up in the column list.

The Move a Table Column Down button is available from the Columns for
Table window only. You click the button to move a column down in the column
list.

The Owners button is available from the Tables or Data Catalog windows.
You click the button to get a list of owners for the data catalog or table.

The Tables button is available from the Data Catalog window. You click this
button to list the tables associated with a particular data catalog.

The Columns button is available from the Tables for Data Catalog window.
You click this button to list the columns in a particular table.

The Index button is available from the Tables for Data Catalog window. You
click this button to list the indexes in a particular table.

The Help button is available from any of the Data Mapper windows. You
click this button to launch the help for the Data Mapper.

Note: Help is available from any window in the Data Mapper by pressing F1.

Data Mapper menus


This section describes all of the menus available in the Data Mapper.

The Data Mapper toolbar is shown here.

File Menu
The File menu is shown here and is described in the following sections.

New Repository...
Selecting New Repository... creates a new repository in the Data Mapper. A
repository contains the data catalogs, tables, columns, indexes and owners. For
instructions on creating a new repository, see the tutorials.

Open Repository...
Open Repository... allows you to open an existing Data Mapper repository. When
you click Open Repository..., a list of existing repositories appears. Click a
repository from the list and click OK. The repository opens.

Close Repository
Close Repository closes the current repository.

Close All Repositories


Close All Repositories closes any repositories that are open.

Import External File...


Import External File... allows you to import a file into a table. This option is
available from the Tables window or the Columns window. When you click Import
External File..., the Import File window appears and you can import a file locally
or remotely. You use this option most frequently to import copybook files, which
you then use to generate metadata grammar.

Generate USE Statements...


The Generate USE Statements... menu option allows you to automatically
generate metadata grammar for a particular data catalog. To generate metadata
statements, open a repository, select a data catalog, and select Generate USE
Statements... For more information, see the tutorials.

Note: Although Data Mapper allows you to select more than one data catalog
from the data catalog window, metadata input is only generated for the first
data catalog of those selected.

Load DL/I DBD for Reference...


Load DL/I DBD for Reference... This option allows you to load IMS or DL/I Data
Base Definitions (DBDs) to use as a reference when building IMS tables and
generating metadata input. This option is available from any window.

Note: Although the Data Mapper will allow you to load IMS or DL/I DBDs into
non-IMS environments, it will only reference the DBDs when mapping IMS
or DL/I data.



Load CA-IDMS Schema for Reference...
Load IDMS Schema for Reference... This option allows you to load CA-IDMS
schema to use as a reference when building tables and generating metadata input.
This option is available from any window.

Note: Only one schema can be loaded for reference at a time.

Exit
Selecting Exit from the File menu closes the Data Mapper application.

Edit Menu
The Edit menu is described in the sections that follow. This menu is available
when a repository is opened.

Create a New...
Create a New... creates a data catalog in the current repository, a table in the
current data catalog, a column in the current table, or an index in the current table
depending on what action is appropriate for the window you are viewing. For
example, if you have a repository open, the option is Create a new Data Catalog...
For more information on creating data catalogs, tables, and columns, see the
tutorials.

Create a Record Array...


The Create a Record Array... option defines a repeating group of one or more
columns in the column list. This option is only available from the Columns for
Data Catalog window.

Delete the Selected...


The Delete the Selected... option allows you to delete a data catalog in the current
repository, a table in a data catalog, or a column in a table. The option will change
depending on which window you are viewing. For example, if you have a data
catalog open, the option is Delete the selected Data Catalog. For more information
on deleting data catalogs, tables, and columns, see the tutorials.

Move Column Up
The Move Column Up option moves a column up one row in the column list. This
option is only available from the Columns for Data Catalog window.

Move Column Down


The Move Column Down option moves a column down one row in the column
list. This option is only available from the Columns for Data Catalog window.

Window
The Window menu controls how the Data Mapper windows appear on your
screen. The Window menu options are described in the sections that follow.

Cascade
The Cascade option displays open Data Mapper windows in a layered fashion, as
shown in the following example.

Tile
The Tile option displays open Data Mapper windows in a tiled fashion, either
horizontally or vertically. The horizontal option is shown first, followed by the
vertical option.

Arrange Icons
The Arrange Icons selection arranges icons in a row at the bottom of the Data
Mapper window.

List Tables
The List Tables menu item is available from the Data Catalog window and lists all
of the tables for the selected data catalog.

List Indexes
The List Indexes menu item is available from the Tables for Data Catalog window
and lists all of the indexes for the selected table.

List Columns
The List Columns menu item is available from the Tables window and lists all of
the columns for the selected table.

List Owners
The List Owners menu item is available from the Data Catalog or Tables windows
and lists all of the owners for the selected data catalog.

Help Menu
The Help menu provides online help information for Data Mapper.

The menu selections are described in the following sections.

Contents
The Contents menu item lists the contents of the online help system.

Search for Help on...


The Search for Help On... menu item allows you to search for Help on a specific
topic, function, or operation. Selecting this item displays a search window, in
which you can specify search terms. Click OK to begin a search.

About Data Mapper


Click About Data Mapper... to display the current release level and system
information for Data Mapper.

Chapter 3. CA-Datacom tutorial
Introduction to the CA-Datacom tutorial
The Data Mapper tutorial helps first-time users become more familiar with how
the Data Mapper operates. At the completion of this tutorial, you will be able to
use the Data Mapper to map nonrelational data to a relational view that you can
see using your system’s front-end tool.

How this CA-Datacom tutorial works


This tutorial is a series of steps that create a relational view from nonrelational
data. Each step is followed by a window or menu that shows you how the step is
performed. For a complete description of all the Data Mapper menus, windows,
and fields, see Chapter 2, “User Reference,” on page 3.

In addition to this tutorial, the Data Mapper includes an online help system that
describes how to use it. To launch the Help, pull down the Help menu or press F1.

Getting started mapping CA-Datacom data


To start the Data Mapper from the Windows Start menu:
1. Click Programs – IBM DB2 Information Integrator Classic Tools – Data
Mapper.

Mapping CA-Datacom data


Exercises 1 through 9 describe how to use the Data Mapper to map CA-Datacom
data to a relational view.

Exercise 1: Creating a CA-Datacom repository


The first step to mapping your nonrelational data to a relational view is to create a
repository.

A repository stores information (data catalogs, tables, columns, indexes, and
owners) about the legacy data that Data Mapper is mapping.
To create a repository:
1. From the File menu, choose New Repository.
The Create a New Repository window appears.



2. Enter a file name and location for your repository in the Create a New
Repository window. You must assign an .mdb file extension to all Data Mapper
repository files.

Note: Repository names should have a meaning for your particular site. For
example, you may want to name your repository the same name as the
database you are mapping into the repository.
3. Click Save to create the repository.
The new repository you created appears. This is an empty repository. You will
add data catalogs to the repository in “Exercise 3: Creating a CA-Datacom data
catalog” on page 13.

You have completed Exercise 1.

To create a data catalog, skip to “Exercise 3: Creating a CA-Datacom data catalog”
on page 13. To add an owner to a repository, continue on to “Exercise 2: Adding
owners to a CA-Datacom repository (optional)” on page 12.

Exercise 2: Adding owners to a CA-Datacom repository (optional)
Owners are authorized IDs for tables. When qualifying a table in SQL, the format
is as follows:
owner.tablename

If an owner is not assigned to a table, then the z/OS TSO ID that runs the
metadata utility becomes the owner of the table in z/OS.
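
For example, if the owner CAC is assigned to a table named CUSTOMER (hypothetical names, used here only for illustration), an SQL statement references the table with the qualified name:

   SELECT * FROM CAC.CUSTOMER;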
To add owners to a repository:
1. If a repository is not currently open, open one.
2. From the Window menu, choose List Owners.
If owners exist, a list of Owner Names appears. If no owners are defined for
this repository, the list will be empty.
3. From the Edit menu, choose Create a new owner...
4. Enter the owner name and remarks.
5. Click OK to add the owner.

The owner name is included in the list of owners for that repository. To view
this list, click List Owners... from the Window menu.
Repeat Steps 1 through 5 to add additional owners.
6. Minimize or close the Owners window.

This completes Exercise 2.

Exercise 3: Creating a CA-Datacom data catalog


To create a data catalog for the newly-created repository:
1. If a repository is not open, open one.
2. From the Edit menu, choose Create a new Data Catalog.
3. Enter the data catalog name, click its type, and add any remarks.
4. Click OK to create the data catalog.
The data catalog now appears in your repository.
Repeat Steps 2 through 4 to add additional data catalogs.

You have completed Exercise 3.

To create a table for this data catalog, continue on to “Exercise 4: Creating a
CA-Datacom table.”

Exercise 4: Creating a CA-Datacom table


You can create a logical table for CA-Datacom data that is equivalent to a DB2
Universal Database for z/OS table created by mapping one or more record types
from the nonrelational database into a single table.
To add tables to a data catalog:
1. If a repository is not open, open one.
2. To select a data catalog, click on the number to the left of the data catalog
name. This highlights the selected row.
3. From the Window menu, choose List Tables to list tables for the data catalog.
4. From the Edit menu, choose Create a new table...
The Create Datacom Table window appears.

5. To create a CA-Datacom table:


a. Enter the table name in the Name field.

b. Select an owner from the Owners list box.
c. Enter the name of the CA-Datacom table, such as CUST.
d. Enter a value for the Status/Version, such as PROD.
e. Enter the name of the URT module that will be used to access CA-Datacom.
In this case, it would be CACDCURT.
When the CA-Datacom data source CAC.CUSTDCOM is accessed with an
SQL statement, the CUST CA-Datacom table is opened.
f. (Optional) Check the Reference Only check box if the table you are creating
will be used for reference purposes only.
The reference table is used to build large column lists to populate other
tables. These reference tables are not generated into the data catalog’s
metadata input when metadata generation is requested.
This option is particularly useful when creating tables with hundreds of
columns, as you can drag and drop to copy columns between windows.
g. Enter any remarks in the Remarks field.
6. Click OK to create the table.

The table is now listed on the Datacom Tables for Data Catalog window for this
data catalog.

Repeat Steps 2 through 6 to add additional tables to the data catalog.

You have completed Exercise 4.

To define a column for the table, continue on to “Exercise 5: Creating CA-Datacom
columns (optional)” on page 14. To import a copybook, skip to “Exercise 6:
Importing a Copybook for CA-Datacom tables” on page 16.

Exercise 5: Creating CA-Datacom columns (optional)


You can create columns for CA-Datacom data that are equivalent to columns in a
DB2 UDB for z/OS table. Adding columns to a table in a data catalog is analogous
to adding columns to a logical table. The column in the data catalog can represent
one or more data items in the corresponding nonrelational table. A logical table
must contain at least one column definition.
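
As a sketch of how the pieces fit together, assume a CA-Datacom field that holds 30 bytes of character data at offset 8 in the record (hypothetical values). The corresponding column definition would be:

   Name:               CUST_NAME
   Offset:             8
   Length:             30
   Datacom Datatype:   character
   SQL Usage Datatype: CHAR(30)

Zoned decimal fields can instead be mapped to either a character or a decimal SQL data type, as noted in the steps that follow.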

This exercise shows you how to manually add columns to a data catalog. You do
not have to add columns manually for them to appear in the data catalog.
Importing a copybook automatically creates columns. See “Exercise 6: Importing a
Copybook for CA-Datacom tables” on page 16, for more information about the
recommended method of creating columns by importing a copybook.
To manually add a column:
1. Select a table by clicking on the number to the left of the Table Name. This
highlights the selected row.
2. From the Window menu, choose List Columns.
The Columns for Datacom Table for this table appears.
3. From the Edit menu, choose Create a new Column....

The Create Datacom Column window appears.

4. To create the column:


a. Enter a 1 to 18 character column name in the Name field.
b. Enter the offset of the CA-DATACOM/DB field in the Datacom Record
Information Offset field.
c. Enter the length of the CA-DATACOM/DB field in the Datacom Record
Information Length field.
d. Select the CA-DATACOM/DB data type from the Datacom Record
Information Datatype drop-down list box. When selecting a native data
type for a CA-Datacom column, the SQL data type associated with the
selected data type is automatically set.
e. Select an SQL data type for the column from the SQL Usage Datatype
drop-down list box. You can map zoned decimal data to either a character
or decimal SQL data type. When selecting an SQL data type for a
CA-Datacom column, the length or scale of the data type is automatically
set from the length of the native data type, if defined.

Note: The n in the SQL Data Type CHAR(n) must be replaced by a number,
such as CHAR(8).
f. To create a nullable column, enter a value in the Null is field to delineate
null, such as 000.
g. Enter the name of a conversion exit in the SQL Usage Conversion Exit
field.
h. Enter any remarks in the Remarks field.
5. Click OK.
The column is created and displays in the column list when you view the
Columns for Datacom Table window.
6. Close the Columns for DATACOM Table window.

Repeat Steps 1 through 5 to add additional columns to the table.

To update the entry for a column in the table, double-click on the number to the
left of the column name. The Update Datacom Column window appears, allowing
you to update the column name, CA-Datacom record information, SQL usage
information, and remarks.



You can also copy one or more columns between tables if the two tables have the
same data catalog type. Generally, copying is between reference tables and other
tables.
To copy one or more columns between two tables:
1. Select the source table.
2. From the Window menu, choose List Columns....
3. Select the target table.
4. From the Window menu, choose List Columns....
5. Position the two column list windows side by side.
6. Select one or more columns to copy by clicking in the line number column for
the column to be copied. To select a block of columns, click in the line number
of the first column to be copied and hold down the left mouse button until you
reach the last column you want to copy.
7. Click again on the selected block and drag the columns to the target column
list window. The mouse cursor will change to the Drag icon to indicate that
you are in column drag mode. If the drag cursor does not appear, start the
process again after ensuring that both the source and target column list
windows are visible.
8. Release the mouse button to complete the copy.

To simplify dragging and dropping columns, minimize all open windows except
the source and target column windows and then use the Tile option from the
Window menu.

Note: Data Mapper automatically enters drag mode when two or more column
lists are visible at a time and a block of columns is selected. If you are
editing a column list and do not want the list to switch to drag mode, close
all column lists except the one you are editing.

This completes Exercise 5.

Exercise 6: Importing a Copybook for CA-Datacom tables


This exercise describes how to import a COBOL copybook. Copybooks are
transferred from the mainframe to the workstation and must be given a file
extension of .fd.
To import a COBOL copybook:
1. If a repository is not open, open one.
2. From the Window menu, choose List Tables....

Note: You may have to close the Columns and Datacom Tables windows first
to reactivate these options.
3. Select the table you want to import the copybook into by clicking on the
number to the left of the table name.
4. From the File menu, choose Import External File....

The Import File window appears.

Note: When importing files, be sure to use a file extension of .fd, such as
caccus.fd, or Data Mapper will not recognize the file.
Continue on to Step 5 if you are importing a copybook from your hard drive.
Skip to Step 6 if you are importing a copybook from a remote location.
5. Select a copybook to import from the Data Mapper samples folder and click
OK.
The Import Copybook window appears.

Skip to Step 12.


6. Click the Remote button on the Import File window to import a copybook
from the FTP site.
The FTP Connect window appears.

7. Enter information in the following fields:



a. Host Address: IP address or Host name of the remote machine.
b. Port ID: Valid FTP Port ID.
c. User ID: Valid user ID on the remote machine.
d. User Password: Valid user password for the user ID on the remote
machine.
8. Click the Connect button.
In z/OS, the Host panel appears.
9. Enter the following information in the Host panel:
a. The Working Directory field displays the working directory for the current
data sets list. To change the working directory, enter a new high-level
qualifier, enclosed in single quotes (‘ ’). The default specification is the
logon user ID entered on the FTP Connect window.

Note: To navigate back to a previous data set list, double-click the .. entry.
b. The Datasets drop-down list box contains a directory listing based on the
current working directory specification. Choose from the list of data sets
for the Remote File transfer. Names with an asterisk (*) have member
names. Double-click on the asterisk (*) to select a member list. It will
appear in the Members listbox.
After a data set is selected, it appears in the Remote File field.
10. Select the member name (if applicable) or type a new member name, enclosed
in parentheses, to be included in the Remote File field after the data set name,
such as ’CAC.INSTALL.SCACSAMP(CACUSFD)’
The Remote File field contains the full data set name ready for FTP. This
name is based on input in the Datasets and Members fields or an
explicitly-specified qualified data set name.
11. Click the Transfer button.
The Import Copybook window displays.

12. Select the table that you want to import the copybook into and select the
Import Options.

Note: This action was already completed in Step 3 (default selected) unless
you want to make a change using the drop-down list.
The import options include:

Import Group Level Data Items: Creates a column for each COBOL data item
that is a group level item. Group level items are items without picture clauses
that contain subordinate data items with higher level numbers.
Import Selected Structure Only: Since most copybooks contain more than one
record or field definition, you can select a particular structure to import from
an existing copybook by clicking on the data item at which to start the import,
then selecting the Import Selected Structure Only check box. When structure
selection is used, the selected data item and all subordinate data items
(following data items with higher level numbers) are imported. The data item
selected can exist at any level in the structure.
OCCURS Clauses (see the example after these steps):
v Create Record Array: Defines a record array for data items within OCCURS
clauses in the copybook.
v Expand each occurrence: Creates a column for each occurrence of a data
item within the copybook. Data item names within the OCCURS clause are
suffixed with _1, _2, ..._n.
v Map first occurrence only: Creates a column for the first occurrence of a
data item within the OCCURS clause only.
Append to Existing Columns: Adds the copybook columns to the bottom of
the list of existing columns in that table. Not selecting this option deletes all
existing columns and replaces them with the columns you are now importing.
Calculate Starting Offset: Use this option to append to existing columns in a
table. This allows the starting offset of the first appended column to be
calculated based on the columns already defined in the table. When selected,
the first appended column will be positioned at the first character position
after the last column (based on offset and length already defined for the
table).
Use Offset: When you have an explicit offset to be used for the first column
imported and it does not match the field’s offset in the copybook structure,
enter an offset in this field to override the default calculation based on the
COBOL structure. If you do not override the default, the offset for the first
imported column is determined by the COBOL field’s offset in the structure
you are importing.

Note: By default, the offset of the first COBOL data item imported is based on
the data item’s position in all of the structures defined in the import
file. This offset will always be zero unless you are importing a selected
structure from the copybook. In that case, the offset for the first column
imported from the structure will be the COBOL data item’s position
based on all structures that precede it in the import file. If the default
offset is not correct, then the Calculate Starting Offset or Use Offset
options can be used to override the default.
13. Click Import to import the copybook to your table.
The Columns for Datacom Table window displays with the newly-imported
columns.
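
To make the three OCCURS options concrete, consider a copybook fragment such as the following (hypothetical field names):

   05  MONTHLY-TOTALS OCCURS 12 TIMES.
       10  MONTH-AMT    PIC S9(5)V99 COMP-3.

With Expand each occurrence, the import creates twelve columns (for example, MONTH_AMT_1 through MONTH_AMT_12). With Map first occurrence only, it creates a single column for the first occurrence. With Create Record Array, it defines a record array of up to 12 occurrences, so that each occurrence can be returned as a separate result row.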

Repeat Steps 2 through 12 to import additional copybooks. This completes Exercise
6. To define a record array, continue on to Exercise 7.

Exercise 7: Defining a CA-Datacom record array (Optional)


A record array is one or more columns that occur multiple times in a single
database record.



When deciding whether or not to define a record array, first review the copybook
you are importing for OCCURS clauses. In most cases, there should be no more
than one record array in a mapped table to keep it from being unwieldy. If the
copybook contains more than one OCCURS clause, then either skip creating record
arrays or generate the arrays and remove the unwanted ones after importing the
copybook.
To define a record array:
1. From the Import Copybook window, click the Create Record Array option and
click Import. If the imported copybook contains an OCCURS clause, the record
array will automatically be created during import.
2. To create a record array from the columns window, select the column or
columns to include in the record array.
3. From the Edit menu, Click Create a Record Array... .
The Create a Record Array window appears.

The fields on the Create a Record Array window are as follows:


v First Column in Array: (Required field) Identifies the start of the record
array in the database record.
v Last Column in Array: (Required field) Identifies the end of the record array
in the database record.
v Offset of Array in Parent: (Required field) Defines the starting offset of the
array based on either the beginning of the record or the beginning of a
parent record array.
v Length of a Single Occurrence: (Required field) Defines the internal record
length of each occurrence of the array.
v Max Number of Occurrences: (Required field) Defines the maximum
number of occurrences that can exist in the database record.
v NULL Occurrence Rule: Defines conditions under which an occurrence in
the array is to be considered null and not returned as a result row in select
clauses.
v NO NULL Occurrences: Returns all occurrences of the array in the result set.
v Count is in Column: Number of valid occurrences in the array is kept in the
column identified by the required column name attribute defined with this
rule.
v NULL is Value: Identifies a comparison value to be used at runtime to
determine if an occurrence of the array is null.

v Repeat Value for Length of Compare: Repeats the comparison value for the
length of the null compare.
v Compare Column: (Optional field) Identifies where in the array to do the
null comparison.

Note: For more information on record arrays, see the Data Mapper Help.
4. Click OK to create the record array.
The Columns for Datacom Table window appears with the record array data
you created.
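
Continuing the hypothetical OCCURS 12 TIMES example from Exercise 6, each COMP-3 occurrence of PIC S9(5)V99 occupies 4 bytes, so if the array starts at offset 20 in the record (an assumed value), the required fields might be completed as follows:

   First Column in Array:         MONTH_AMT
   Last Column in Array:          MONTH_AMT
   Offset of Array in Parent:     20
   Length of a Single Occurrence: 4
   Max Number of Occurrences:     12

A NULL Occurrence Rule such as NULL is Value X'00' with Repeat Value for Length of Compare could then suppress empty occurrences from the result set.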

This completes Exercise 7. To generate CA-Datacom metadata grammar, continue
on to Exercise 8.

Exercise 8: Generating CA-Datacom metadata grammar


This exercise describes the steps required to create metadata grammar. Metadata
grammar, also known as USE grammar, is generated by the Data Mapper for all of
the tables in a specific data catalog. When metadata grammar has been created, it
must subsequently be transferred from the workstation to the mainframe. The
metadata grammar is supplied as input to the metadata utility that runs on the
mainframe. The metadata utility uses the contents of the metadata grammar to
create logical tables. Client applications use logical tables, which are
nonrelational-to-relational mappings, for SQL access to nonrelational data.
To create metadata grammar:
1. From the Data Catalog window, select a data catalog.
2. From the File menu, choose Generate USE Statements....
The Generate USE Statements window appears.

Continue on to Step 3 if you are generating USE statements on your hard
drive. Skip to Step 5 if you are generating USE statements to send to a remote
location.
3. Give the file a name, using use as the file extension, such as cusdcom.use.
A window appears, asking if you want to view the newly-created script.
4. Click YES to display the USE statement script.

Note: Before each table definition in the metadata grammar file, a DROP table
statement is generated. If a duplicate table exists in the metadata
catalogs, the DROP table statement deletes the table and any indexes,
views, and table privileges associated with the table. The
newly-generated USE statement creates the new table.

If necessary, you can edit this file directly from the Notepad where it appears.
Repeat the previous steps to generate additional USE Statements. Then, skip
to the end of this set of steps.
5. Click Remote on the Generate USE Statements window to generate USE
statements to send to a remote location.
The FTP Connect window appears.

6. Enter information in the following fields:


a. Host Address: IP address or Host name of the remote machine.
b. Port ID: Valid FTP Port ID.
c. User ID: Valid user ID on the remote machine.
d. User Password: Valid user password for the user ID on the remote
machine.
7. Click Connect.
In z/OS, the Host panel appears.

8. Enter the following information in the host panel:
a. The Working Directory field displays the working directory for the current
data sets list. To change the working directory, enter a new high-level
qualifier, enclosed in single quotes (‘ ’). The default specification is the
logon user ID entered on the FTP Connect window.

Note: To navigate back to a previous data set list, double-click the .. entry.
b. The Datasets listbox contains a directory listing based on the current
working directory specification. Choose from the list of data sets for the
Remote File transfer. Names with an asterisk (*) have member names.
Double-click on the asterisk (*) to select a member list. It will appear in
the Members listbox.
After a data set is selected, it appears in the Remote File field.
9. Select the member name (if applicable) or type a new member name, enclosed
in parentheses, to be included in the Remote File field after the data set name,
for example: ’USER.GRAMMAR(CUSDCOM)’
The Remote File field contains the full data set name ready for FTP. This name
is based on input in the Datasets and Members list boxes or an explicitly
specified qualified data set name.
10. Click Transfer.
The file is transferred to the remote location and the tmp USE Statements
window displays. This window displays the exact data that exists on your
remote location.
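
The generated file contains, for each mapped table, a DROP statement followed by a USE TABLE statement. Purely as an illustration of the overall shape (the names here are hypothetical, and the exact clauses depend on your mappings; see the appendix, “Metadata grammar reference,” for the actual syntax), the grammar resembles:

   DROP TABLE CAC.CUSTDCOM;
   USE TABLE CAC.CUSTDCOM DBTYPE DATACOM
     ( ...column definitions generated from the mapped columns... );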

This completes Exercise 8.

Exercise 9: Creating a relational view for CA-Datacom data


After completing Exercise 8, you have the metadata input file you need to create a
relational view.
To create the relational view:
1. Transfer the metadata grammar file to the host system where the metadata
utility is run.
2. Run the metadata utility, using the metadata as input.

The metadata utility then creates the relational view.

For more information on the metadata utility, see the IBM DB2 Information
Integrator Reference for Classic Federation and Classic Event Publishing.

You have completed the Data Mapper CA-Datacom Tutorial. You have now
mapped CA-Datacom nonrelational data to a relational view.
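
For example, after the metadata utility has processed the grammar, the logical table mapped in Exercise 4 can be queried through the data server with ordinary SQL:

   SELECT * FROM CAC.CUSTDCOM;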

Chapter 4. CA-IDMS tutorial
Introduction to CA-IDMS tutorial
The Data Mapper tutorial helps first-time users become more familiar with how
the Data Mapper operates. At the completion of this tutorial, you will be able to
use the Data Mapper to map nonrelational data to a relational view that you can
see using your system’s front-end tool.

How this CA-IDMS tutorial works


This tutorial is a series of steps that create a relational view from nonrelational
data. Each step is followed by a window or menu that shows you how the step is
performed. For a complete description of all the Data Mapper menus, windows,
and fields, see Chapter 2, “User Reference,” on page 3.

In addition to this tutorial, the Data Mapper includes a Help System that describes
how to use the application. To launch the Help, pull down the Help menu from
the Data Mapper menu or press F1.

Getting started mapping CA-IDMS data


To start the Data Mapper from the Windows Start menu, click Programs – IBM
DB2 Information Integrator Classic Tools – Data Mapper.

Mapping CA-IDMS data


Exercises 1 through 9 describe how to use the Data Mapper to map CA-IDMS data
to a relational view.

Exercise 1: Creating a CA-IDMS repository


The first step to mapping your nonrelational data to a relational view is to create a
repository.

A repository stores information (data catalogs, tables, columns, indexes, and
owners) about the legacy data that the Data Mapper is mapping.
To create a repository:
1. Click File, New Repository from the Data Mapper main menu.
2. Enter a File Name and location for your repository in the Create a New
Repository window. A file extension of .mdb must be assigned to all Data
Mapper repository files.



Note: Repository names should have a meaning for your particular site. For
example, you may want to name your repository the same name as the
database you are mapping into the repository.
3. Click Save to create the repository.
After you click the Save button, the sample mdb window displays. This is an
empty repository. You will add data catalogs to the repository in “Exercise 3:
Creating a CA-IDMS data catalog” on page 27.

You have completed Exercise 1.

To add an owner to a repository, continue on to “Exercise 2: Adding owners to a
CA-IDMS repository (optional)” on page 26. To create a data
catalog, skip to “Exercise 3: Creating a CA-IDMS data catalog” on page 27.

Exercise 2: Adding owners to a CA-IDMS repository (optional)
Owners are authorized IDs for tables. When qualifying a table in SQL, the format
is:
owner.tablename

If an owner is not assigned to a table, then the TSO ID that runs the metadata
utility becomes the owner of the table in z/OS.

Note: Be sure you have a repository open before starting this exercise.
To add owners to a repository:
1. To add an owner to a repository, click Window, List Owners, or click the
Owners icon from the Repository window.
A list of Owner names appears.
2. To add a new owner, select Edit, Create a new owner...
The Create Owner window appears.
3. Enter the owner Name and Remarks. Then click OK to add the owner.
The owner is then included in the list of owners for that repository. To view
this list, click List Owners... from the Window menu.
Repeat Steps 1 through 3 to add additional owners.
4. Minimize or close the Owners window.

This completes Exercise 2.

Exercise 3: Creating a CA-IDMS data catalog


A data catalog is a collection of tables for a particular nonrelational database type,
for example, CA-IDMS.
To create a data catalog for your repository:
1. Click Edit, Create a New Data Catalog... from the Data Mapper main menu or
click the Create a New Data Catalog icon from the toolbar.
The Create Data Catalog window appears.
2. Enter the data catalog Name, Type, and any Remarks. To select from a list of
data catalog types, pull down the arrow next to the Types box.
3. Click OK to create the data catalog.
Repeat steps 1 through 3 to add additional data catalogs.
You have completed Exercise 3. To load CA-IDMS schema and subschema for
reference, continue on to “Exercise 4: Loading CA-IDMS schema for reference”
on page 27.

Exercise 4: Loading CA-IDMS schema for reference


This exercise describes how to load CA-IDMS schema and subschema for reference.
CA-IDMS schema and subschema are transferred from the mainframe to the
workstation and given file extensions of .sch and .sub.

The schema is used as a reference for building tables and columns and creating
metadata input.
1. Select the data catalog you created in Exercise 3 by clicking on the number to
the left of the data catalog name. This highlights the selected row.
2. Click File, Load IDMS Schema for Reference.
The Load IDMS Schema File window appears.

Continue on to Step 3 if you are loading a CA-IDMS schema file from your
hard drive. Skip to Step 4 if you are loading a CA-IDMS schema file from a
remote location.
3. Select a schema from the Data Mapper samples folder, or enter the file name
of the schema you want to load, then click OK. The schema is loaded for
reference.

Note: You must use a file extension of .sch when loading CA-IDMS schema
for reference or the Data Mapper will not recognize the file. The
CA-IDMS schema is the CA-IDMS IDD schema report generated at the
host where CA-IDMS is executing and downloaded to your PC.



When you have successfully loaded the schema, a window appears
confirming that the schema was loaded and prompting you to load a
subschema.
Skip to Step 10 to select a subschema to be loaded.
4. Click the Remote button on the Load CA-IDMS Schema File window to load a
schema from the FTP site.
The FTP Connect window displays.

5. Enter information in the following fields:


a. Host Address: IP address or Host name of the remote machine.
b. Port ID: Valid FTP Port ID.
c. User ID: Valid user ID on the remote machine.
d. User Password: Valid user password for the user ID on the remote
machine.
6. Click the Connect button.
In z/OS, the Host panel displays.
7. Enter the following information in the Host panel:
a. The Working Directory field displays the working directory for the current
data sets list. To change the working directory, enter a new high-level
qualifier, enclosed in single quotes (‘ ’). The default specification is the
logon user ID entered on the FTP Connect window.
b. The Datasets listbox contains a directory listing based on the current
working directory specification. Choose from the list of data sets for the
Remote File transfer. Names with an asterisk (*) have member names.
Double-click on the asterisk (*) to select a member list. It will appear in
the Members listbox.
After a data set is selected, it appears in the Remote File field.
8. Select the member name (if applicable) or type a new member name, enclosed
in parentheses, to be included in the Remote File field after the data set name.
An example is shown in the following window.
The Remote File field contains the full data set name ready for FTP. This name
is based on input in the Datasets and Members listboxes or an explicitly
specified qualified data set name.
9. Click the Transfer button.
A window appears prompting you to load a subschema.
10. Select a subschema from the list and click OK to select a subschema to be
loaded.
11. A message appears confirming that the subschema was loaded correctly. Click
OK.

An IDMS schema folder appears on the Data Mapper repository window
indicating that the schema is loaded for reference for this repository.

This completes Exercise 4. To create a table for this data catalog, continue on to
“Exercise 5: Creating a CA-IDMS table” on page 29.

Exercise 5: Creating a CA-IDMS table


This exercise describes how to add a table to a CA-IDMS data catalog.

You can create a logical table for CA-IDMS data that is equivalent to a DB2 UDB
for z/OS table by mapping one or more record types from the nonrelational
database into a single table.
To create a CA-IDMS table:
1. Click Window, List Tables... or click the Table icon from the Data Mapper
toolbar. The Tables window for the data catalog appears. Notice that tables do
not yet exist for this data catalog.
2. Click Create a New Table... from the Edit menu, or click the Create a New
Table icon on the toolbar. The Create Table window appears.

3. Fill in the table information and select an owner from the pulldown list. A list
of records is available by clicking on the arrow to the right of the field.
The Additional Records field allows you to add or delete records from the
table.

To add a record, click the icon. A list of sets and records appears and
you can select the records you want to add by clicking on them.

To delete a record, click the icon. The record is deleted. If more than one
record exists, Data Mapper deletes the last record entered.

To move a record up in the table list, click the icon.

To move a record down in the table list, click the icon.



4. Click OK to add the table to your data catalog. The table is added to the list of
tables for this data catalog.

Repeat steps 1 through 4 to add additional tables to the data catalog.

This completes Exercise 5. To create CA-IDMS columns, continue on to “Exercise 6:
Creating CA-IDMS columns (optional)” on page 30. To import a copybook, skip to
“Exercise 7: Importing a schema Copybook for CA-IDMS tables” on page 31. To
generate metadata input, skip to “Exercise 8: Generating CA-IDMS metadata
grammar” on page 34.

When defining CA-IDMS logical tables, you can either create columns from the
contents of the CA-IDMS schema definition or from COBOL copybook definitions.

Recommendation: Create column definitions from the CA-IDMS schema definition.
This method is more efficient because the columns are created from the CA-IDMS
element definitions contained within the schema.

Exercise 6: Creating CA-IDMS columns (optional)


You can create columns for CA-IDMS data that are equivalent to columns in a DB2
UDB for z/OS table. Adding columns to a table in a data catalog is analogous to
adding columns to a logical table. The column in the data catalog can represent
one or more data items in the corresponding nonrelational database table. A logical
table must contain at least one column definition.
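
For instance, a column definition might tie a hypothetical CUSTOMER record element to an SQL column as follows (illustrative names only):

   Name:                  CUSTOMER_NAME
   Record Reference Name: CUSTOMER
   Element Name:          CUSTOMER-NAME
   SQL Usage Data Type:   CHAR(30)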

This exercise describes how to manually define CA-IDMS columns to a table.

To automatically add columns by using the recommended method of importing a
copybook, see “Exercise 7: Importing a schema Copybook for CA-IDMS tables” on
page 31.
1. From the IDMS Tables for Data Catalog window, click Window, List Columns...
or click the Column icon on the toolbar. The Columns for IDMS Table window
appears. Since you have not created any columns, no columns are listed for this
table. Click Edit, Create a New Column... The Create IDMS Column window
appears.

2. Follow these steps to create a new column:


v Enter the name of the column.
v Click the Record Reference Name from the pulldown menu.

v Enter the Element Name, which should match the field of the CA-IDMS
record you are mapping to this column. This must be a valid COBOL name,
and is a required field.
v Enter the SQL Usage information, using the pulldown list for the Data Type.
v Enter any comments in the Remarks box.

Note: The (n) in the SQL Data Type must be replaced by a number, for
example, CHAR(8). An example mapping follows these steps.
v Click OK to add the column.
The column is now included on the Columns for IDMS Table window.
Repeat these steps to add additional columns to the table.
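
For illustration, suppose the CA-IDMS record you are mapping contains a 20-byte
character element, shown here in COBOL-style notation (the element name is
hypothetical):

   02  EMP-NAME      PIC X(20).

To map this element, you would enter EMP-NAME as the Element Name and select
CHAR(n) as the SQL data type, replacing (n) with 20 to give CHAR(20).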

This completes Exercise 6.

Exercise 7: Importing a schema Copybook for CA-IDMS tables


This exercise describes how to import an external file (copybook) to create
metadata input or use the previously loaded schema as input. Copybooks are
transferred from the mainframe to the workstation and must be given a file
extension of .fd.
1. Go to the IDMS Tables for Data Catalog window by selecting Window, List
Tables... or clicking the Table icon.
2. Select a table by clicking the number to the left of the Table Name.
3. Click File, Import External File... or click the Import an External file icon.
A window appears asking you if you want to import from the schema you
loaded at the beginning of this tutorial.
Continue on to Step 4 if you selected Yes. If you selected No, skip to Step 5
to import a copybook from your hard drive or to Step 6 to import one from a
remote location.
4. Click Yes.
The Import IDMS Record Select window appears.

This window allows you to choose which records to import from a loaded
CA-IDMS schema and the order in which to import them.
If the schema you have open is the correct schema for the table you are
importing into, the records defined for the target table are pre-selected for you
and you can click the Continue button.
To select additional records to add, select the record and then click the right
arrow button to move the record to the import box. To delete records from the
Import box, select the record and click the left arrow. Skip to Step 12.
5. If you are importing a copybook from your hard drive, perform this step. Skip
to Step 6 if you are importing a copybook from a remote location. Select a
copybook to import from the Data Mapper samples folder on the drive where
Data Mapper was installed and click Open.
The Import Copybook window displays.
Skip to Step 12.
6. Click the Remote button on the Import File window to import a copybook
from the FTP site.
The FTP Connect window appears.

7. Enter information in the following fields:


a. Host Address: IP address or Host name of the remote machine.
b. Port ID: Valid FTP Port ID.
c. User ID: Valid user ID on the remote machine.
d. User Password: Valid user password for the user ID on the remote
machine.
8. Click the Connect button.
In z/OS, the Host panel displays.
9. Enter the following information in the Host panel:
a. The Working Directory field displays the working directory for the current
data sets list. To change the working directory, enter a new high-level
qualifier, enclosed in single quotes (‘ ’). The default specification is the
logon user ID entered on the FTP Connect window.

Note: To navigate back to a previous data set list, double-click the .. entry.
b. The Datasets listbox contains a directory listing based on the current
working directory specification. Choose from the list of data sets for the
Remote File transfer. Names with an asterisk (*) have member names.
Double-click on the asterisk (*) to select a member list. It will appear in
the Members listbox.
After a data set is selected, it appears in the Remote File field.
10. Select the member name (if applicable) or type a new member name, enclosed
in parentheses, to be included in the Remote File field after the data set name,
such as: ’CAC.INSTALL.SCACSAMP(CACIDSCH)’ and
’CAC.INSTALL.SCACSAMP(CACIDSUB)’.
The Remote File field contains the full data set name ready for FTP. This name
is based on input in the Datasets and Members listboxes or an explicitly
specified qualified data set name.
11. Click the Transfer button.

The Import Copybook window displays.

12. Set import options. The options are described below.


Import Group Level Data Items: Creates a column for each COBOL data item
that is a group level item. Group level items are items without picture clauses
that contain subordinate data items with higher level numbers.
Import Selected Structure Only: Since most copybooks contain more than one
record or field definition, you can select a particular structure to import from
an existing copybook by clicking on the data item at which to start the import,
then selecting the Import Selected Structure Only check box. When structure
selection is used, the selected data item and all subordinate data items
(following data items with higher level numbers) are imported. The data item
selected can exist at any level in the structure.
Prefix/Suffix Button: You can add or remove a prefix or suffix to column
names when you import columns. Click the Prefix/Suffix... button to set
prefixes and suffixes in the Import Copybooks dialog box. For example, if the
column names in a COBOL copybook are FOFO_FIRST_NAME and
FOFO_LAST_NAME and you remove the prefix FOFO, add the prefix CAC,
and add the suffix _IBM, then the column names change to
CAC_FIRST_NAME_IBM and CAC_LAST_NAME_IBM.
OCCURS Clauses (see the example following these steps):
v Create Record Array: Defines a record array for data items within OCCURS
clauses in the copybook.
v Expand each occurrence: Creates a column for each occurrence of a data item
within the copybook. Data item names within the OCCURS clause are
suffixed with _1, _2, ..._n.
v Map first occurrence only: Creates a column for the first occurrence of a data
item within the OCCURS clause only.
Append to Existing Columns: Adds the copybook columns to the bottom of
the list of existing columns in that table. Not selecting this option deletes any
existing columns and replaces them with the columns you are now importing.
Calculate Starting Offset: Use this option when appending to existing
columns in a table. This allows the starting offset of the first appended
column to be calculated based on the columns already defined in the table.
When selected, the first appended column will be positioned at the first
character position after the last column (based on offset and length already
defined for the table).

Use Offset: When you have an explicit offset to be used for the first column
imported that does not match the field’s offset in the COBOL structure, enter
an offset in this field to override the default calculation based on the COBOL
structure. If you do not want to override the default, the offset for the first
imported column is determined by the COBOL field’s offset in the structure
you are importing.

Note: By default, the offset of the first COBOL data item imported is based on
the data item’s position in all of the structures defined in the import
file. This offset will always be zero unless you are importing a selected
structure from the copybook. In that case, the offset for the first column
imported from the structure will be the COBOL data item’s position
based on all structures that precede it in the import file. If the default
offset is not correct, then the Calculate Starting Offset or Use Offset
options can be used to override the default.
Rec Name: The record name is automatically filled in for you when importing
from a CA-IDMS schema. Verify that the Rec Name is correct.
13. Click Import to import the copybook.
The Columns for IDMS Tables window now includes the newly-imported
columns.
Repeat Steps 1 through 13 to import additional copybooks.
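
As a sketch of how the import options interact, consider a copybook that contains
the following structure (all names hypothetical):

   01  EMP-RECORD.
       05  EMP-NAME      PIC X(20).
       05  EMP-PHONE     PIC X(10)  OCCURS 3 TIMES.

Selecting Import Group Level Data Items also creates a column for the group item
EMP-RECORD. For the OCCURS clause, Create Record Array defines a
three-occurrence record array over EMP-PHONE; Expand each occurrence creates
the columns EMP_PHONE_1, EMP_PHONE_2, and EMP_PHONE_3 (assuming the
hyphens in the COBOL names are converted to underscores for SQL); and Map
first occurrence only creates a single column for the first occurrence. Because
EMP-NAME occupies offsets 0 through 19, the first EMP-PHONE occurrence
begins at offset 20; the Calculate Starting Offset and Use Offset options control
this kind of calculation when the default is not what you want.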

This completes Exercise 7.

Exercise 8: Generating CA-IDMS metadata grammar


This exercise describes the steps required to create metadata input.

Metadata grammar, also known as USE grammar, is generated by the Data
Mapper for all of the tables in a specific data catalog. When metadata grammar has
been created, it must subsequently be transferred from the workstation to the
mainframe. The metadata grammar is supplied as input to the metadata utility that
runs on the mainframe. The metadata utility uses the contents of the metadata
grammar to create logical tables. Client applications use logical tables, which are
nonrelational-to-relational mappings, for SQL access to nonrelational data.
1. From the Data Catalog window, click File, Generate USE Statements... from
the File menu or the Generate USE Statements for a Data Catalog icon from
the toolbar.
The Generate USE Statements window appears.

Continue on to Step 2 if you are generating USE statements on your hard
drive. Skip to Step 5 if you are generating USE statements to send to a remote
location.
2. Give the file name, using use as the file extension, for example, IDMS.use.

Note: If a USE grammar file with the same name already exists, a window
displays asking if you want to replace the old grammar with the new
grammar. Click the Yes button if you want to replace the existing
grammar. Click the No button to return to the Generate USE Statement
window where you can specify another file name. When the grammar
is generated, you will be prompted to view the grammar.
3. Click OK and the USE statement script displays.

Note: Before each table definition in the metadata grammar file, a DROP table
statement is generated. If a duplicate table exists in the metadata
catalogs, the DROP table statement deletes the table and any indexes,
views, and table privileges associated with the table. The
newly-generated USE statement creates the new table.
If necessary, you can edit this file directly in the Notepad window where it appears.

4. Repeat Steps 1 through 3 to generate additional USE statements from your
hard drive.
5. Click the Remote button on the Generate USE Statements window to generate
USE statements to send to a remote location.
The FTP Connect window appears.

6. Enter information in the following fields:

a. Host Address: IP address or Host name of the remote machine.
b. Port ID: Valid FTP Port ID.
c. User ID: Valid user ID on the remote machine.
d. User Password: Valid user password for the user ID on the remote
machine.
7. Click the Connect button.
In z/OS, the Host panel displays.
8. Enter the following information in the Host panel:
a. The Working Directory field displays the working directory for the current
data sets list. To change the working directory, enter a new high-level
qualifier, enclosed in single quotes (‘ ’). The default specification is the
logon user ID entered on the FTP Connect window.

Note: To navigate back to a previous data set list, double-click the .. entry.
b. The Datasets listbox contains a directory listing based on the current
working directory specification. Choose from the list of data sets for the
Remote File transfer. Names with an asterisk (*) have member names.
Double-click on the asterisk (*) to select a member list. It will appear in
the Members listbox.
After a data set is selected, it appears in the Remote File field.
9. Select the member name (if applicable) or type a new member name, enclosed
in parentheses, to be included in the Remote File field after the data set name,
for example: ’USER.GRAMMAR(IDMSUSE)’
The Remote File field contains the full data set name ready for FTP. This name
is based on input in the Datasets and Members listboxes or an explicitly
specified qualified data set name.
10. Click the Transfer button.
The USE Statements window displays, showing the files that are at your
remote location.
11. Select a file from the list, and click OK to FTP the file to your local drive.
12. Return to Step 2 to complete the metadata grammar generation.

This completes Exercise 8. Exercise 9 will describe how to use this metadata
grammar file to create a relational view of your data.

Exercise 9: Creating a relational view of CA-IDMS data


After completing Exercise 8, you have the metadata input file you need to create a
relational view. Follow these steps to create the relational view.
1. Transfer the metadata grammar file to the host system where the metadata
utility runs.
2. Run the metadata utility, using the metadata as input.

For more information on the metadata utility, see the IBM DB2 Information
Integrator Reference for Classic Federation and Classic Event Publishing.

You have completed the Data Mapper tutorial. You have now mapped
nonrelational data to a relational view.

Chapter 5. IMS tutorial
Introduction to the IMS tutorial
This Data Mapper Tutorial helps first-time users become more familiar with how
the application operates. At the completion of this tutorial, you will be able to use
the Data Mapper to map nonrelational data to a relational view that you can see
using your system’s front-end tool.

How this IMS tutorial works


This tutorial includes a series of steps that create a relational view from
nonrelational data. The steps are followed by windows or menus that show you
how the steps are performed. For a complete description of all the Data Mapper
menus, windows, and fields, see Chapter 2, “User Reference,” on page 3.

In addition to this tutorial, the Data Mapper includes an online help system that
describes how to use the application. To launch help, pull down the Help menu or
press F1.

Getting started mapping IMS data


To start the Data Mapper from the Windows Start menu:
1. Click Programs – IBM DB2 Information Integrator Classic Tools – Data
Mapper.

Mapping IMS data


Exercises 1 through 11 describe how to use the Data Mapper to map IMS data to a
relational view on the z/OS platform.

Exercise 1: Creating an IMS repository


The first step to mapping your nonrelational data to a relational view is to create a
repository.

A repository stores information (data catalogs, tables, columns, indexes, and
owners) about the legacy data the Data Mapper is mapping.
To create a repository:
1. From the File menu, choose New Repository....
The Create a New Repository window appears.

2. Enter a file name and location for your repository. Assign an mdb file extension.

Note: Repository names should have a meaning for your particular site. For
example, you may want to name your repository the same name as the
database you are mapping into the repository.
3. Click Save to create the repository.
The new repository you created appears. This is an empty repository. You will
add data catalogs to the repository in “Exercise 3: Creating an IMS data
catalog” on page 39.

You have completed Exercise 1.

To add an owner to a repository, continue on to “Exercise 2: Adding owners to an
IMS repository (optional)” on page 38. To create a data catalog, skip to “Exercise 3:
Creating an IMS data catalog” on page 39.

Exercise 2: Adding owners to an IMS repository (optional)


Owners are authorized IDs for tables. When qualifying a table in SQL, the format
is as follows:
owner.tablename

If an owner is not assigned to a table, then the TSO ID that runs the metadata
utility becomes the owner of the table in z/OS.
To add owners to a repository:
1. If a repository is not open, open one.
2. From the Window menu, choose List Owners....
A Users list appears.
3. From the Edit menu, choose Create a new owner....
4. Enter the owner name (up to 8 characters) and any remarks.
5. Click OK to add the owner.
The owner name is then included in the list of owners for the repository. To
view this list, click List Owners... from the Window menu.
Repeat these steps to add additional owners.
6. Minimize or close the Owners window.

This completes Exercise 2.

Exercise 3: Creating an IMS data catalog


A data catalog is a collection of tables for a particular nonrelational database type,
such as IMS.
To create a data catalog for the newly-created repository:
1. From the Edit menu, choose Create a new Data Catalog....
2. Enter the data catalog name, select its type, and add any remarks.
3. Click OK to create the data catalog.

Repeat these steps to add additional data catalogs.

You have completed Exercise 3. To load IMS DBDs for reference, continue on to
“Exercise 4: Loading DL/I DBDs for reference” on page 39.

Exercise 4: Loading DL/I DBDs for reference


This exercise describes how to load DL/I Data Base Definitions (DBDs) for
reference. DL/I DBDs are transferred from the mainframe to the workstation and
given a file extension of .dbd.

The DBDs are used as reference for building table and column information. DBD
information is also used to provide lists of segment names and field names when
creating and updating the table and column information.
To load DL/I DBDs for reference:
1. Select the data catalog you created in Exercise 3 by clicking on the number to
the left of the data catalog name.
2. From the File menu, choose Load DL/I DBD for Reference....
The Load DBD File window appears.

Continue on to Step 3 if you are loading a DBD from your hard drive. Skip to
Step 5 if you are loading a DBD from a remote location.
3. Select a DBD from the Data Mapper samples folder, or enter the name and
location of the DBD to load.
4. Click OK.

Note: Data Mapper requires a file extension of .dbd for all DBD files. It will
not recognize other file extensions.
The DBD reference is created. An IMS or DL/I DBD file folder icon appears
on the window indicating that the DBD is loaded for reference by subsequent
Data Mapper functions.

Repeat the preceding steps to load a new DBD from your hard drive. Only
one DBD can be loaded for reference at a time; the newly loaded DBD
replaces the current one.

Note: You must load the DBD each time you open the repository. Data
Mapper does not store DBD references between sessions. However, you
may switch DBDs at any time during the mapping process.
This completes loading a DBD from the hard drive.
5. Click Remote on the Load DBD File window to load a DBD from an FTP
location.
The FTP Connect window displays.

6. Enter information in the following fields:


a. Host Address: IP address or Host name of the remote machine.
b. Port ID: Valid FTP Port ID.
c. User ID: Valid user ID on the remote machine.
d. User Password: Valid user password for the user ID on the remote
machine.
7. Click Connect.
In z/OS, the Host panel displays.
8. Enter the following information in the host panel:
a. The Working Directory field displays the working directory for the current
data sets list. To change the working directory, enter a new high-level
qualifier, enclosed in single quotes (‘ ’). The default specification is the
logon user ID entered on the FTP Connect window.

Note: To navigate back to a previous data set list, double-click the .. entry.
b. The Datasets drop-down list box contains a directory listing based on the
current working directory specification. Choose from the list of data sets
for the Remote File transfer. Names with an asterisk (*) have member
names. Double-click on the asterisk (*) to select a member list. It will
appear in the Members listbox.
After a data set is selected, it appears in the Remote File field.
9. Select the member name (if applicable) or type a new member name, enclosed
in parentheses, to be included in the Remote File field after the data set name,
such as ’CAC.INSTALL.SCACSAMP(CACIMPAR)’.
The Remote File field contains the full data set name ready for FTP. This
name is based on input in the Datasets and Members fields or an
explicitly-specified qualified data set name.
10. Click Transfer.

The DBD reference is created. An IMS DBD file folder icon appears on the
window indicating that the DBD is loaded for reference by subsequent Data
Mapper functions.

This completes Exercise 4.

Exercise 5: Creating an IMS table


In this exercise, you will create a table for the data catalog you created in Exercise
3.

You can create a logical table for IMS data that is equivalent to a DB2 UDB for
z/OS table by mapping one or more record types from the nonrelational database
into a single table.
To create an IMS table:
1. If a repository is not open, open one.
2. From the Window menu, choose List Tables....
The IMS Tables for Data Catalog appears.
3. From the Edit menu, choose Create a new Table....
The Create IMS Table window appears.

4. To create an IMS table:


a. Enter the table name in the Name field.
b. Click an owner from the Owner drop-down list box.
c. Confirm that a name exists in the DBD Name field.
The name should have been automatically added to the field, based on the
DBD that was loaded for reference. If you did not load a DBD, you must
manually enter a name. This is a required field.
d. (Optional) Click an Index Root from the Index Root drop-down list box.
The index root name must be a 1 to 8 alphanumeric string.
e. Enter or select a Leaf Segment from the Leaf Seg drop-down list box.
If you loaded a DBD, this drop-down list box contains the segment names
defined in the DBD. This value defines the path in the IMS DB database to
include in the table. You can map information from any segments in this
path to columns in the table you are creating.

Note: The DBD Name is automatically added, based on the DBD that was
loaded for reference in Exercise 4. If you did not load a DBD, you are
required to manually enter a name.
f. (Optional, z/OS only) The PSB Name defines the PSB to be scheduled when
accessing this table. The PSB name must be a 1 to 8 alphanumeric string.
g. (Optional, z/OS only) The JOIN PSB Name defines the PSB to be scheduled
when accessing this table as part of an SQL JOIN with other tables. The
JOIN PSB name must be a 1 to 8 alphanumeric string.
h. (Optional, z/OS only) Enter the PCB Prefix in the PCB Prefix field.
The prefix must be a 1 to 7 character alphanumeric string. The PCB Prefix is
used by the data server to identify the PCB used in all IMS queries for the
table. The prefix value specified is suffixed by the data server with a
character 0 through 9 prior to looking up the PCB name using the DL/I
AIB interface. For example, a prefix of ABC causes the data server to look
up the PCB names ABC0 through ABC9. If no PCB Prefix is specified, the
data server searches the PCB list for a valid PCB by issuing GU calls for the
necessary path to the leaf segment.
i. (Optional) Check the Reference Only check box if the table will be used for
reference only.
Reference tables allow you to build large column lists that can be used to
populate other tables using drag-and-drop between column windows.
Reference tables are not generated into the data catalog’s metadata input
when metadata generation is requested.
j. Enter any remarks in the Remarks field.
5. Click OK to create the table.
The table is now listed on the IMS Tables for Data Catalog window for this
data catalog.

Repeat these steps to add additional tables to a data catalog.

This completes Exercise 5. To create columns, continue on to “Exercise 6: Creating
IMS columns (optional)” on page 42.

Exercise 6: Creating IMS columns (optional)


You can create columns for IMS data that are equivalent to columns in a DB2 UDB
for z/OS table. Adding columns to a table in a data catalog is analogous to adding
columns to a logical table. The column in the data catalog can represent one or
more data items in the corresponding nonrelational table. A logical table must
contain at least one column definition.

This exercise describes how to manually define IMS columns to a table.

To automatically add columns using the recommended method of importing a
copybook, see “Exercise 7: Importing a Copybook for IMS tables” on page 45.
To manually define IMS columns to a table:
1. Open the IMS Tables for Data Catalog window.
2. From the Window menu, choose List Columns....
The Columns for IMS Table window appears. Since you have not created any
columns, no column is listed for this table.
3. From the Edit menu, choose Create a new Column....

The Create IMS Column window appears.

4. Follow these steps to create a new column:


a. Specify a column name.
A column name must be a valid SQL name and cannot exceed 30 characters
in length. This attribute is required.
b. Specify a segment name.
The name of the segment in the DBD from which this column is mapped. If
an IMS DBD is loaded for reference, this field is presented as a combo box
containing the segments defined for the table. The segment name is a 1 to 8
character alphanumeric field and is required.
c. Specify the segment offset.
Specifies the offset into the segment where the column starts. The offset
must be a valid numeric string. This is an optional field.
d. Specify the segment length.
Specifies the physical length of the column in the segment. The length must
be a valid numeric string. This is an optional field.
e. Specify the segment data type by selecting from the drop-down list box.
This is the data type of the column in the IMS Segment. Valid values
include:
v Character,
v Packed Decimal,
v Unsigned Packed Decimal,
v Doubleword,
v Halfword,
v Fullword,
v Variable Length Character, and
v Zoned Decimal.
To remove a data type definition after one has been assigned, use the
Delete or Backspace key in the Datatype field. This attribute is optional.
f. Specify the SQL Data Type.
A valid SQL data type for returning the data to the requesting applications.
Valid SQL data types include:

v CHAR,
v DECIMAL,
v FLOAT,
v GRAPHIC,
v INTEGER,
v LONG VARCHAR,
v LONG VARGRAPHIC,
v SMALLINT,
v VARCHAR, and
v VARGRAPHIC.
Types requiring a precision specification are suffixed with (n). DECIMAL
data types are specified as DECIMAL(p,s) where p is the total number of
digits and s is the number of digits to the right of the implied decimal point.
When selecting an SQL data type for an IMS/DLI column, the length or
scale of the data type is automatically set from the length of the native data
type, if defined.

Note: The n in the SQL Usage Data Types pull-down list box must be
replaced by a number, for example, CHAR(8). An example mapping
follows these steps.
g. Specify the Null is value.
A string that defines a value interpreted as NULL in the target database for
this column. Character or hexadecimal values can be used to specify the
value. This attribute is optional.
h. Specify a conversion exit.
A 1 to 8 character entry name called whenever this column is retrieved. This
exit must be available to the data server at execution time. To disable the
exit in the generated USE statements, do not check the Exit Active check
box. This attribute is optional.
i. Add any remarks.
Remarks are an optional description of the IMS column. Remarks may be up
to 32K characters in length.
5. Click OK to add the column.
The column is created and displays when you view the Columns for IMS Table
window.

Repeat these steps to add additional columns to the table.
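
As a minimal example of this native-to-SQL mapping, suppose the segment
contains a packed decimal salary field declared in a copybook as follows (the field
name is hypothetical):

   05  EMP-SALARY    PIC S9(5)V99 COMP-3.

This field holds seven decimal digits in four bytes of packed decimal storage, so
you would specify a segment length of 4, a segment data type of Packed Decimal,
and an SQL data type of DECIMAL(7,2), where 7 is the total number of digits and
2 is the number of digits to the right of the implied decimal point.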

If you need to update the entry for a column in the table, double-click on the
number to the left of the column name. The Update Column window appears,
allowing you to update the column name, segment name, DL/I segment
information, SQL usage information, and remarks.

You can also copy one or more columns between tables if the two tables are the
same data catalog type. Generally, copying is between reference tables and other
tables.
To copy one or more columns between two tables:
1. Select the source table.
2. From the Window menu, choose Column List....
3. Select the target table.
4. From the Window menu, choose Column List....

5. Position the two column lists side by side.
6. Select one or more columns to copy by clicking in the line number column for
the column to be copied. To select a block of columns, click on the line number
of the first column to be copied and hold down the left mouse button until you
reach the last column you want to copy.
7. Click again on the selected block and drag the columns to the target column
list window. The mouse cursor will change to the Drag icon to indicate that
you are in column drag mode. If the drag cursor does not appear, start the
process again after ensuring that both the source and target column list
windows are visible.
8. Release the mouse button to complete the copy.

To simplify dragging and dropping columns, minimize all open windows except
for the source and target column windows and then use the Tile option from the
Window menu.

Data Mapper automatically enters drag mode when two or more column lists are
visible at a time and a block of columns is selected. If you are editing a column list
and do not want the list to switch to drag mode, close or minimize all column lists
except for the one you are editing.

This completes Exercise 6. Continue on to “Exercise 7: Importing a Copybook for
IMS tables” on page 45.

Exercise 7: Importing a Copybook for IMS tables


This exercise describes how to import a copybook. Copybooks are transferred from
the mainframe to the workstation and must be given a file extension of .fd.
To import a copybook:
1. Close any open Columns and IMS Tables windows.
2. From the Window menu, choose List Tables....
3. Select the table you want to import the copybook into by clicking on the
number to the left of the table name.
4. From the File menu, choose Import External File....
The Import File window appears.

Note: When transferring IMS copybooks to the workstation, be sure to
include the file extension of .fd, such as boxes.fd.
Continue on to Step 5 if you are importing a copybook from your hard drive.
Skip to Step 7 if you are importing a copybook from a remote location.
5. Select a copybook to import from the DM\samples directory.
6. Click OK.

The Import Copybook window appears.

Skip to Step 13.


7. Click Remote to import a copybook from the FTP site.
The FTP Connect window appears.

8. Enter information in the following fields:


a. Host Address: IP address or Host name of the remote machine.
b. Port ID: Valid FTP Port ID.
c. User ID: Valid user ID on the remote machine.
d. User Password: Valid user password for the user ID on the remote
machine.
9. Click the Connect button.
In z/OS, the Host panel appears.
10. Enter the following information in the Host panel:
a. The Working Directory field displays the working directory for the current
data sets list. To change the working directory, enter a new high-level
qualifier, enclosed in single quotes (‘ ’). The default specification is the
logon user ID entered on the FTP Connect window.

Note: To navigate back to a previous data set list, double-click the .. entry.
b. The Datasets drop-down list box contains a directory listing based on the
current working directory specification. Choose from the list of data sets
for the Remote File transfer. Names with an asterisk (*) have member
names. Double-click on the asterisk (*) to select a member list. It will
appear in the Members listbox.
After a data set is selected, it appears in the Remote File field.
11. Select the member name (if applicable) or type a new member name, enclosed
in parentheses, to be included in the Remote File field after the data set name,
for example: ’CAC.INSTALL.SCACSAMP(CACIMROT)’.
The Remote File field contains the full data set name ready for FTP. This name
is based on input in the Datasets and Members list boxes or an explicitly
specified qualified data set name.
12. Click Transfer.
The Import Copybook window appears.

13. Set import options. The options include:


Import Group Level Data Items: Creates a column for each COBOL data item
that is a group level item. Group level items are items without picture clauses
that contain subordinate data items with higher level numbers.
Import Selected Structure Only: Since most copybooks contain more than one
record or field definition, you can select a particular structure to import from
an existing copybook by clicking on the data item at which to start the import,
then selecting the Import Selected Structure Only check box. When structure
selection is used, the selected data item and all subordinate data items
(following data items with higher level numbers) are imported. The data item
selected can exist at any level in the structure.
OCCURS Clauses:
v Create Record Array: Defines a record array for data items within OCCURS
clauses in the copybook.

v Expand each occurrence: Creates a column for each occurrence of a data
item within the copybook. Data item names within the OCCURS clause are
suffixed with _1, _2, ..._n.
v Map first occurrence only: Creates a column for the first occurrence of a
data item within the OCCURS clause only.
Append to Existing Columns: Adds the copybook columns to the bottom of
the list of existing columns in that table. Not selecting this option deletes all
existing columns and replaces them with the columns you are now importing.
Calculate Starting Offset: Use this option when appending to existing columns
in a table. This allows the starting offset of the first appended column to be
calculated based on the columns already defined in the table. When selected,
the first appended column will be positioned at the first character position
after the last column (based on offset and length already defined for the
table).
Use Offset: When you have an explicit offset to be used for the first column
imported that does not match the field’s offset in the COBOL copybook
structure, enter an offset in this field to override the default calculation based
on the COBOL copybook structure. If you do not want to override the default,
the offset for the first imported column is determined by the COBOL
copybook field’s offset in the structure you are importing.

Note: By default, the offset of the first COBOL data item imported is based on
the data item’s position in all of the structures defined in the import
file. This offset will always be zero unless you are importing a selected
structure from the copybook. In that case, the offset for the first column
imported from the structure will be the COBOL data item’s position
based on all structures that precede it in the import file. If the default
offset is not correct, then the Calculate Starting Offset or Use Offset
options can be used to override the default.
Seg Name: This is the segment name and is selectable, using the DBD
previously loaded for reference. The segment name defaults to the leaf
segment name selected when the table is defined.
14. Click Import to import the copybook into your table.
The IMS columns window includes the newly-imported columns.
Repeat the preceding steps to import additional copybooks.
To check the default definition of a Field Name for the Column Name or to
designate a different Field Name for the column, complete Steps 15 and 16.
15. Select a column from the list by double-clicking on the number to the left of
the column name.

The Update IMS Column window appears.

The Column Name, Segment Name, DL/I Segment Information, and Data
Type are already filled in.

Note: Each column’s offset and length is compared to the fields in the DBD. If
a match is found, the DBD field name is assigned to the imported
column.
16. Click Cancel if the default is acceptable. Otherwise, select a different field
entry and then click OK to update the column information.

This completes Exercise 7.

Exercise 8: Creating, updating, or deleting an IMS Index (optional)

This exercise describes how to create, update, and delete IMS indexes.

Note: You must have the IMS tables window open and a table selected before
starting this exercise.

Creating an IMS index


To create an IMS index:
1. From the Window menu, choose List Indexes....
2. From the Edit menu, choose Create a new Index....

The Create IMS Index window appears.

3. Enter information in the following fields:


a. Name: Specifies the name of the index (required).
b. Owner: Specifies the authorization ID to be assigned to the index (optional).
c. Index is Unique: If checked, every key in the index has a unique value.
d. PCB Prefix: A character string used by the server to identify by name the
PCB to be used in all IMS queries for the index (optional).
e. Included Columns: Contains the columns comprising the index and their
physical order in the index. At least one column must be defined in the
included columns list.
f. Remarks: Description of the IMS index (optional).
4. Click OK to add the index.
The index is added and is listed on the Indexes for IMS Table window.

Updating an IMS index


To update an IMS index:
1. Open the IMS Tables for Data Catalog window.
2. From the Window menu, choose List Indexes....
The Indexes for IMS Table window appears.
3. Double-click column 1 of the row containing the index to update.

The Update IMS Index window appears.

4. Enter the information to update and click OK.


The index is updated and is listed on the Indexes for IMS Table window.

Deleting an IMS index


To delete an IMS index:
1. From the Window menu, choose List Indexes....
2. Click in column 1 of the row containing the index to delete.
3. From the Edit menu, choose Delete the selected Index.
The Confirm Delete window appears.
4. Click Yes to delete the index.

After confirmation, the index is deleted from the repository.

Exercise 9: Defining an IMS record array (Optional)


A record array is one or more columns that occur multiple times in a single
database record.
To define a record array:
1. Open the Import Copybook window.
2. Select the Create Record Array option and click Import.
If the imported copybook contains an OCCURS clause, the record array will
automatically be created during import.
3. To create a record array from the Columns window, select the column or
columns to include in the record array.
4. From the Edit menu, select Create a Record Array....

The Create a Record Array window appears.

The fields on the Create a Record Array window are as follows:


v First Column in Array: (Required field) Identifies the start of the record
array in the database record.
v Last Column in Array: (Required field) Identifies the end of the record array
in the database record.
v Offset of Array in Parent: (Required field) Defines the starting offset of the
array based on either the beginning of the record or the beginning of a
parent record array.
v Length of a Single Occurrence: (Required field) Defines the internal record
length of each occurrence of the array.
v Max Number of Occurrences: (Required field) Defines the maximum
number of occurrences that can exist in the database record.
v NULL Occurrence Rule: Defines conditions under which an occurrence in
the array is to be considered null and not returned as a result row in select
clauses.
v NO NULL Occurrences: Returns all occurrences of the array in the result set.
v Count is in Column: Number of valid occurrences in the array is kept in the
column identified by the required column name attribute defined with this
rule.
v NULL is Value: Identifies a comparison value to be used at runtime to
determine if an occurrence of the array is null.
v Repeat Value for Length of Compare: Repeats the comparison value for the
length of the null compare.
v Compare Column: (Optional field) Identifies where in the array to do the
null comparison.

Note: For more information on record arrays, see the Data Mapper online
help. An illustrative example follows.
The Columns for IMS Table window now includes the record array data you
selected.
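
For example, suppose the database record contains a repeating group of phone
entries declared as follows (a hypothetical copybook fragment):

   05  PHONE-ENTRY   OCCURS 3 TIMES.
       10  PHONE-TYPE    PIC X(4).
       10  PHONE-NUM     PIC X(10).

If PHONE-ENTRY begins at offset 40 in the record, the record array would be
defined with PHONE_TYPE as the First Column in Array, PHONE_NUM as the
Last Column in Array, an Offset of Array in Parent of 40, a Length of a Single
Occurrence of 14 (4 + 10 bytes), and a Max Number of Occurrences of 3. A NULL
Occurrence Rule, such as a NULL is Value of blanks with PHONE_TYPE as the
Compare Column, would keep empty occurrences out of the result set.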

This completes Exercise 9. To generate IMS metadata grammar, continue on to
Exercise 10.

Exercise 10: Generating IMS metadata grammar


This exercise describes the steps required to create metadata grammar.

Metadata grammar, also known as USE grammar, is generated by the Data
Mapper for all of the tables in a specific data catalog. When metadata grammar has
been created, it must subsequently be transferred from the workstation to the
mainframe. The metadata grammar is supplied as input to the metadata utility that
runs on the mainframe. The metadata utility uses the contents of the metadata
grammar to create logical tables. Client applications use logical tables, which are
nonrelational-to-relational mappings, for SQL access to nonrelational data.
To create metadata grammar:
1. Open the Data Catalog window and select a data catalog.
2. From the File menu, choose Generate USE Statements....
The Generate USE Statements window appears.

Continue on to Step 3 if you are generating USE statements on your hard
drive. Skip to Step 7 if you are generating USE statements to send to a remote
location.
3. Give the file a name with a use file extension, such as ims.use.
4. Click OK to generate the metadata input.
A window appears asking if you want to view the newly-created script.
5. Click Yes.
The USE statement script displays.

Note: Before each table definition in the metadata grammar file, a DROP table
statement is generated. If a duplicate table exists in the metadata
catalogs, the DROP table statement deletes the table and any indexes,
views, and table privileges associated with the table. The
newly-generated USE statement creates the new table.
If necessary, you can edit this file directly in the Notepad window where it appears.
6. Repeat Steps 2 through 5 to generate additional USE Statements from your
hard drive.
7. Click Remote to generate USE statements to send to a remote location.

The FTP Connect window displays.

8. Enter information in the following fields:


a. Host Address: IP address or Host name of the remote machine.
b. Port ID: Valid FTP Port ID.
c. User ID: Valid user ID on the remote machine.
d. User Password: Valid user password for the user ID on the remote
machine.
9. Click the Connect button.
In z/OS, the Host panel displays.
10. Enter the following information in the host panel:
a. The Working Directory field displays the working directory for the current
data sets list. To change the working directory, enter a new high-level
qualifier, enclosed in single quotes (‘ ’). The default specification is the
logon user ID entered on the FTP Connect window.

Note: To navigate back to a previous data set list, double-click the .. entry.
b. The Datasets drop-down list box contains a directory listing based on the
current working directory specification. Choose from the list of data sets
for the Remote File transfer. Names with an asterisk (*) have member
names. Double-click on the asterisk (*) to select a member list. It will
appear in the Members listbox.
After a data set is selected, it appears in the Remote File field.
11. Select the member name (if applicable) or type a new member name, enclosed
in parentheses, to be included in the Remote File field after the data set name,
for example: ’USER.GRAMMAR(PARTS)’
The Remote File field contains the full data set name ready for FTP. This
name is based on input in the Datasets and Members list boxes or an
explicitly specified qualified data set name.
12. Click Transfer.
The USE Statements window displays, showing the files that are at your
remote location.
13. Select a file from the list, and click OK to FTP the file to your local drive.
14. Return to Step 3 to complete the metadata grammar generation.
Exercise 11 will describe how to use this metadata grammar file to create a
relational view of your data.

This completes Exercise 10.

Exercise 11: Creating a relational view of IMS data
After completing Exercise 10, you have the metadata input file that DB2
Information Integrator Classic Federation for z/OS or DB2 Information Integrator
Classic Event Publisher for IMS needs to create a relational view.
To create the relational view:
1. Transfer the metadata input file to the host system where the database resides.
2. Run the metadata utility, using the metadata as input.
The metadata utility then creates the relational view.

For more information on metadata utilities, see the IBM DB2 Information Integrator
Reference for Classic Federation and Classic Event Publishing.

You have completed the Data Mapper tutorial. You have now mapped
nonrelational data to a relational view.

Chapter 6. Sequential tutorial
Introduction to the Sequential tutorial
The Data Mapper tutorial helps first-time users become more familiar with how
the Data Mapper operates. At the completion of this tutorial, you will be able to
use the Data Mapper to map nonrelational data to a relational view that you can
see using your system’s front-end tool.

How this Sequential tutorial works


This tutorial is a series of steps that create a relational view from nonrelational
data. Each step is followed by a window or menu that shows you how the step is
performed. For a complete description of all the Data Mapper menus, windows,
and fields, see Chapter 2, “User Reference,” on page 3.

In addition to this tutorial, the Data Mapper includes an online help system that
describes how to use it. To access Help, pull down the Help menu or press F1.

Getting started mapping Sequential data


To start the Data Mapper from the Windows Start menu:
1. Click Programs – IBM DB2 Information Integrator Classic Tools – Data
Mapper.

Mapping Sequential data


Exercises 1 through 9 describe how to use the Data Mapper to map Sequential data
to a relational view.

Exercise 1: Creating a Sequential repository


The first step to mapping your nonrelational data to a relational view is to create a
repository.

A repository stores information (data catalogs, tables, columns, and owners) about
the legacy data that Data Mapper is mapping.
To create a repository:
1. From the File menu, choose New Repository.
The Create a new Repository window appears.

2. Enter a file name and location for your repository in the Create a New
Repository window. You must assign an mdb file extension to all Data Mapper
repository files.

Note: Repository names should have a meaning for your particular site. For
example, you may want to name your repository the same name as the
database you are mapping into the repository.
3. Click Save to create the repository.
The new repository you created appears. This is an empty repository. You will
add data catalogs to the repository in “Exercise 3: Creating a Sequential data
catalog” on page 59.

You have completed Exercise 1.

To create a data catalog, skip to “Exercise 3: Creating a Sequential data catalog” on
page 59. To add an owner to a repository, continue on to “Exercise 2: Adding
owners to a Sequential repository (optional)” on page 58.

Exercise 2: Adding owners to a Sequential repository (optional)

Owners are authorized IDs for tables. When qualifying a table in SQL, the format
is as follows:
owner.tablename

If an owner is not assigned to a table, then the z/OS TSO ID that runs the
metadata utility becomes the owner of the table in z/OS.
To add owners to a repository:
1. If a repository is not currently open, open one.
2. From the Window menu, choose List Owners.
If owners exist, a list of Owner Names appears. If no owners are defined for
this repository, the list will be empty.
3. From the Edit menu, choose Create a new owner...
4. Enter the owner name and remarks.
5. Click OK to add the owner.

The owner name is included in the list of owners for that repository. To view
this list, select List Owners... from the Window menu.
Repeat Steps 1 through 5 to add additional owners.
6. Minimize or close the Owners window.

This completes Exercise 2.

Exercise 3: Creating a Sequential data catalog


To create a data catalog for the newly-created repository:
1. If a repository is not open, open one.
2. From the Edit menu, choose Create a new Data Catalog.
3. Enter the data catalog name, select its type, and add any remarks.
4. Click OK to create the data catalog.
The data catalog now appears in your repository.
Repeat Steps 2 through 4 to add additional data catalogs.

You have completed Exercise 3.

To create a table for this data catalog, continue on to “Exercise 4: Creating a
Sequential table” on page 59.

Exercise 4: Creating a Sequential table


You can create a logical table for Sequential data that is equivalent to a DB2 UDB
for z/OS table by mapping one or more record types from the nonrelational
database into a single table.
To add tables to a data catalog:
1. If a repository is not open, open one.
2. To select a data catalog, click on the number to the left of the data catalog
name. This highlights the selected row.
3. From the Window menu, choose List Tables to list tables for the data catalog.
4. From the Edit menu, choose Create a new table...
The Create Sequential Table window appears.

5. To create a Sequential Table:


a. Enter the table name in the Name field.

b. Click an owner from the Owners list box.
c. Enter the data set name or the DD name of the data set for a Sequential
server:
1) For a data set name, select the DS option button.
2) Enter a 1 to 44 character data set name, such as EMPLOYEE.FILE, in the
Dataset Name box.
In z/OS, the DS option uses dynamic allocation services to access the
associated Sequential data set.
3) For a DD name, click the DD option. Enter a 1 to 8 character
alphanumeric DD name in the Dataset Name field.
The DD option requires a DD/DLBL statement with that DD name in
the data server start-up procedure JCL.
4) In the Record Exit Name field, enter the Record Exit name.
5) In the Record Exit Max Lth field, enter the Record Exit maximum
length.
6) (Optional) Check the Reference Only check box if the table you are
creating will be used for reference purposes only.
The reference table is used to build large column lists to populate other
tables. These reference tables are not generated into the data catalog’s
metadata input when metadata generation is requested. This option is
particularly useful when creating tables with hundreds of columns, as
you can drag and drop to copy columns between windows.
7) Enter any remarks in the Remarks field.
6. Click OK to create the table.

The table is now listed on the Sequential Tables for Data Catalog window for this
data catalog.

Repeat Steps 2 through 5 to add additional tables to the data catalog.

You have completed Exercise 4.

To define a column for the table, continue on to “Exercise 5: Creating Sequential
columns (optional)” on page 60. To import a copybook, skip to “Exercise 6:
Importing a Copybook for Sequential tables” on page 62.

Exercise 5: Creating Sequential columns (optional)


You can create columns for Sequential data that are equivalent to columns in a DB2
UDB for z/OS table. Adding columns to a table in a data catalog is analogous to
adding columns to a logical table. The column in the data catalog can represent
one or more data items in the corresponding nonrelational table. A logical table
must contain at least one column definition.

This exercise shows you how to manually add columns to a data catalog. You do
not have to add columns manually for them to appear in the data catalog.
Importing a copybook automatically creates columns. See “Exercise 6: Importing a
Copybook for Sequential tables” on page 62, for more information on the
recommended method of creating columns by importing a copybook.
To manually add a column:
1. Select a table by clicking on the number to the left of the table Name. This
highlights the selected row.

2. From the Window menu, choose List Columns.
The Columns for Sequential Table window for this table appears.
3. From the Edit menu, choose Create a new Column....
The Create Sequential Column window appears.

4. To create the column:


a. Enter a 1–18 character column name in the Name field.
b. Enter the offset of the Sequential field in the Sequential Record Offset
field.
c. Enter the length of the Sequential field in the Sequential Record Length
field.
d. Select the Sequential data type from the Sequential Record Datatype
drop-down list box. When selecting a native data type for a Sequential
column, the SQL data type associated with the selected data type is
automatically set.
e. Select an SQL data type for the column from the SQL Usage Data Type
drop-down list box. You can map zoned decimal data to either a character
or decimal SQL data type. When selecting an SQL data type for a Sequential
column, the length or scale of the data type is automatically set from the
length of the native data type, if defined.

Note: The n in the SQL Data Type CHAR(n) must be replaced by a number,
such as CHAR(8). An example follows these steps.
f. To create a nullable column, enter a value in the Null is field to delineate
null, such as 000.
g. Enter the name of a conversion exit in the SQL Usage Conversion Exit
field.
h. Enter any remarks in the Remarks field.
5. Click OK.
The column is created and displays in the column list when you view the
Columns for Sequential Table window.
6. Close the Columns for Table window.

Repeat Steps 1 through 5 to add additional columns to the table.
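
For example, to map a three-digit zoned decimal department code defined in a
copybook as follows (the field name is hypothetical):

   05  EMP-DEPT      PIC 9(3).

you would enter the field's offset, a Sequential Record Length of 3, and a
Sequential Record Datatype of Zoned Decimal, and then select either CHAR(3) or
DECIMAL(3,0) as the SQL data type, depending on whether applications should
see the value as character or numeric data.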

To update the entry for a column in the table, double-click on the number to the
left of the column name. The Update Sequential Column window appears,
allowing you to update the column name, Sequential record information, SQL
usage information, and remarks.

You can also copy one or more columns between tables if the two tables are the
same data catalog type. Generally, copying is between reference tables and other
tables.
To copy one or more columns between two tables:
1. Select the source table.
2. From the Window menu, choose Column List....
3. Select the target table.
4. From the Window menu, choose Column List....
5. Position the two column list windows side by side.
6. Select one or more columns to copy by clicking in the line number column for
the column to be copied. To select a block of columns, click in the line number
of the first column to be copied and hold down the left mouse button until you
reach the last column you want to copy.
7. Click again on the selected block and drag the columns to the target column
list window. The mouse cursor will change to the Drag icon to indicate that
you are in column drag mode. If the drag cursor does not appear, start the
process again after ensuring that both the source and target column list
windows are visible.
8. Release the mouse button to complete the copy.

To simplify dragging and dropping columns, minimize all open windows except
the source and target column windows and then use the Tile option from the
Window menu.

Note: Data Mapper automatically enters drag mode when two or more column
lists are visible at a time and a block of columns is selected. If you are
editing a column list and do not want the list to switch to drag mode, close
all column lists except the one you are editing.

This completes Exercise 5.

Exercise 6: Importing a Copybook for Sequential tables


This exercise describes how to import a COBOL copybook. Copybooks are
transferred from the mainframe to the workstation and must be given a file
extension of .fd.
To import a COBOL copybook:
1. If a repository is not open, open one.
2. From the Window menu, choose List Tables....

Note: You may have to close the Columns and Sequential Tables windows
first to reactivate these options.
3. Select the table you want to import the copybook into by clicking on the
number to the left of the table name.
4. From the File menu, choose Import External File....

The Import File window appears.

Note: When importing files, be sure to use a file extension of .fd, such as
copybook.fd, or Data Mapper will not recognize the file.
Continue on to Step 5 if you are importing a copybook from your hard drive.
Skip to Step 6 if you are importing a copybook from a remote location.
5. Select a copybook to import from the Data Mapper samples folder and click
OK.
The Import Copybook window appears.

Skip to Step 12.


6. Click the Remote button on the Import File window to import a copybook
from the FTP site.
The FTP Connect window appears.

7. Enter information in the following fields:



a. Host Address: IP address or Host name of the remote machine.
b. Port ID: Valid FTP Port ID.
c. User ID: Valid user ID on the remote machine.
d. User Password: Valid user password for the user ID on the remote
machine.
8. Click the Connect button.
In z/OS, the Host panel appears.
9. Enter the following information in the Host panel:
a. The Working Directory field displays the working directory for the current
data sets list. To change the working directory, enter a new high-level
qualifier, enclosed in single quotes (‘ ’). The default specification is the
logon user ID entered on the FTP Connect window.

Note: To navigate back to a previous data set list, double-click the .. entry.
b. The Datasets drop-down list box contains a directory listing based on the
current working directory specification. Choose from the list of data sets
for the Remote File transfer. Names with an asterisk (*) have member
names. Double-click on the asterisk (*) to select a member list. It will
appear in the Members listbox.
After a data set is selected, it appears in the Remote File field.
10. Select the member name (if applicable) or type a new member name, enclosed
in parentheses, to be included in the Remote File field after the data set name,
such as ’CAC.INSTALL.SCACSAMP(CACEMPFD)’.
The Remote File field contains the full data set name ready for FTP. This
name is based on input in the Datasets and Members fields or an
explicitly-specified qualified data set name.
11. Click the Transfer button.
The Import Copybook window displays.

12. Select the table that you want to import the copybook into and select the
Import Options.

Note: This action was already completed in Step 3 (default selected) unless
you want to make a change using the dropdown list.
The import options include:

Import Group Level Data Items: Creates a column for each COBOL data item
that is a group level item. Group level items are items without picture clauses
that contain subordinate data items with higher level numbers.
Import Selected Structure Only: Since most copybooks contain more than one
record or field definition, you can select a particular structure to import from
an existing copybook by clicking on the data item at which to start the import,
then selecting the Import Selected Structure Only check box. When structure
selection is used, the selected data item and all subordinate data items
(following data items with higher level numbers) are imported. The data item
selected can exist at any level in the structure.
OCCURS Clauses:
v Create Record Array: Defines a record array for data items within OCCURS
clauses in the copybook.
v Expand each occurrence: Creates a column for each occurrence of a data
item within the copybook. Data item names within the OCCURS clause are
suffixed with _1, _2, ..._n.
v Map first occurrence only: Creates a column for the first occurrence of a
data item within the OCCURS clause only.
Append to Existing Columns: Adds the copybook columns to the bottom of
the list of existing columns in that table. Not selecting this option deletes all
existing columns and replaces them with the columns you are now importing.
Calculate Starting Offset: Use this option to append to existing columns in a
table. This allows the starting offset of the first appended column to be
calculated based on the columns already defined in the table. When selected,
the first appended column will be positioned at the first character position
after the last column (based on offset and length already defined for the
table).
Use Offset: When you have an explicit offset to be used for the first column
imported and it does not match the field’s offset in the copybook structure,
enter an offset in this field to override the default calculation based on the
COBOL structure. If you do not override the default, the offset for the first
imported column is determined by the COBOL field’s offset in the structure
you are importing.

Note: By default, the offset of the first COBOL data item imported is based on
the data item’s position in all of the structures defined in the import
file. This offset will always be zero unless you are importing a selected
structure from the copybook. In that case, the offset for the first column
imported from the structure will be the COBOL data item’s position
based on all structures that precede it in the import file. If the default
offset is not correct, then the Calculate Starting Offset or Use Offset
options can be used to override the default.
13. Click Import to import the copybook to your table.
The Columns for Sequential Table window displays with the newly-imported
columns.
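
For reference, the following hypothetical copybook fragment (all names are
invented for illustration) shows the constructs that the import options act on:
EMPLOYEE-RECORD is a group level item with no picture clause, and
MONTHLY-SALES carries an OCCURS clause that can be imported as a record array,
expanded into twelve suffixed columns, or mapped as a single first occurrence.

       01  EMPLOYEE-RECORD.
           05  EMP-ID         PIC X(6).
           05  EMP-NAME       PIC X(20).
           05  MONTHLY-SALES  PIC S9(5)V99 COMP-3 OCCURS 12 TIMES.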

Repeat Steps 2 through 12 to import additional copybooks.

This completes Exercise 6. To define a record array, continue on to Exercise 7.

Exercise 7: Defining a Sequential record array (Optional)
A record array is one or more columns that occur multiple times in a single
database record.
To define a record array:
1. From the Import Copybook window, select the Create Record Array option and
click Import. If the imported copybook contains an OCCURS clause, the record
array will automatically be created during import.
2. To create a record array from the columns window, select the column or
columns to include in the record array.
3. From the Edit menu, choose Create a Record Array....
The Create a Record Array window appears.

The fields on the Create a Record Array window are as follows:


v First Column in Array: (Required field) Identifies the start of the record
array in the database record.
v Last Column in Array: (Required field) Identifies the end of the record array
in the database record.
v Offset of Array in Parent: (Required field) Defines the starting offset of the
array based on either the beginning of the record or the beginning of a
parent record array.
v Length of a Single Occurrence: (Required field) Defines the internal record
length of each occurrence of the array.
v Max Number of Occurrences: (Required field) Defines the maximum
number of occurrences that can exist in the database record.
v NULL Occurrence Rule: Defines conditions under which an occurrence in
the array is to be considered null and not returned as a result row in select
clauses.
v NO NULL Occurrences: Returns all occurrences of the array in the result set.
v Count is in Column: Number of valid occurrences in the array is kept in the
column identified by the required column name attribute defined with this
rule.
v NULL is Value: Identifies a comparison value to be used at runtime to
determine if an occurrence of the array is null.
v Repeat Value for Length of Compare: Repeats the comparison value for the
length of the null compare.

v Compare Column: (Optional field) Identifies where in the array to do the
null comparison.

Note: For more information on record arrays, see the Data Mapper Help.
4. Click OK to create the record array.
The Columns for Sequential Table window appears with the record array data
you created.
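
As a worked example, assume the hypothetical copybook fragment from Exercise 6:
MONTHLY-SALES (PIC S9(5)V99 COMP-3 OCCURS 12 TIMES) follows a 6-byte EMP-ID and
a 20-byte EMP-NAME. Each packed occurrence holds seven digits plus a sign and
therefore occupies 4 bytes, so the window fields would be completed as follows:

   First Column in Array:          MONTHLY_SALES
   Last Column in Array:           MONTHLY_SALES
   Offset of Array in Parent:      26
   Length of a Single Occurrence:  4
   Max Number of Occurrences:      12
   NULL Occurrence Rule:           NO NULL Occurrences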

This completes Exercise 7. To generate Sequential metadata grammar, continue on
to Exercise 8.

Exercise 8: Generating Sequential metadata grammar


This exercise describes the steps required to create metadata grammar. Metadata
grammar, also known as USE grammar, is generated by the Data Mapper for all of
the tables in a specific data catalog. When metadata grammar has been created, it
must subsequently be transferred from the workstation to the mainframe. The
metadata grammar is supplied as input to the metadata utility that runs on the
mainframe. The metadata utility uses the contents of the metadata grammar to
create logical tables. Client applications use logical tables, which are
non-relational-to-relational mappings, for SQL access to nonrelational data.
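
The generated grammar follows the USE TABLE syntax described in the appendix of
this guide. As a rough sketch only (the names and offsets are invented, and the
DS data set source clause is an assumption rather than actual generated
output), a sequential mapping resembles:

USE TABLE SALES.EMPLOYEE DBTYPE SEQUENTIAL
DS ’CAC.SAMPLE.EMPLOYEE’
(
EMP_ID SOURCE DEFINITION
DATAMAP OFFSET 0 LENGTH 6
USE AS CHAR(6),
EMP_NAME SOURCE DEFINITION
DATAMAP OFFSET 6 LENGTH 20
USE AS CHAR(20)
);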
To create metadata grammar:
1. From the Data Catalog window, click a data catalog.
2. From the File menu, choose Generate USE Statements....
The Generate USE Statements window appears.

Continue on to Step 3 if you are generating USE statements on your hard
drive. Skip to Step 5 if you are generating USE statements to send to a remote
location.
3. Give the file a name, using use as the file extension, such as generate.use.
A window appears, asking if you want to view the newly-created script.
4. Click YES to display the USE statement script.

Note: Before each table definition in the metadata grammar file, a DROP table
statement is generated. If a duplicate table exists in the metadata
catalogs, the DROP table statement deletes the table and any indexes,
views, and table privileges associated with the table. The
newly-generated USE statement creates the new table.

If necessary, you can edit this file directly from the Notepad where it appears.
Repeat the previous steps to generate additional USE Statements. Then, skip
to the end of this set of steps.
5. Click Remote on the Generate USE Statements window to generate USE
statements to send to a remote location.
The FTP Connect window appears.

6. Enter information in the following fields:


a. Host Address: IP address or Host name of the remote machine.
b. Port ID: Valid FTP Port ID.
c. User ID: Valid user ID on the remote machine.
d. User Password: Valid user password for the user ID on the remote
machine.
7. Click Connect.
In z/OS, the Host panel displays.
8. Enter the following information in the host panel:
a. The Working Directory field displays the working directory for the current
data sets list. To change the working directory, enter a new high-level
qualifier, enclosed in single quotes (‘ ’). The default specification is the
logon user ID entered on the FTP Connect window.

Note: To navigate back to a previous data set list, double-click the ..
entry.
b. The Datasets listbox contains a directory listing based on the current
working directory specification. Choose from the list of data sets for the
Remote File transfer. Names with an asterisk (*) have member names.
Double-click on the asterisk (*) to select a member list. It will appear in
the Members listbox.
After a data set is selected, it appears in the Remote File field.
9. Select the member name (if applicable) or type a new member name, enclosed
in parentheses, to be included in the Remote File field after the data set name,
for example: ’USER.GRAMMAR(SEQUSE)’
The Remote File field contains the full data set name ready for FTP. This name
is based on input in the Datasets and Members list boxes or an explicitly
specified qualified data set name.
10. Click Transfer.
The file is transferred to the remote location and the tmp USE Statements
window displays. This window displays the exact data that exists on your
remote location.

This completes Exercise 8.

Exercise 9: Creating a relational view for Sequential data


After completing Exercise 8, you have the metadata input file you need to create a
relational view.
To create the relational view:
1. Transfer the metadata grammar file to the host system where the metadata
utility runs.
2. Run the metadata utility, using the metadata as input.

The metadata utility then creates the relational view.

For more information on Metadata Utilities, see the IBM DB2 Information Integrator
Reference for Classic Federation and Classic Event Publishing.

You have completed the Data Mapper Sequential Tutorial. You have now mapped
Sequential nonrelational data to a relational view.

Chapter 7. VSAM tutorial
Introduction to the VSAM tutorial
The Data Mapper tutorial helps first-time users become more familiar with how it
operates. At the completion of this tutorial, you will be able to use the Data
Mapper to map nonrelational data to a relational view that you can see using your
system’s front-end tool.

How this VSAM tutorial works


This tutorial includes a series of steps that create a relational view from
nonrelational data. The steps are followed by a window or menu that shows you
how they are performed.

In addition to this tutorial, the Data Mapper includes an online help system that
describes how to use the application. To access help, pull down the Help menu or
press F1.

Getting started mapping VSAM data


To start the Data Mapper from the Windows Start menu:
1. Click Programs – IBM DB2 Information Integrator Classic Tools – Data
Mapper.

Mapping VSAM data


Exercises 1 through 10 describe how to use the Data Mapper to map VSAM data to
a relational view.

Exercise 1: Creating a VSAM repository


The first step to mapping your nonrelational data to a relational view is to create a
repository.

A repository stores information (data catalogs, tables, columns, indexes, and
owners) about the legacy data that the Data Mapper is mapping.
To create a repository:
1. From the File menu, choose New Repository.
The Create a New Repository window appears.

2. Enter a file name and location for your repository in the Create a New
Repository window. You must assign an .mdb extension to all Data Mapper
repository files.

Note: Repository names should be meaningful for your particular site. For
example, you may want to give your repository the same name as the
database you are mapping into the repository.
3. Click Save to create the repository.
The new repository you created appears. This is an empty repository. You will
add data catalogs to the repository in “Exercise 3: Creating a VSAM data
catalog” on page 73.

You have completed Exercise 1.

To create a data catalog, skip to “Exercise 3: Creating a VSAM data catalog” on
page 73. To add an owner to a repository, continue on to “Exercise 2: Adding
owners to a VSAM repository (optional)” on page 72.

Exercise 2: Adding owners to a VSAM repository (optional)


Owners are authorized IDs for tables. When qualifying a table in SQL, the format
is as follows:
owner.tablename

If an owner is not assigned to a table, then the z/OS TSO ID that runs the
metadata utility becomes the owner of the table in z/OS.
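
For example, if a hypothetical owner PAYROLL is assigned to a table named
EMPLOYEE, a client application would qualify the table as follows:

SELECT * FROM PAYROLL.EMPLOYEE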
To add owners to a repository:
1. If you do not have a repository open, open one.
2. From the Window menu, choose List Owners....
A list of owner names appears.
3. From the Edit menu, choose Create a new owner....
4. Enter the owner name and any remarks.
5. Click OK to add the owner.
The owner name is then included in the list of owners window for that
repository. To view this list, click List Owners... from the Window menu.

Repeat Steps 2 through 5 to add additional owners.
6. Minimize or close the Owners window.

This completes Exercise 2.

Exercise 3: Creating a VSAM data catalog


To create a data catalog for the newly-created repository:
1. From the Edit menu, choose Create a new Data Catalog....
2. Enter the data catalog name, type, and any remarks. Select the data catalog
type from the Type drop-down list box.
3. Click OK to create the data catalog.

Repeat the previous steps to add additional data catalogs.

You have completed Exercise 3.

To create a table for this data catalog, continue on to “Exercise 4: Creating a VSAM
table” on page 73.

Exercise 4: Creating a VSAM table


The following steps describe how to add tables to a data catalog.

You can create a logical table for VSAM data that is equivalent to a DB2 UDB for
z/OS table by mapping one or more record types from the nonrelational database
into a single table.
To add tables to a data catalog:
1. If you don’t have a repository open, open one.
2. Select a data catalog by clicking on the number to the left of the data catalog
name.
3. From the Window menu, choose List Tables....
4. From the Edit menu, choose Create a new table....

The Create VSAM Table window appears.

5. To create a VSAM table:


a. Enter the table name in the Name field.
b. Click an owner from the Owner drop-down list box.
c. Enter the data set name or the DD name of the data set for a VSAM server:
1) For a data set name, click the DS option.
2) Enter a 1 to 44 character data set name, such as EMPLOYEE.FILE, in the
data set Name field.
In z/OS, the DS option uses z/OS dynamic allocation services to access
the associated VSAM data set.
3) For a DD name, click the DD option.
4) Enter a 1 to 8 character alphanumeric DD name in the Dataset Name
field.
The DD option requires a DD/DLBL statement with that DD name in
the data server start-up procedure JCL.
5) For CICS® access, define APPC session parameters for VSAM access
within a CICS address space. This option is only valid with the
DD/CICS data set option where the DD name identifies the CICS file
control table name on the VSAM data set. With CICS access, all VSAM
requests are routed through an APPC connection to a CICS transaction
which performs the necessary VSAM access.
The session parameters are:
Local Applid—required 1 to 8 character Application ID (LUNAME) for
the data server.
CICS Applid—required 1 to 8 character Application ID (LUNAME) for
the APPC session to CICS.
Logmode—required 1 to 8 character LOGMODE value for the APPC
session.
CICS Transaction ID—required 1 to 4 character CICS Transaction ID.

Remote Network Name—optional 1 to 8 character network name for the
APPC session.
d. In the Record Exit Name field, enter the Record Exit name.
e. In the Record Exit Max Lth field, enter the Record Exit maximum length.
f. (Optional) Check the Reference Only check box if the table you are creating
will be used for reference purposes only.
The reference table is used to build large column lists to populate other
tables. These reference tables are not generated into the data catalog’s
metadata input when metadata generation is requested. This option is
particularly useful when creating tables with hundreds of columns, as you
can drag and drop to copy columns between windows.
g. Enter any remarks in the Remarks field.
h. Click OK.

The table is now listed on the VSAM Tables for Data Catalog window for this
data catalog.

Repeat these steps to add additional tables to the data catalog.

You have completed Exercise 4.

To define a column for the table, continue on to “Exercise 5: Creating VSAM
columns (optional)” on page 75. To import an external file, skip to “Exercise 6:
Importing a Copybook for a VSAM table” on page 77.

Exercise 5: Creating VSAM columns (optional)


You can create columns for VSAM data that are equivalent to columns in a DB2
UDB for z/OS table. Adding columns to a table in a data catalog is analogous to
adding columns to a logical table. The column in the data catalog can represent
one or more data items in the corresponding nonrelational table. A logical table
must contain at least one column definition.

This exercise shows you how to manually add columns to a data catalog. You do
not have to add columns manually for them to appear in the data catalog.
Importing a copybook automatically creates columns. See “Exercise 6: Importing a
Copybook for a VSAM table” on page 77 for more information on creating columns
by using the recommended method of importing a copybook.
To manually add columns to a data catalog:
1. Select a table by clicking on the number to the left of the table name.
2. From the Window menu, choose List Columns....
The Columns for VSAM Table window for this table appears.
3. From the Edit menu, choose Create a new Column....

The Create VSAM Column window appears.

4. To create the column:


a. Enter a 1-18 character column name in the Name field.
b. Enter the offset of the VSAM field in the VSAM Record Offset field. This is
a required value.
c. Enter the length of the VSAM field in the VSAM Record Length field.
d. Click the VSAM data type from the VSAM Record Datatype drop-down
list box. When selecting a native data type for a VSAM column, the SQL
data type associated with the selected data type is automatically set.
e. Click an SQL data type for the column from the SQL Usage Data Type
drop-down list box. You can map zoned decimal data to either a character
or decimal SQL data type. When selecting an SQL data type for a VSAM
column, the length or scale of the data type is automatically set from the
length of the native data type, if defined.

Note: The n in the SQL Data Type CHAR(n) must be replaced by a number,
such as CHAR(8).
f. To create a nullable column, enter a value in the SQL Usage Null is field to
delineate null, such as 000.
g. Enter the name of a conversion exit in the SQL Usage Conversion Exit box.
h. Enter any remarks in the Remarks field.
5. Click OK.
The column is created and displays in the column list when you view the
Columns for VSAM Table window.
6. Close the Columns for Table window.

Repeat these steps to add additional columns to the table.

To update the entry for a column in the table, double-click on the number to the
left of the column name. The Update VSAM Column window appears, allowing
you to update the column name, VSAM record information, SQL usage
information, and remarks.

You can also copy one or more columns between tables if the two tables are the
same data catalog type. Generally, copying is between reference tables and other
tables.
To copy one or more columns between two tables:
1. Select the source table.
2. From the Window menu, choose Column List....
3. Select the target table.
4. From the Window menu, choose Column List....
5. Position the two column lists side by side.
6. Select one or more columns to copy by clicking in the line number column for
the column to be copied. To select a block of columns, click in the line number
of the first column to be copied and hold down the left mouse button until you
reach the last column you want to copy.
7. Click again on the selected block and drag the columns to the target column
list window. The mouse cursor will change to the Drag icon to indicate that
you are in column drag mode. If the drag cursor does not appear, start the
process again after ensuring that both the source and target column list
windows are visible.
8. Release the mouse button to complete the copy.

To simplify dragging and dropping columns, minimize all open windows except
the source and target column windows and then use the Tile option from the
Window menu.

Note: Data Mapper automatically enters drag mode when two or more column
lists are visible at a time and a block of columns is selected. If you are
editing a column list and do not want the list to switch to drag mode, close
all column lists except the one you are editing.

This completes Exercise 5.

Now that you have created a repository, added data catalogs, tables, columns, and
owners, you are ready to import a copybook, which is described in Exercise 6.

Exercise 6: Importing a Copybook for a VSAM table


This exercise describes how to import a COBOL copybook. Copybooks are
transferred from the mainframe to the workstation and must be given a file
extension of .fd.
1. From the Window menu, choose List Tables....

Note: You may have to close the Columns and VSAM Tables windows first to
reactivate these options.
2. Select the table you want to import the copybook into by clicking on the
number to the left of the table name.
3. From the File menu, choose Import External File....

The Import File window appears.

Note: When importing files, use the file extension fd, such as copybook.fd, or
the Data Mapper will not recognize the file.
Continue on to Step 4 if you are importing a copybook from your hard drive.
Skip to Step 5 if you are importing a copybook from a remote location.
4. Select a copybook to import from the Data Mapper samples folder and click
OK.
The Import Copybook window appears.

Skip to Step 11.


5. Click Remote on the Import File window to import a copybook from the FTP
site.
The FTP Connect window appears.

6. Enter information in the following fields:

a. Host Address: IP address or Host name of the remote machine.
b. Port ID: Valid FTP Port ID.
c. User ID: Valid user ID on the remote machine.
d. User Password: Valid user password for the user ID on the remote
machine.
7. Click Connect.
In z/OS, the Host panel displays.
8. Enter the following information in the Host panel:
a. The Working Directory field displays the working directory for the current
data sets list. To change the working directory, enter a new high-level
qualifier, enclosed in single quotes (‘ ’). The default specification is the
logon user ID entered on the FTP Connect window.

Note: To navigate back to a previous data set list, double-click the .. entry.
b. The Datasets drop-down list box contains a directory listing based on the
current working directory specification. Choose from the list of data sets
for the Remote File transfer. Names with an asterisk (*) have member
names. Double-click on the asterisk (*) to select a member list. It will
appear in the Members listbox.
After a data set is selected, it appears in the Remote File field.
9. Select the member name (if applicable) or type a new member name, enclosed
in parentheses, to be included in the Remote File field after the data set name,
such as ’CAC.INSTALL.SCACSAMP(CACEMPFD)’.
The Remote File field contains the full data set name ready for FTP. This name
is based on input in the Datasets and Members list boxes or an explicitly
specified qualified data set name.
10. Click Transfer.
The Import Copybook window appears.

11. Select the table that you want to import the copybook to and select the Import
Options.

Note: This action was already completed in Step 2 (default selected) unless
you want to make a change via the dropdown list.
The import options include:

Import Group Level Data Items: Creates a column for each COBOL data item
that is a group level item. Group level items are items without picture clauses
that contain subordinate data items with higher level numbers.
Import Selected Structure Only: Since most copybooks contain more than one
record or field definition, you can select a particular structure to import from
an existing copybook by clicking on the data item at which to start the import,
then selecting the Import Selected Structure Only check box. When structure
selection is used, the selected data item and all subordinate data items
(following data items with higher level numbers) are imported. The data item
selected can exist at any level in the structure.
OCCURS Clauses:
v Create Record Array: Defines a record array for data items within OCCURS
clauses in the copybook.
v Expand each occurrence: Creates a column for each occurrence of a data
item within the copybook. Data item names within the OCCURS clause are
suffixed with _1, _2, ..._n.
v Map first occurrence only: Creates a column for the first occurrence of a
data item within the OCCURS clause only.
Append to Existing Columns: Adds the copybook columns to the bottom of
the list of existing columns in that table. Not selecting this option deletes all
existing columns and replaces them with the columns you are now importing.
Calculate Starting Offset: Use this option to append to existing columns in a
table. This allows the starting offset of the first appended column to be
calculated based on the columns already defined in the table. When selected,
the first appended column will be positioned at the first character position
after the last column (based on offset and length already defined for the table).
Use Offset: When you have an explicit offset to be used for the first column
imported and it does not match the field’s offset in the copybook structure,
enter an offset in this field to override the default calculation based on the
COBOL structure. If you do not override the default, the offset for the first
imported column is determined by the COBOL field’s offset in the structure
you are importing.

Note: By default, the offset of the first COBOL data item imported is based on
the data item’s position in all of the structures defined in the import
file. This offset will always be zero unless you are importing a selected
structure from the copybook. In that case, the offset for the first column
imported from the structure will be the COBOL data item’s position
based on all structures that precede it in the import file. If the default
offset is not correct, then the Calculate Starting Offset or Use Offset
options can be used to override the default.
12. Click Import to import the copybook to your table.
The Columns for VSAM Table window displays with the newly-imported
columns.

Repeat these steps to import additional copybooks.

This completes Exercise 6.

Exercise 7: Creating, updating, and deleting a VSAM index (optional)
This exercise describes how to create, update, and delete VSAM indexes.

Creating a VSAM Index
To create a VSAM index:
1. Open the VSAM tables window and select a table.
2. From the Window menu, choose List Indexes....
3. From the Edit menu, choose Create a new Index....
The Create VSAM Index window appears.

4. Enter information in the following fields:


a. Name: Specifies the name of the index (required).
b. Owner: Specifies the authorization ID to be assigned to the index (optional).
c. Index is Unique: If checked, every key in the index has a unique value.
d. VSAM Alternate Index: Select either DS or DD and enter a 1 to 44 character
VSAM PATH name (optional) or enter a 1 to 8 character DD name.
e. If alternate index information is not defined, the index is assumed to be the
primary index of a VSAM KSDS.
f. Included Columns: Contains the columns comprising the index and their
physical order in the index. At least one column must be defined in the
included columns list.
g. Remarks: Description of the VSAM index (optional).
5. Click OK to add the index.

Updating a VSAM index


To update a VSAM index:
1. Open the VSAM Tables for Data Catalog window.
2. From the Window menu, choose List Indexes....
3. Double-click column 1 of the row containing the index to update.

The Update VSAM Index window appears.

4. Enter the information to update and click OK.


The updated index information is saved to the VSAM table.

Deleting a VSAM index


To delete a VSAM index:
1. Open the VSAM Tables for Data Catalog window.
2. From the Window menu, choose List Indexes....
3. Click in column 1 of the row containing the index to delete.
4. From the Edit menu, choose Delete the selected Index.
5. Click Yes to delete the index when the Confirm Delete window appears.

After confirmation, the index is deleted from the repository.

This completes Exercise 7.

Exercise 8: Defining a VSAM record array (Optional)


A record array is one or more columns that occur multiple times in a single database
record.
To define a record array:
1. From the Import Copybook window, click the Create Record Array option and
click Import. If the imported copybook contains an OCCURS clause, the record
array will automatically be created during import.
2. To create a record array from the columns window, select the column or
columns to include in the record array by highlighting them in the Columns
window.
3. From the Edit menu, click Create a Record Array....

The Create a Record Array window appears.

The fields on the Create a Record Array window are as follows:


v First Column in Array: (Required field) Identifies the start of the record
array in the database record.
v Last Column in Array: (Required field) Identifies the end of the record array
in the database record.
v Offset of Array in Parent: (Required field) Defines the starting offset of the
array based on either the beginning of the record or the beginning of a
parent record array.
v Length of a Single Occurrence: (Required field) Defines the internal record
length of each occurrence of the array.
v Max Number of Occurrences: (Required field) Defines the maximum
number of occurrences that can exist in the database record.
v NULL Occurrence Rule: Defines conditions under which an occurrence in
the array is to be considered null and not returned as a result row in select
clauses.
v NO NULL Occurrences: Returns all occurrences of the array in the result set.
v Count is in Column: Number of valid occurrences in the array is kept in the
column identified by the required column name attribute defined with this
rule.
v NULL is Value: Identifies a comparison value to be used at runtime to
determine if an occurrence of the array is null.
v Repeat Value for Length of Compare: Repeats the comparison value for the
length of the null compare.
v Compare Column: (Optional field) Identifies where in the array to do the
null comparison.

Note: For more information on record arrays, see the Data Mapper Help.
4. Click OK to create the record array.
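
To illustrate the NULL Occurrence Rule, consider a hypothetical array defined
in the copybook as:

       05  PHONE-NUMBER  PIC X(12) OCCURS 5 TIMES.

If only the first two occurrences are populated and the remaining three contain
blanks, specifying NULL is Value with a blank and checking Repeat Value for
Length of Compare causes the blank occurrences to be treated as null, so a
SELECT returns two result rows for the array rather than five.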

This completes Exercise 8. To generate VSAM metadata grammar, continue on to
Exercise 9.

Exercise 9: Generating VSAM metadata grammar


This exercise describes the steps required to create metadata grammar. Metadata
grammar, also known as USE grammar, is generated by the Data Mapper for all of
the tables in a specific data catalog. When metadata grammar has been created, it
must subsequently be transferred from the workstation to the mainframe. The
metadata grammar is supplied as input to the metadata utility that runs on the
mainframe. The metadata utility uses the contents of the metadata grammar to
create logical tables. Client applications use logical tables, which are
non-relational-to-relational mappings, for SQL access to nonrelational data.
To generate VSAM metadata grammar:
1. From the Data Catalog window, select a data catalog.
2. From the File menu, choose Generate USE Statements....
The Generate USE Statements window appears.

Continue on to Step 3 if you are generating USE statements on your hard
drive. Skip to Step 5 if you are generating USE statements to send to a remote
location.
3. Give the file a name with a use file extension, such as generate.use.
A window appears asking if you want to view the newly-created script.
4. Click YES.

Note: Before each table definition in the metadata grammar file, a DROP table
statement is generated. If a duplicate table exists in the metadata
catalogs, the DROP table statement deletes the table and any indexes,
views, and table privileges associated with the table. The
newly-generated USE statement creates the new table.

If necessary, you can edit this file directly from the Notepad where it appears.
Repeat the preceding steps to generate additional USE Statements.
5. Click the Remote button on the Generate USE Statements window to generate
USE statements to send to a remote location.
The FTP Connect window displays.

6. Enter information in the following fields:


a. Host Address: IP address or Host name of the remote machine.
b. Port ID: Valid FTP Port ID.
c. User ID: Valid user ID on the remote machine.
d. User Password: Valid user password for the user ID on the remote
machine.
7. Click Connect.
In z/OS, the Host panel displays.
8. Enter the following information in the host panel:
a. The Working Directory field displays the working directory for the current
data sets list. To change the working directory, enter a new high-level
qualifier, enclosed in single quotes (‘ ’). The default specification is the
logon user ID entered on the FTP Connect window.

Note: To navigate back to a previous data set list, double-click the .. entry.
b. The Datasets drop-down list box contains a directory listing based on the
current working directory specification. Choose from the list of data sets
for the Remote File transfer. Names with an asterisk (*) have member
names. Double-click on the asterisk (*) to select a member list. It will
appear in the Members listbox.
After a data set is selected, it appears in the Remote File field.
9. Select the member name (if applicable) or type a new member name, enclosed
in parentheses, to be included in the Remote File field after the data set name,
such as ’USER.GRAMMAR(VSAMUSE)’.
The Remote File field contains the full data set name ready for FTP. This
name is based on input in the Datasets and Members list boxes or an
explicitly specified qualified data set name.
10. Click Transfer.
The file is transferred to the remote location and the tmp USE Statements
window displays, showing the exact data that exists on your remote location.

This completes Exercise 9.

Exercise 10: Creating a relational view for VSAM data
After completing Exercise 9, you have the metadata input file you need to create a
relational view.
To create the relational view:
1. Transfer the metadata grammar file to the host system where the metadata
utility runs.
2. Run the metadata utility, using the metadata as input.
The metadata utility then creates the relational view.

For more information on Metadata Utilities, see the IBM DB2 Information Integrator
Reference for Classic Federation and Classic Event Publishing.

You have completed the Data Mapper VSAM tutorial. You have now mapped
VSAM nonrelational data to a relational view.

Appendix. Metadata grammar reference
Overview
When you use the Data Mapper, you typically do not need to work directly with
the metadata grammar that it outputs. However, you may occasionally encounter
problems when using the metadata utility to update the metadata catalog. The
information provided in this appendix may help you to resolve some issues with
generated metadata grammar.

USE TABLE statement syntax


The USE TABLE statement consists of two basic components:
v Information that identifies the logical table name and a table source definition
that associates the logical table to a physical database or file. This information is
required.
v Column definitions. The column definition identifies the DB2 UDB for z/OS
attributes associated with a column as well as database/file specific information.
One or more column definitions must be supplied for a logical table.

There is a different format for the table source definition for each source database
type, and differing formats for the column definitions. The column definitions for a
single USE TABLE statement must be separated by commas with a single pair of
parentheses enclosing the entire set of definitions. All strings that contain
embedded blanks or USE TABLE keywords must be enclosed in quotes. Quotes
must be double (“ ”). All statements must include a terminating semi-colon (;).

The USE TABLE statement syntax is shown in the syntax diagram below and is
described in Table 1.

USE TABLE [owner.]table-name DBTYPE=database-type table-source-definitions
    ( column-definitions, ... ) ;

Table 1. USE Table Parameters and Descriptions

USE TABLE
    Keywords that identify the statement. All subsequent parameters describe
    the logical table identified by table-name until the next USE TABLE or
    DROP TABLE keywords are encountered, or EOF on SYSIN is encountered.
owner
    SQL authorization ID of the owner. If owner is not specified, the user ID
    used to run the metadata utility is assigned as the owner. Maximum length
    is 8 characters.
table-name
    Represents the name for the table you are creating. The maximum length is
    18 characters. The combination of owner and table-name must be unique
    within a System Catalog. If owner is specified, the syntax is:
    owner.table-name.
DBTYPE
    Keyword for the clause that identifies the database type.
database-type
    Identifies the source database/file type. Valid values are IMS, VSAM,
    SEQUENTIAL, CA-IDMS, ADABAS, and DATACOM.
table-source-definitions
    Represents the set of parameters that defines the table source for the
    specified database/file type.
length
    The maximum length of the updated record.
column-definitions
    Represents the set of parameters that define the column source for the
    specified table.

Supported data types


The following table contains the supported data types and their descriptions. The
values in the Supported Data Type column are used in the USE AS clause of the
column definitions in USE TABLE statements. The default mappings of source data
into supported data types are described under each individual database type.
Table 2. Supported Data Type Mappings

Each entry lists the source data types, followed by the supported data types
they map to and their descriptions.

Source: Character (VSAM, Sequential); C (IMS, CA-Datacom); A (Alphanumeric -
ADABAS); DISPLAY (CA-IDMS)
    CHAR(n): CHAR data types are fixed-length character strings of length n,
    where 1 ≤ n ≤ 254.
    DECIMAL(p[,s]): DECIMAL data types are zoned decimal strings containing
    numeric data on which arithmetic comparisons or operations are likely to
    be performed. p is the precision decimal, specifying the total number of
    digits; s is the total number of digits to the right of the decimal.
    GRAPHIC(n): Fixed-length DBCS strings of length n, where 1 ≤ n ≤ 127. For
    GRAPHIC data types, n specifies the number of DBCS characters, not the
    amount of physical storage occupied by the field.

Source: Double Float (VSAM, Sequential); F (ADABAS); COMP-2 (CA-IDMS); L
(CA-Datacom)
    FLOAT(n): Double-precision floating point; 64-bit; n is an integer,
    22 ≤ n ≤ 53.
    DOUBLE PRECISION: Floating point, 1 ≤ n ≤ 53. Double-precision value.

Source: Fullword (VSAM, Sequential); I (ADABAS); COMP-4 (CA-IDMS); PIC 9(8);
B (4 bytes; CA-Datacom)
    INTEGER: Fullword signed hexadecimal, 32-bit; no decimal point allowed.
    For ADABAS, this can also specify a three-byte binary field.

Source: Single Float (VSAM, Sequential); F (ADABAS); COMP-1 (CA-IDMS); S
(CA-Datacom)
    FLOAT(n): Single-precision floating point; 32-bit; n is an integer,
    1 ≤ n ≤ 21.
    REAL: Floating point, 1 ≤ n ≤ 21. Single-precision value.

Source: H (VSAM, Sequential); I (ADABAS); COMP-4 (CA-IDMS); PIC 9(4); B (2
bytes; CA-Datacom)
    SMALLINT: Halfword signed hexadecimal; 16-bit; no decimal point allowed.
    An ADABAS field can be 8 bits or 16 bits.

Source: Packed (VSAM, Sequential); P (IMS, ADABAS); COMP-3 (CA-IDMS); D
(CA-Datacom)
    DECIMAL(p[,s]): Packed decimal value, where p is the precision decimal,
    specifying the total number of digits, and s is the total number of digits
    to the right of the decimal.

Source: Unsigned Packed (VSAM, Sequential); P (IMS, ADABAS)
    DECIMAL(up[,s]): Unsigned packed decimal value, where up is the precision
    decimal, specifying the total number of digits, and s is the total number
    of digits to the right of the decimal.

Source: N (Numeric - ADABAS, CA-Datacom)
    CHAR(n): CHAR data types are fixed-length character strings of length n,
    where 1 ≤ n ≤ 29.
    DECIMAL(p[,s]): DECIMAL data types are zoned decimal strings containing
    numeric data on which arithmetic comparisons or operations are likely to
    be performed. p is the precision decimal, specifying the total number of
    digits; s is the total number of digits to the right of the decimal.

Source: V (Variable-length character or graphic field. Character strings
longer than 254 are changed to the LONG VARCHAR data type. Graphic strings
longer than 127 are changed to the LONG VARGRAPHIC data type.)
    VARCHAR(n)*: Variable-length character string, in which n is an integer,
    1 ≤ n ≤ 32704.
    LONG VARCHAR*: Variable-length character string, for which the size is
    calculated. See the IBM DB2 SQL Reference Guide for information on
    calculating LONG VARCHAR lengths.
    GRAPHIC(n)*: A fixed-length, double-byte character set (DBCS) string,
    where 1 ≤ n ≤ 16352. The value of n specifies the number of DBCS
    characters. For example, GRAPHIC(10) specifies a column that occupies 20
    bytes of storage.
    VARGRAPHIC(n)*: Variable-length DBCS string, where 1 ≤ n ≤ 127. The value
    of n specifies the number of DBCS characters. For example, VARGRAPHIC(10)
    specifies a column that occupies 20 bytes of storage.
    LONG VARGRAPHIC*: Variable-length DBCS string, where 1 ≤ n ≤ 16352. The
    value of n specifies the number of DBCS characters. See the IBM DB2 SQL
    Reference Guide for information on calculating LONG VARCHAR lengths.

Source: DATE
    DATE “date-format”: This data type is supported only for Adabas and
    represents the Natural date system variable. The date-format indicates the
    format of the returned date field. Any combination of MM (month), MMM
    (name of month), DD (day of month), DDD (day of year), YY (year), YYYY
    (full year) along with other characters and spaces can be used to
    represent the format of the date.

Source: TIME
    TIME “time-format”: This data type is supported only for Adabas and
    represents the Natural time system variable. The time-format indicates the
    format of the returned time field. Any combination of MM (month), MMM
    (name of month), DD (day of month), DDD (day of year), YY (year), YYYY
    (full year), HH (hour), MI (minute), or SS (seconds) along with other
    characters and spaces can be used to represent the format of the time.

* Optionally, these supported data types can include the USE RECORD LENGTH
clause, which causes the length of the data to be used to create a single
column from the variable data.
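
As a worked example of the packed decimal mappings above, a COBOL field
declared as PIC S9(5)V99 COMP-3 holds seven digits plus a sign, so it occupies
(7 + 1) / 2 = 4 bytes of storage and maps to DECIMAL(7,2), where the precision
p is 7 total digits and the scale s is 2 digits to the right of the decimal
point.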

Zoned Decimal support


Zoned Decimal data types are character data types consisting of only digits and an
optional sign. This data type exists because COBOL supports a data element with a
numeric picture clause having a USAGE type of DISPLAY. This allows the creation
of numeric data, which can be used to perform arithmetic and can also be stored
in a readable format.

The following COBOL formats of Zoned Decimal are supported.


v UNSIGNED Numeric, specified as: PIC 999 USAGE IS DISPLAY.

90 DB2 II Data Mapper Guide for Classic Federation and Classic Event Publishing
v SIGNED Numeric, specified as: PIC S999 USAGE IS DISPLAY. The sign in this
case is kept in the first four bits of the last byte in the field. For example, the
value 123 (x’F1F2F3’) would be stored as x’F1F2C3’ for +123 and x’F1F2D3’
for -123. COBOL also allows the sign to be kept in the first or last byte of the
field, or separate from the field as either a leading or trailing ( + or -) character.

The only external change required to support Zoned Decimal is in the metadata
grammar, which is mapped in the Data Mapper. To define a Zoned Decimal field,
change the SQL data type of the field to DECIMAL(p,s) instead of the default
CHAR(n) and add a DATATYPE C for signed numbers and DATATYPE UC for
unsigned numbers.

For example, a 6-byte Zoned Decimal field is defined to the Data Mapper by
specifying its internal length as 6 and the data type as character. However, instead
of specifying its SQL data type as CHAR(6), it is specified as DECIMAL(6). This
results in the client application seeing the data as SQL decimal and allows
arithmetic operations and comparisons to be performed on the field.

Data Mapper will also transform COBOL Zoned Decimal fields on an import to be
SQL DECIMAL data types if either of the following conditions is true:
v The field is declared as having a sign, for example PIC S999.
v The field has implied decimal positions (PIC 999V9).
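
As a sketch, a 6-byte field declared in COBOL as PIC S9(4)V99 USAGE IS DISPLAY
could be mapped with a column definition along these lines (the column name and
offset are hypothetical, and the exact placement of the DATATYPE keyword may
differ from this sketch):

EMP_SALARY SOURCE DEFINITION
DATAMAP OFFSET 20 LENGTH 6
USE AS DECIMAL(6,2) DATATYPE C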

VARCHAR
This section and the following three sections all deal with VAR type fields. If your
application does not use VAR type fields, you can skip these four sections.

The first two bytes of a field that maps to a column defined as VARCHAR must
contain a binary length indicator (LL). There are two types of length definitions:
v LL represents the length of the field, excluding the two bytes required for LL.
v LL represents the total length of the field, including the two bytes required for
LL.

To extract VARCHAR data from the target database correctly, the USE TABLE
statements must account for the different field length definitions.

If the VARCHAR data in the target database has an LL field that excludes the two
bytes required for LL (definition 1, above), the USE statement must specify the
LENGTH parameter two bytes greater than the specification for the USE AS
VARCHAR parameter.

For example:
USE TABLE CACEMP
DBTYPE DBB CACEMP EMPLOYEE
(
DEPT SOURCE DEFINITION
DATAMAP OFFSET 45 LENGTH 5
USE AS VARCHAR(3)
)

If the data in column DEPT is “CAC” and the USE statement above is used, the
following should be in the target database:
LL Data
0003 CAC

If the VARCHAR data in the target database has an LL field that includes the two
bytes required for LL, the USE statement must specify the LENGTH parameter
equal to the “USE AS VARCHAR” specification. For example:
USE TABLE CACEMP
DBTYPE DBB CACEMP EMPLOYEE
(
DEPT SOURCE DEFINITION
DATAMAP OFFSET 45 LENGTH 5
USE AS VARCHAR(5)
)

If the data in column DEPT is “CAC” and the USE statement is used, the following
should be in the target database:
LL Data
0005 CAC

The record in the target database is translated as follows:


LL Data
0003 CAC

In the following example, LENGTH and USE AS VARCHAR have the same value.
OFFSET 0 LENGTH 30000
USE AS VARCHAR(30000)

In the previous example, the LL field contains the length of the data plus the LL
field, so two bytes are subtracted when returning data to the application.

In the next example, LENGTH and USE AS VARCHAR differ by two bytes.
OFFSET 0 LENGTH 30000
USE AS VARCHAR(29998)

In the previous example, the LL field contains the size of the data only, so its value
is returned to the application as-is.

LONG VARCHAR
If a VARCHAR definition exceeds 254 bytes, it is converted to the LONG VARCHAR
data type. With respect to the LL field, LONG VARCHAR is handled like
VARCHAR.

The following is an example of a typical relational LONG VARCHAR where the LL
field contains the size of the data only, so the value is returned to the application
as-is.
OFFSET 0 LENGTH 30000
USE AS LONG VARCHAR

VARGRAPHIC
The first two bytes of a field that maps to a column defined as VARGRAPHIC must
contain a binary length indicator (LL). There are two types of length definitions:
v LL represents the length of the field in bytes, excluding the two bytes required
for LL.
v LL represents the total length of the field in bytes, including the two bytes
required for LL.

Note: The LL field is converted from a length in bytes to a length in DBCS characters.

To extract VARGRAPHIC data from the target database correctly, the USE TABLE
statements must account for the different field length definitions.

If the VARGRAPHIC data in the target database has an LL field that excludes the
two bytes required for LL (definition 1, above), the USE statement must specify the
LENGTH parameter one DBCS character greater than the specification for the USE
AS VARGRAPHIC parameter.

In the following example, LENGTH and USE AS VARGRAPHIC have the same
value.
OFFSET 0 LENGTH 15000
USE AS VARGRAPHIC(15000)

In the previous example, the LL field contains the length of the data in bytes plus
the LL field, so two bytes are subtracted and the result is divided by two when
returning data to the application.

In the next example, LENGTH and USE AS VARGRAPHIC differ by one graphic
character (two bytes).
OFFSET 0 LENGTH 15000
USE AS VARGRAPHIC(14999)

In the previous example, the LL Field contains the size of the data only in bytes, so
its value is divided by two and returned to the application as-is.

LONG VARGRAPHIC
If a VARGRAPHIC definition exceeds 127 bytes, it is converted to the LONG
VARGRAPHIC data type. With respect to the LL field, LONG VARGRAPHIC is
handled like VARGRAPHIC.

The following is an example of LONG VARGRAPHIC where the LL field contains
the size of the data only in bytes. Its value is divided by 2 and returned to the
application as-is.
OFFSET 0 LENGTH 15000
USE AS LONG VARGRAPHIC

DB2 Information Integrator documentation
This topic provides information about the documentation that is available for DB2
Information Integrator. The tables in this topic provide the official document title,
form number, and location of each PDF book. To order a printed book, you must
know either the official book title or the document form number. Titles, file names,
and the locations of the DB2 Information Integrator release notes and installation
requirements are also provided in this topic.

This topic contains the following sections:


v Accessing DB2 Information Integrator documentation
v Documentation for replication function on z/OS
v Documentation for event publishing function for DB2 Universal Database on
z/OS
v Documentation for event publishing function for IMS and VSAM on z/OS
v Documentation for event publishing and replication function on Linux, UNIX,
and Windows
v Documentation for federated function on z/OS
v Documentation for federated function on Linux, UNIX, and Windows
v Documentation for enterprise search on Linux, UNIX, and Windows
v Release notes and installation requirements

Accessing DB2 Information Integrator documentation


All DB2 Information Integrator books and release notes are available in PDF files
from the DB2 Information Integrator Support Web site at
www.ibm.com/software/data/integration/db2ii/support.html.

To access the latest DB2 Information Integrator product documentation, from the
DB2 Information Integrator Support Web site, click on the Product Information
link, as shown in Figure 1 on page 96.



Figure 1. Accessing the Product Information link from DB2 Information Integrator Support Web site

You can access the latest DB2 Information Integrator documentation, in all
supported languages, from the Product Information link:
v DB2 Information Integrator product documentation in PDF files
v Fix pack product documentation, including release notes
v Instructions for downloading and installing the DB2 Information Center for
Linux, UNIX, and Windows
v Links to the DB2 Information Center online

Scroll though the list to find the product documentation for the version of DB2
Information Integrator that you are using.

The DB2 Information Integrator Support Web site also provides support
documentation, IBM Redbooks, white papers, product downloads, links to user
groups, and news about DB2 Information Integrator.

You can also view and print the DB2 Information Integrator PDF books from the
DB2 PDF Documentation CD.

To view or print the PDF documentation:


1. From the root directory of the DB2 PDF Documentation CD, open the index.htm
file.
2. Click the language that you want to use.
3. Click the link for the document that you want to view.

Documentation about replication function on z/OS


Table 3. DB2 Information Integrator documentation about replication function on z/OS

Each entry gives the document title, form number, and location(s).

v ASNCLP Program Reference for Replication and Event Publishing (N/A): DB2
  Information Integrator Support Web site
v Introduction to Replication and Event Publishing (GC18-7567): DB2 Information
  Integrator Support Web site
v Migrating to SQL Replication (N/A): DB2 Information Integrator Support Web
  site
v Replication and Event Publishing Guide and Reference (SC18-7568): DB2 PDF
  Documentation CD; DB2 Information Integrator Support Web site
v Replication Installation and Customization Guide for z/OS (SC18-9127): DB2
  Information Integrator Support Web site
v SQL Replication Guide and Reference (SC27-1121): DB2 PDF Documentation CD;
  DB2 Information Integrator Support Web site
v Tuning for Replication and Event Publishing Performance (N/A): DB2
  Information Integrator Support Web site
v Tuning for SQL Replication Performance (N/A): DB2 Information Integrator
  Support Web site
v Release Notes for IBM DB2 Information Integrator Standard Edition, Advanced
  Edition, and Replication for z/OS (N/A): in the DB2 Information Center under
  Product Overviews > Information Integration > DB2 Information Integrator
  overview > Problems, workarounds, and documentation updates; the DB2
  Information Integrator Installation launchpad; the DB2 Information Integrator
  Support Web site; the DB2 Information Integrator product CD
DB2 Information Integrator documentation 97


Documentation about event publishing function for DB2 Universal
Database on z/OS

Table 4. DB2 Information Integrator documentation about event publishing function for
DB2 Universal Database on z/OS

v ASNCLP Program Reference for Replication and Event Publishing (N/A):
  DB2 Information Integrator Support Web site
v Introduction to Replication and Event Publishing (GC18-7567):
  DB2 PDF Documentation CD; DB2 Information Integrator Support Web site
v Replication and Event Publishing Guide and Reference (SC18-7568):
  DB2 PDF Documentation CD; DB2 Information Integrator Support Web site
v Tuning for Replication and Event Publishing Performance (N/A):
  DB2 Information Integrator Support Web site
v Release Notes for IBM DB2 Information Integrator Standard Edition, Advanced
  Edition, and Replication for z/OS (N/A):
  in the DB2 Information Center under Product Overviews > Information
  Integration > DB2 Information Integrator overview > Problems, workarounds,
  and documentation updates; the DB2 Information Integrator Installation
  launchpad; the DB2 Information Integrator Support Web site; the DB2
  Information Integrator product CD

Documentation about event publishing function for IMS and VSAM on z/OS

Table 5. DB2 Information Integrator documentation about event publishing function for IMS
and VSAM on z/OS

v Client Guide for Classic Federation and Event Publisher for z/OS (SC18-9160):
  DB2 Information Integrator Support Web site
v Data Mapper Guide for Classic Federation and Event Publisher for z/OS
  (SC18-9163): DB2 Information Integrator Support Web site
v Getting Started with Event Publisher for z/OS (GC18-9186):
  DB2 Information Integrator Support Web site
v Installation Guide for Classic Federation and Event Publisher for z/OS
  (GC18-9301): DB2 Information Integrator Support Web site
v Operations Guide for Event Publisher for z/OS (SC18-9157):
  DB2 Information Integrator Support Web site
v Planning Guide for Event Publisher for z/OS (SC18-9158):
  DB2 Information Integrator Support Web site
v Reference for Classic Federation and Event Publisher for z/OS (SC18-9156):
  DB2 Information Integrator Support Web site
v System Messages for Classic Federation and Event Publisher for z/OS
  (SC18-9162): DB2 Information Integrator Support Web site
v Release Notes for IBM DB2 Information Integrator Event Publisher for IMS for
  z/OS (N/A): DB2 Information Integrator Support Web site
v Release Notes for IBM DB2 Information Integrator Event Publisher for VSAM for
  z/OS (N/A): DB2 Information Integrator Support Web site

Documentation about event publishing and replication function on Linux, UNIX,
and Windows

Table 6. DB2 Information Integrator documentation about event publishing and replication
function on Linux, UNIX, and Windows

v ASNCLP Program Reference for Replication and Event Publishing (N/A):
  DB2 Information Integrator Support Web site
v Installation Guide for Linux, UNIX, and Windows (GC18-7036):
  DB2 PDF Documentation CD; DB2 Information Integrator Support Web site
v Introduction to Replication and Event Publishing (GC18-7567):
  DB2 PDF Documentation CD; DB2 Information Integrator Support Web site
v Migrating to SQL Replication (N/A):
  DB2 Information Integrator Support Web site
v Replication and Event Publishing Guide and Reference (SC18-7568):
  DB2 PDF Documentation CD; DB2 Information Integrator Support Web site
v SQL Replication Guide and Reference (SC27-1121):
  DB2 Information Integrator Support Web site
v Tuning for Replication and Event Publishing Performance (N/A):
  DB2 Information Integrator Support Web site
v Tuning for SQL Replication Performance (N/A):
  DB2 Information Integrator Support Web site
v Release Notes for IBM DB2 Information Integrator Standard Edition, Advanced
  Edition, and Replication for z/OS (N/A):
  in the DB2 Information Center under Product Overviews > Information
  Integration > DB2 Information Integrator overview > Problems, workarounds,
  and documentation updates; the DB2 Information Integrator Installation
  launchpad; the DB2 Information Integrator Support Web site; the DB2
  Information Integrator product CD

Documentation about federated function on z/OS


Table 7. DB2 Information Integrator documentation about federated function on z/OS

v Client Guide for Classic Federation and Event Publisher for z/OS (SC18-9160):
  DB2 Information Integrator Support Web site
v Data Mapper Guide for Classic Federation and Event Publisher for z/OS
  (SC18-9163): DB2 Information Integrator Support Web site
v Getting Started with Classic Federation for z/OS (GC18-9155):
  DB2 Information Integrator Support Web site
v Installation Guide for Classic Federation and Event Publisher for z/OS
  (GC18-9301): DB2 Information Integrator Support Web site
v Reference for Classic Federation and Event Publisher for z/OS (SC18-9156):
  DB2 Information Integrator Support Web site
v System Messages for Classic Federation and Event Publisher for z/OS
  (SC18-9162): DB2 Information Integrator Support Web site
v Transaction Services Guide for Classic Federation for z/OS (SC18-9161):
  DB2 Information Integrator Support Web site
v Release Notes for IBM DB2 Information Integrator Classic Federation for z/OS
  (N/A): DB2 Information Integrator Support Web site

Documentation about federated function on Linux, UNIX, and Windows


Table 8. DB2 Information Integrator documentation about federated function on Linux,
UNIX, and Windows

v Application Developer’s Guide (SC18-7359):
  DB2 PDF Documentation CD; DB2 Information Integrator Support Web site
v C++ API Reference for Developing Wrappers (SC18-9172):
  DB2 PDF Documentation CD; DB2 Information Integrator Support Web site
v Data Source Configuration Guide (N/A):
  DB2 PDF Documentation CD; DB2 Information Integrator Support Web site
v Federated Systems Guide (SC18-7364):
  DB2 PDF Documentation CD; DB2 Information Integrator Support Web site
v Guide to Configuring the Content Connector for VeniceBridge (N/A):
  DB2 Information Integrator Support Web site
v Installation Guide for Linux, UNIX, and Windows (GC18-7036):
  DB2 PDF Documentation CD; DB2 Information Integrator Support Web site
v Java API Reference for Developing Wrappers (SC18-9173):
  DB2 PDF Documentation CD; DB2 Information Integrator Support Web site
v Migration Guide (SC18-7360):
  DB2 PDF Documentation CD; DB2 Information Integrator Support Web site
v Wrapper Developer’s Guide (SC18-9174):
  DB2 PDF Documentation CD; DB2 Information Integrator Support Web site
v Release Notes for IBM DB2 Information Integrator Standard Edition, Advanced
  Edition, and Replication for z/OS (N/A):
  in the DB2 Information Center under Product Overviews > Information
  Integration > DB2 Information Integrator overview > Problems, workarounds,
  and documentation updates; the DB2 Information Integrator Installation
  launchpad; the DB2 Information Integrator Support Web site; the DB2
  Information Integrator product CD


Documentation about enterprise search function on Linux, UNIX, and
Windows
Table 9. DB2 Information Integrator documentation about enterprise search function on
Linux, UNIX, and Windows

v Administering Enterprise Search (SC18-9283):
  DB2 Information Integrator Support Web site
v Installation Guide for Enterprise Search (GC18-9282):
  DB2 Information Integrator Support Web site
v Programming Guide and API Reference for Enterprise Search (SC18-9284):
  DB2 Information Integrator Support Web site
v Release Notes for Enterprise Search (N/A):
  DB2 Information Integrator Support Web site

Release notes and installation requirements


Release notes provide information that is specific to the release and fix pack level
for your product and include the latest corrections to the documentation for each
release.

Installation requirements provide information that is specific to the release of your
product.
Table 10. DB2 Information Integrator Release Notes and Installation Requirements

Each entry lists the document name, its file name in parentheses, and where it is
available.

v Installation Requirements for IBM DB2 Information Integrator Event Publishing
  Edition, Replication Edition, Standard Edition, Advanced Edition, Advanced
  Edition Unlimited, Developer Edition, and Replication for z/OS (Prereqs):
  the DB2 Information Integrator product CD; the DB2 Information Integrator
  Installation Launchpad
v Release Notes for IBM DB2 Information Integrator Standard Edition, Advanced
  Edition, and Replication for z/OS (ReleaseNotes):
  in the DB2 Information Center under Product Overviews > Information
  Integration > DB2 Information Integrator overview > Problems, workarounds,
  and documentation updates; the DB2 Information Integrator Installation
  launchpad; the DB2 Information Integrator Support Web site; the DB2
  Information Integrator product CD
v Release Notes for IBM DB2 Information Integrator Event Publisher for IMS for
  z/OS (N/A): DB2 Information Integrator Support Web site
v Release Notes for IBM DB2 Information Integrator Event Publisher for VSAM for
  z/OS (N/A): DB2 Information Integrator Support Web site
v Release Notes for IBM DB2 Information Integrator Classic Federation for z/OS
  (N/A): DB2 Information Integrator Support Web site
v Release Notes for Enterprise Search (N/A):
  DB2 Information Integrator Support Web site

To view the installation requirements and release notes that are on the product CD:
v On Windows operating systems, enter:
  x:\doc\%L
  where x is the Windows CD drive letter and %L is the locale of the
  documentation that you want to use, for example, en_US.
v On UNIX operating systems, enter:
  /cdrom/doc/%L/
  where cdrom refers to the UNIX mount point of the CD and %L is the locale of
  the documentation that you want to use, for example, en_US.
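
For example, assuming a Windows CD drive letter of d: and the default UNIX
mount point /cdrom (your drive letter and mount point may differ), the paths for
the en_US locale resolve to:

  d:\doc\en_US
  /cdrom/doc/en_US/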

Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in
all countries. Consult your local IBM representative for information on the
products and services currently available in your area. Any reference to an IBM
product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product,
program, or service that does not infringe any IBM intellectual property right may
be used instead. However, it is the user’s responsibility to evaluate and verify the
operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not give you
any license to these patents. You can send license inquiries, in writing, to:
IBM Director of Licensing
IBM Corporation
North Castle Drive
Armonk, NY 10504-1785
U.S.A.

For license inquiries regarding double-byte (DBCS) information, contact the IBM
Intellectual Property Department in your country/region or send inquiries, in
writing, to:
IBM World Trade Asia Corporation
Licensing
2-31 Roppongi 3-chome, Minato-ku
Tokyo 106-0032, Japan

The following paragraph does not apply to the United Kingdom or any other
country/region where such provisions are inconsistent with local law:
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS
PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS
FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or
implied warranties in certain transactions; therefore, this statement may not apply
to you.

This information could include technical inaccuracies or typographical errors.
Changes are periodically made to the information herein; these changes will be
incorporated in new editions of the publication. IBM may make improvements
and/or changes in the product(s) and/or the program(s) described in this
publication at any time without notice.

Any references in this information to non-IBM Web sites are provided for
convenience only and do not in any manner serve as an endorsement of those Web
sites. The materials at those Web sites are not part of the materials for this IBM
product, and use of those Web sites is at your own risk.

IBM may use or distribute any of the information you supply in any way it
believes appropriate without incurring any obligation to you.

Licensees of this program who wish to have information about it for the purpose
of enabling: (i) the exchange of information between independently created
programs and other programs (including this one) and (ii) the mutual use of the
information that has been exchanged, should contact:
IBM Corporation
J46A/G4
555 Bailey Avenue
San Jose, CA 95141-1003
U.S.A.

Such information may be available, subject to appropriate terms and conditions,
including in some cases payment of a fee.

The licensed program described in this document and all licensed material
available for it are provided by IBM under terms of the IBM Customer Agreement,
IBM International Program License Agreement, or any equivalent agreement
between us.

Any performance data contained herein was determined in a controlled
environment. Therefore, the results obtained in other operating environments may
vary significantly. Some measurements may have been made on development-level
systems, and there is no guarantee that these measurements will be the same on
generally available systems. Furthermore, some measurements may have been
estimated through extrapolation. Actual results may vary. Users of this document
should verify the applicable data for their specific environment.

Information concerning non-IBM products was obtained from the suppliers of
those products, their published announcements, or other publicly available sources.
IBM has not tested those products and cannot confirm the accuracy of
performance, compatibility, or any other claims related to non-IBM products.
Questions on the capabilities of non-IBM products should be addressed to the
suppliers of those products.

All statements regarding IBM’s future direction or intent are subject to change or
withdrawal without notice, and represent goals and objectives only.

This information contains examples of data and reports used in daily business
operations. To illustrate them as completely as possible, the examples include the
names of individuals, companies, brands, and products. All of these names are
fictitious, and any similarity to the names and addresses used by an actual
business enterprise is entirely coincidental.

COPYRIGHT LICENSE:

This information contains sample application programs, in source language, which
illustrate programming techniques on various operating platforms. You may copy,
modify, and distribute these sample programs in any form without payment to
IBM for the purposes of developing, using, marketing, or distributing application
programs conforming to the application programming interface for the operating
platform for which the sample programs are written. These examples have not
been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or
imply reliability, serviceability, or function of these programs. You may copy,
modify, and distribute these sample programs in any form without payment to
IBM for the purposes of developing, using, marketing, or distributing application
programs conforming to IBM’s application programming interfaces.

Each copy or any portion of these sample programs or any derivative work must
include a copyright notice as follows:

© (your company name) (year). Portions of this code are derived from IBM Corp.
Sample Programs. © Copyright IBM Corp. _enter the year or years_. All rights
reserved.

Trademarks
The following terms are trademarks of International Business Machines
Corporation in the United States, other countries, or both:

IBM
CICS
DB2
DB2 Universal Database
IMS
z/OS

The following terms are trademarks or registered trademarks of other companies:

Java and all Java-based trademarks and logos are trademarks or registered
trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.

Microsoft, Windows, Windows NT, and the Windows logo are trademarks of
Microsoft Corporation in the United States, other countries, or both.

Intel, Intel Inside (logos), MMX and Pentium are trademarks of Intel Corporation
in the United States, other countries, or both.

UNIX is a registered trademark of The Open Group in the United States and other
countries.

Other company, product or service names may be trademarks or service marks of
others.

Index

C
CA-Datacom 11
  adding owners to repository 12
  creating a Data Catalog 13
  creating a repository 11
  defining
    columns 14
    record arrays 19
    relational views 23
    tables 13
  generating metadata grammar 21
  importing copybooks 16
  starting the Data Mapper 11
CA-IDMS
  adding owners to a repository 26
  creating a Data Catalog 27
  creating a repository 25
  defining columns 30
  defining relational views 36
  defining tables 29
  generating metadata grammar 34
  importing schema copybooks 31
  loading schema for reference 27
CA-IDMS schema
  See DBDs.
COBOL copybooks
  See Copybooks
columns
  copying between tables for CA-Datacom 16
  defining 1
    CA-Datacom 14
    CA-IDMS 30
    IMS 42
    Sequential 60
    VSAM 75
Copybooks
  importing 1
    CA-Datacom 16
    CA-IDMS 31
    IMS 45
    Sequential 62
    VSAM 77

D
data catalog
  creating 1
    IMS 39
Data Catalog
  creating
    CA-Datacom 13
    CA-IDMS 27
    Sequential 59
    VSAM 73
Data Mapper
  description 1
  features 1
  starting 11
Datacom.
  See CA-DATACOM/DB
DBD
  loading 1
DBDs
  loading DL/I 39
DL/I
  loading DBDs 39

F
FTP support
  Data Mapper 2

I
IMS
  adding owners to repository 38
  creating a data catalog 39
  creating a repository 37
  defining
    columns 42
    record array 51
    relational views 55
    tables 41
  generating metadata grammar 52
  importing copybooks 45
  indexes
    defining 49
  mapping data 37
IMS DBD
  See DBDs.
Indexes
  defining 2
    IMS 49
    VSAM 80

L
LONG VARCHAR 92
LONG VARGRAPHIC 93

M
Mainframe, transferring data to workstation 2
mapping data
  IMS 37
Mapping data
  Sequential 57
  VSAM 71
metadata grammar
  generating 1
    CA-Datacom 21
    CA-IDMS 34
    IMS 52
    Sequential 67
    VSAM 83

O
owners
  adding to CA-Datacom repository 12
  adding to CA-IDMS repository 26
  adding to VSAM repository 72

R
Record arrays
  defining
    CA-Datacom 19
    IMS 51
    Sequential 66
    VSAM 82
relational views
  defining
    IMS 55
Relational views
  defining 2
    CA-Datacom 23
    CA-IDMS 36
    Sequential 69
    VSAM 86
repository
  adding owners
    IMS 38
Repository
  adding owners
    Sequential 58
  creating
    CA-Datacom 11
    CA-IDMS 25
    IMS 37
    Sequential 57
    VSAM 71

S
schema
  loading 1
schema copybooks.
  See Copybooks
Sequential
  adding owners to repository 58
  creating
    a Data Catalog 59
    a repository 57
  defining
    a record array 66
    columns 60
    metadata grammar 67
    relational views 69
    tables 59
  importing copybooks 62
  mapping data 57

T
tables
  defining 1
    CA-Datacom 13
    CA-IDMS 29
    IMS 41
    Sequential 59
    VSAM 73

U
USE grammar.
  See metadata grammar
USE statements.
  See metadata grammar
USE TABLE statement 87

V
VARCHAR 91
VARGRAPHIC 92
Views
  creating relational 2
VSAM
  adding owners to repository 72
  creating a data catalog 73
  defining
    a record array 82
    a table 73
    columns 75
    relational views 86
  generating metadata grammar 83
  importing copybooks 77
  indexes
    defining 80
  mapping data 71

W
Workstation, transferring data to mainframe 2

Z
Zoned decimal support 90
Contacting IBM
To contact IBM customer service in the United States or Canada, call
1-800-IBM-SERV (1-800-426-7378).

To learn about available service options, call one of the following numbers:
v In the United States: 1-888-426-4343
v In Canada: 1-800-465-9600

To locate an IBM office in your country or region, see the IBM Directory of
Worldwide Contacts on the Web at www.ibm.com/planetwide.

Product information
Information about DB2 Information Integrator is available by telephone or on the
Web.

If you live in the United States, you can call one of the following numbers:
v To order products or to obtain general information: 1-800-IBM-CALL
(1-800-426-2255)
v To order publications: 1-800-879-2755

On the Web, go to www.ibm.com/software/data/integration/db2ii/support.html.


This site contains the latest information about:
v The technical library
v Ordering books
v Client downloads
v Newsgroups
v Fix packs
v News
v Links to Web resources

Comments on the documentation


Your feedback helps IBM to provide quality information. Please send any
comments that you have about this book or other DB2 Information Integrator
documentation. You can use any of the following methods to provide comments:
v Send your comments using the online readers’ comment form at
www.ibm.com/software/data/rcf.
v Send your comments by e-mail to comments@us.ibm.com. Include the name of
the product, the version number of the product, and the name and part number
of the book (if applicable). If you are commenting on specific text, please include
the location of the text (for example, a title, a table number, or a page number).



Printed in USA

SC18-9163-02