FAQs on Informatica
1) How can you recognize whether or not the newly added rows in the source got inserted into the target?
Ans) 1. Check the target success rows in the Workflow Monitor.
2. Through the SCD Type 2 flag/version columns.
2) What is the difference between Informatica 7.0 and 8.0?
Ans) Comparison of Informatica 7.0 and 8.1.1:
Architecture: Informatica 7.0 is a client-server architecture, whereas 8.x is a service-oriented architecture; PowerCenter 8 is service-oriented for modularity, scalability and flexibility.
Services: The Repository Service and Integration Service (replacements for the Repository Server and Informatica Server) can run on different computers in a network (so-called nodes), even redundantly.
Management: Management is centralized, which means services can be started and stopped on nodes via a central web interface.
Tools: Client tools access the repository via that centralized machine; resources are distributed dynamically.
Portability: Running all services on one machine is still possible, of course.
Unstructured data: 8.x has support for unstructured data, which includes spreadsheets, email, Microsoft Word files, presentations and PDF documents. It provides high availability and seamless failover, eliminating single points of failure.
Performance: Grid processing and pushdown optimization are not in 7.0 but are available in 8.x. To bump up system performance, Informatica added "pushdown optimization", which moves data transformation processing to the native relational database I/O engine whenever it is most appropriate.
Capabilities: From 7.0 migration is difficult, whereas with 8.x migration is possible and easy. Informatica has also added more tightly integrated data profiling, cleansing and matching capabilities.
Web: Informatica has added a new web-based administrative console.
Additional transformations: the ability to write a Custom transformation in C++ or Java; the Midstream SQL transformation was added in 8.1.1 (not in 8.1).
Other differences: user-defined functions with encryption/decryption are not in 7.0 but are possible with 8.x; in 7.0 we cannot change the lookup cache size, but in 8.x we can; dynamic configuration of caches and partitioning; the PowerCenter 8 release has the "Append to Target file" feature.
3) Performance tuning in Informatica?
Ans)
Network connections: The performance of the Informatica Server is related to network connections. Data generally moves across a network at less than 1 MB per second, whereas a local disk moves data five to twenty times faster. Network connections therefore often affect session performance, so minimize them.
Flat files: If your flat files are stored on a machine other than the Informatica Server, move those files to the machine on which the Informatica Server runs.
Relational data sources: Minimize the connections between sources, targets and the Informatica Server to improve session performance. Moving the target database onto the server system may improve session performance.
Staging areas: If you use staging areas, you force the Informatica Server to perform multiple data passes. Removing staging areas may improve session performance.
Distributing load: Distributing the session load across multiple Informatica Servers may improve session performance.
Data movement: Running the Informatica Server in ASCII data movement mode improves session performance, because ASCII mode stores a character value in one byte whereas Unicode mode takes two bytes per character.
If a session joins multiple source tables in one Source Qualifier, optimizing the query may improve performance. Single-table SELECT statements with an ORDER BY or GROUP BY clause may also benefit from optimizations such as adding indexes.
We can improve session performance by configuring the network packet size, which controls how much data crosses the network at one time. To do this, go to the Server Manager and choose Server Configure > Database Connections.
If your target has key constraints and indexes, they slow the loading of data. To improve session performance in this case, drop the constraints and indexes before you run the session and rebuild them after the session completes.
Running parallel sessions by using concurrent batches also reduces loading time, so concurrent batches may increase session performance.
Partitioning the session improves performance by creating multiple connections to sources and targets and loading data in parallel pipelines.
If a session contains an Aggregator transformation, you can use incremental aggregation to improve session performance.
Avoid transformation errors to improve session performance.
If the session contains a Lookup transformation, you can improve performance by enabling the lookup cache.
If your session contains a Filter transformation, place it as close to the sources as possible, or use a filter condition in the Source Qualifier.
Aggregator, Rank and Joiner transformations may decrease session performance because they must group data before processing it. To improve session performance in this case, use the sorted ports option.

4) Differences between the Normalizer transformation and normalization.
Ans)
Normalizer transformation: a transformation mainly used for COBOL sources; it changes rows into columns and columns into rows, and can be used to obtain multiple rows from a single row.
Normalization: a database design technique used to remove redundancy and inconsistency.
5) How do we do unit testing in Informatica? How do we load data in Informatica?
Ans) Unit testing is of two types:
1. Quantitative testing
2. Qualitative testing
Steps:
1. First validate the mapping.
2. Create a session on the mapping and then run the workflow.
Once the session has succeeded, right-click on the session and go to the statistics tab. There you can see how many source rows were applied, how many rows were loaded into the targets and how many rows were rejected. This is called quantitative testing. Once rows are successfully loaded, we go for qualitative testing.
Steps:
1. Take the DATM (the document where all business rules are mapped to the corresponding source columns) and check whether the data is loaded into the target table according to the DATM. If any data is not loaded according to the DATM, go and check the code and rectify it. This is called qualitative testing, and it is what a developer does in unit testing.
6) How do you handle decimal places while importing a flat file into Informatica?
Ans) While importing the flat file definition, just specify the scale for the numeric data type. In the mapping, the flat file source supports only the number data type (no decimal or integer); the Source Qualifier associated with that source will have the decimal data type for that number port:
source -> number data type port -> SQ -> decimal data type. Integer is not supported; hence decimal takes care of it. Alternatively, import the field as a string and then use an expression to convert it, so that truncation of decimal places in the source itself is avoided.
7) What is the difference between a static and a dynamic cache? Please explain with an example.
Ans) Difference between static and dynamic cache:
Static cache:
- Once the data is cached, it will not change; for example, an unconnected lookup uses a static cache.
- While using a static cache in a lookup, we can use all operators (=, <, >, ...) in the condition tab.
- It is a read-only cache.
- Informatica returns a value when the condition is true; if it is false, it returns the default value in a connected lookup and NULL in an unconnected lookup.
- We can configure a static (read-only) cache for any lookup source. By default the Integration Service creates a static cache: it caches the lookup table, and when the lookup condition is true it returns a value.
Dynamic cache:
- The cache is updated to reflect updates in the table (or source) it refers to (example: connected lookup).
- With a dynamic cache we can use only the = operator in the lookup condition.
- It is a read-and-write cache.
- When the condition is false (no match is found), the row can be inserted into the cache.
- Used to cache a target table or flat-file source and 1) insert rows or 2) update existing rows in the cache.
8) What is power center repository?
Ans) Standalone repository. A repository that functions individually, unrelated and
unconnected to other repositories.
Global repository. (PowerCenter only.) The centralized repository in a domain, a
group of
connected repositories. Each domain can contain one global repository. The global
repository can contain common objects to be shared throughout the domain through
global
shortcuts.
Local repository. (PowerCenter only.) A repository within a domain that is not the
global
repository. Each local repository in the domain can connect to the global
repository and use
objects in its shared folders.
The PowerCenter repository is used to store Informatica's metadata. Information such as mapping names, locations, target definitions, source definitions, transformations and flow is stored as metadata in the repository.
9) How the informatica server sorts the string values in Ranktransformation?
Ans) We can run the Informatica server in either UNICODE or ASCII data movement mode.
Unicode mode: in this mode the Informatica server sorts the data using the sort order configured in the session properties.
ASCII mode: in this mode the Informatica server sorts the data in binary order.
10) Is sorter an active or passive transformation?What happens if we uncheck the
distinct option in sorter.Will it be under active or passive transformation?
Ans) Sorter is an active transformation. If you do not check the distinct option, it is considered a passive transformation, because it is the distinct option that eliminates duplicate records (and thereby changes the row count).
11) What is the difference between stop and abort
Ans) Stop: If you want to stop a part of a batch in the session, you must stop the batch; if the batch is part of a nested batch, stop the outermost batch.
Abort: You can issue the abort command; it is similar to the stop command except that it has a 60-second timeout. If the server cannot finish processing and committing data within 60 seconds, it kills the session.
12) Explain about Informatica server Architecture?
Ans) The Informatica server architecture consists of:
1. Node
2. Integration Service
3. Repository Service
13) How can you improve session performance in aggregator transformation?
Ans) There are three ways to improve session performance for an Aggregator transformation:
A) Size the caches correctly:
1) Size of the data cache = bytes required for variable columns + bytes required for output columns.
2) Size of the index cache = size of the ports used in the group-by clause.
B) If you provide sorted data for the group-by ports, aggregation will be faster; so for the ports used in the Aggregator's group-by, sort those ports in an upstream Sorter.
C) Use incremental aggregation if we expect no change in the data that has already been aggregated.
14) How can we use pmcmd command in a workflow or to run a session
Ans) pmcmd> startworkflow -f foldername workflowname
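For example, a full command-line invocation might look like this (a sketch; the service, domain, user, folder and workflow names are placeholders):
pmcmd startworkflow -sv IntSvc_Dev -d Domain_Dev -u Administrator -p admin_pwd -f MyFolder wf_load_sales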
15) With an update strategy, which gives more performance: a target table or a flat file? Why?
Ans) Pros of a flat file: loading, sorting and merging operations are faster, as there is no index concept and the data is in ASCII mode.
Cons: there is no concept of updating existing records in a flat file, and since there are no indexes, lookups are slower.
16) What is the difference between filter and lookup transformation?
Ans) 1) The Filter transformation is an active transformation, whereas the Lookup is a passive transformation.
2) The Filter transformation is used to filter rows based on a condition, whereas the Lookup is used to look up data in a flat file or a relational table, view, or synonym.
17) What are the out put files that the informatica server creates during the
session running?
Ans) Informatica server log: The Informatica server (on UNIX) creates a log for all status and error messages (default name: pm.server.log). It also creates an error log for error messages. These files are created in the Informatica home directory.
Session log file: The Informatica server creates a session log file for each session. It writes information about the session into the log file, such as the initialization process, the creation of SQL commands for reader and writer threads, the errors encountered and the load summary. The amount of detail in the session log file depends on the tracing level that you set.
Session detail file: This file contains load statistics for each target in the mapping. Session details include information such as the table name and the number of rows written or rejected. You can view this file by double-clicking on the session in the Monitor window.
Performance detail file: This file contains session performance details which help you see where performance can be improved. To generate this file, select the performance detail option in the session property sheet.
Reject file: This file contains the rows of data that the writer does not write to targets.
Control file: The Informatica server creates a control file and a target file when you run a session that uses the external loader. The control file contains information about the target flat file, such as the data format and loading instructions for the external loader.
Post-session email: Post-session email allows you to automatically communicate information about a session run to designated recipients. You can create two different messages: one if the session completes successfully, the other if the session fails.
Indicator file: If you use a flat file as a target, you can configure the Informatica server to create an indicator file. For each target row, the indicator file contains a number to indicate whether the row was marked for insert, update, delete or reject.
Output file: If the session writes to a target file, the Informatica server creates the target file based on the file properties entered in the session property sheet.
Cache files: When the Informatica server creates a memory cache, it also creates cache files. The Informatica server creates index and data cache files for the following transformations:
Aggregator transformation
Joiner transformation
Rank transformation
Lookup transformation
18) How many types of dimensions are available in Informatica?
Ans) The types of dimensions available are:
1. Junk dimension
2. Degenerate dimension
3. Conformed dimension
19) Define informatica repository?
Ans) Informatica repository: the Informatica repository is at the center of the Informatica suite. You create a set of metadata tables within the repository database that the Informatica applications and tools access. The Informatica client and server access the repository to save and retrieve metadata.
20) How do you configure a mapping in Informatica?
Ans) You should configure the mapping with the least number of transformations and
expressions to do the most amount of work possible. You should minimize the amount
of
data moved by deleting unnecessary links between transformations.
For transformations that use data cache (such as Aggregator, Joiner, Rank, and
Lookup
transformations), limit connected input/output or output ports. Limiting the number
of
connected input/output or output ports reduces the amount of data the
transformations
store in the data cache.
You can also perform the following tasks to optimize the mapping:
Configure single-pass reading.
Optimize datatype conversions.
Eliminate transformation errors.
Optimize transformations.
Optimize expressions.
21) How can you create or import flat file definition in to the warehouse designer?
Ans) You cannot create or import a flat file definition into the Warehouse Designer directly. Instead, you must analyze the file in the Source Analyzer and then drag it into the Warehouse Designer.
When you drag the flat file source definition into the Warehouse Designer workspace, the Warehouse Designer creates a relational target definition, not a file definition. If you want to load to a file, configure the session to write to a flat file. When the Informatica server runs the session, it creates and loads the flat file.
22) When we create the target as a flat file and the source as Oracle, how can I make the first row of the flat file contain the column names?
Ans) Use a pre-SQL statement, but this is a hardcoded method: if you change the column names or add extra columns to the flat file, you will have to change the statement.
You can also achieve this by changing the setting in the Informatica Repository Manager to display the column headings. The only disadvantage is that it applies to all the files generated by that server.
When importing a flat file into the Target Designer, a flat file import wizard appears. It has an option 'Import field names from first line'; check this option so the Integration Service treats the first row's values as column names.
23) Discuss the advantages & Disadvantages of star & snowflake schema?
Ans) In a STAR schema there is no relation between any two dimension tables, whereas in a SNOWFLAKE schema there can be relations between the dimension tables. In a star schema the dimensions are denormalized, which increases redundancy and table space but means fewer joins, so queries are simpler and generally faster. In a snowflake schema the dimensions are normalized, which saves table space and removes redundancy, but maintenance cost is higher and queries require more joins, which can degrade query performance.
24) Difference between Rank and Dense Rank?
Ans) Rank:
1
2 <-- 2nd position
2 <-- 3rd position
4
5
The same rank is assigned to equal totals/numbers, and the next rank follows the position. Golf games usually rank this way.
Dense Rank:
1
2 <-- 2nd position
2 <-- 3rd position
3
4
The same ranks are assigned to equal totals/numbers/names, and the next rank follows in serial order.
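For comparison, standard SQL analytic functions behave the same way (a sketch; SALES is a hypothetical table with an AMOUNT column):
SELECT AMOUNT,
       RANK()       OVER (ORDER BY AMOUNT DESC) AS RNK,
       DENSE_RANK() OVER (ORDER BY AMOUNT DESC) AS DENSE_RNK
FROM SALES;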
25) Can anyone explain error handling in Informatica with examples so that it will
be easy to explain the same in the interview.
Ans) Go to the session log file; there we will find information regarding the session initialization process, the errors encountered and the load summary. By examining the errors encountered during the session run, we can resolve them.
There is also a file called the bad file, which generally has the format *.bad and contains the records rejected by the Informatica server. There are two kinds of indicators: one for the rows and one for the columns. The row indicator signifies what operation was going to take place (i.e. insert, delete, update, etc.). The column indicators contain information about why the column was rejected (such as violation of a not-null constraint, value error, overflow, etc.). If one rectifies the errors in the data present in the bad file and then reloads the data into the target, the table will contain only valid data.
26) What is the difference between connected and unconnected stored
procedures.
Ans) Unconnected:
The unconnected Stored Procedure transformation is not connected directly to the
flow of
the mapping. It either runs before or after the session, or is called by an
expression in
another transformation in the mapping.
connected:
The flow of data through a mapping in connected mode also passes through the Stored
Procedure transformation. All data entering the transformation through the input
ports
affects the stored procedure. You should use a connected Stored Procedure
transformation
when you need data from an input port sent as an input parameter to the stored
procedure,
or the results of a stored procedure sent as an output parameter to another
transformation.
With an unconnected Stored Procedure transformation, reusability is possible (it can be called from many expressions), whereas a connected one can be used only in a single data flow.
27) Which tasks can be performed on port level(using one specific port)?
Ans) An unconnected Lookup or an Expression transformation can be used at the level of a single port for a row.
28) What are main advantages and purpose of using Normalizer Transformation in
Informatica?
Ans) The Normalizer transformation is used mainly with COBOL sources, where most of the time the data is stored in denormalized format. A Normalizer transformation can also be used to create multiple rows from a single row of data.
29) What is the difference between constraint-based load ordering and a target load plan?
Ans) Constraint-based load ordering
Example:
Table 1 --- Master
Table 2 --- Detail
If the data in Table 2 (the detail) depends on the data in Table 1 (the master), then Table 1 should be loaded first. In such cases, to control the load order of the tables we need conditional loading, which is nothing but constraint-based loading. In Informatica this feature is enabled by just one check box at the session level.
Constraint-based loading specifies the order in which data loads into the targets based on key constraints, whereas a target load plan defines the order in which data is extracted from each source qualifier.
30) What is difference between IIF and DECODE function
Ans) You can use nested IIF statements to test multiple conditions. The following
example
tests for various conditions and returns 0 if sales is zero or negative:
IIF( SALES > 0, IIF( SALES < 50, SALARY1, IIF( SALES < 100, SALARY2, IIF( SALES <
200,
SALARY3, BONUS))), 0 )
You can use DECODE instead of IIF in many cases, and DECODE may improve readability. The following shows how you can use DECODE instead of IIF:
DECODE( TRUE,
SALES > 0 and SALES < 50, SALARY1,
SALES > 49 AND SALES < 100, SALARY2,
SALES > 99 AND SALES < 200, SALARY3,
SALES > 199, BONUS)
The DECODE function can also be used in a SQL statement, whereas an IIF statement cannot be used with SQL.
31) How can you work with a remote database in Informatica? Did you work directly using remote connections?
Ans) To work with a remote data source, you need to connect to it with remote connections. But it is not preferable to work with that remote source directly through remote connections; instead, bring the source onto the local machine where the Informatica server resides. If you work directly with a remote source, session performance will decrease, because only a limited amount of data can pass across the network in a given time.
32) How to import oracle sequence into Informatica.
Ans) Create a procedure and declare the sequence inside the procedure; finally, call the procedure in Informatica with the help of a Stored Procedure transformation.
33) Identifying bottlenecks in various components of Informatica and resolving
them.
Ans) The best way to find bottlenecks is to write to a flat file and see where the bottleneck is.
34) What is parameter file?
Ans) For UNIX shell users, enclose the parameter file name in single quotes:
-paramfile '$PMRootDir/myfile.txt'
For Windows command prompt users, the parameter file name cannot have leading or trailing spaces. If the name includes spaces, enclose the file name in double quotes:
-paramfile "$PMRootDir\my file.txt"
Note: When you write a pmcmd command that includes a parameter file located on another machine, use the backslash (\) with the dollar sign ($). This ensures that the machine where the variable is defined expands the server variable.
pmcmd startworkflow -uv USERNAME -pv PASSWORD -s SALES:6258 -f east -w wSalesAvg -paramfile '\$PMRootDir/myfile.txt'
35) What is the difference between a summary filter and a detail filter?
Ans) A summary filter can be applied on a group of rows that share a common value, whereas a detail filter can be applied on each and every record of the database.
36) What is the difference between normal load and bulk load?
Ans) Normal load: writes information to the database log file, so that if any recovery is needed it will be helpful. When the source is a text file and you are loading data into a table, you should use normal load only, else the session will fail.
Bulk load: does not write information to the database log file, so if any recovery is needed we cannot do anything in such cases.
Comparatively, bulk load is considerably faster than normal load.
37) How do you create a header and footer in the target using Informatica?
Ans) If your focus is on flat files, then the header and footer can be set in the file properties while creating a mapping, or at the session level in the session properties.
38) What are two types of processes that informatica runs the session?
Ans) Load Manager process: starts the session, creates the DTM process, and sends post-session email when the session completes.
DTM process: creates threads to initialize the session, read, write and transform data, and handle pre- and post-session operations.
39) What are the types of groups in a Router transformation?
Ans) A Router transformation has the following types of groups:
Input
Output
Input Group
The Designer copies property information from the input ports of the input group to
create a
set of output ports for each output group.
Output Groups
There are two types of output groups:
User-defined groups
Default group
You cannot modify or delete output ports or their properties.
40) What are the real time problems generally come up while doing/running
mapping/any transformation?can any body explain with example.
Ans) Here are a few real-time examples of problems while running Informatica mappings:
1) Informatica uses ODBC connections to connect to the databases. The database passwords (production) are changed periodically and the change is not applied on the Informatica side. Your mappings will fail in this case, and you will get a database connectivity error.
2) If you are using an Update Strategy transformation in the mapping, in the session properties you have to select Treat Source Rows As: Data Driven. If we do not select this, the Informatica server will ignore updates and only insert rows.
3) If we have mappings loading multiple target tables, we have to provide the Target Load Plan in the sequence we want them to be loaded.
4) "Snapshot too old" is a very common error when using Oracle tables. We get this error when reading very large tables. Ideally we should schedule these loads when the server is not very busy (i.e., when no other loads are running).
5) We might see poor performance while reading from large tables. All the source tables should be indexed and updated regularly.
41) What is the difference between a mapplet and a reusable transformation?
Ans) Mapplet:
--contains Input and Output transformations.
--designed in the Mapplet Designer.
--reusable.
--contains multiple transformations.
--we use it to reuse multiple transformations for a task.
Reusable transformation:
--no Input and Output transformations are needed.
--designed in the Transformation Developer (or promoted from the Mapping Designer).
--reusable.
--it is a single transformation.
--we create it to reuse a single transformation in the future.
42) How many types of facts and what are they?
Ans) There are:
Factless facts: facts without any measures.
Additive facts: fact data that can be added/aggregated.
Non-additive facts: facts that cannot be added.
Semi-additive facts: only some columns of data can be added.
Periodic facts: store one row per transaction that happened over a period of time.
Accumulating facts: store a row for the entire lifetime of an event.
43) What are the load types in Informatica, and what is a delta load?
Ans) There are two types of load: i) normal load, ii) bulk load.
Normal load: the Integration Service writes to the database log, then the data enters the target.
a) Loading performance decreases, but session recovery is possible.
b) Rollback and commit are possible.
Bulk load: the Integration Service bypasses the database log, without writing to it, and loads directly into the target.
a) Performance increases, but session recovery is not possible.
b) Rollback and commit are also not possible.
In bulk loading we need to consider the following:
1) Do not create primary and foreign keys at the database level (they may still exist in the target definition).
2) Drop indexes before loading into the target and recreate them after loading.
3) Disable/enable the parallel mode option.
A delta load is an incremental load: it loads only the records that are new or changed since the previous load, instead of the full data set.

44) What are the session parameters?


Ans) Session parameters, like mapping parameters, represent values you might want to change between sessions, such as database connections or source files.
The Server Manager also allows you to create user-defined session parameters. The following are user-defined session parameters:
Database connections
Source file name: use this parameter when you want to change the name or location of a session's source file between session runs.
Target file name: use this parameter when you want to change the name or location of a session's target file between session runs.
Reject file name: use this parameter when you want to change the name or location of a session's reject file between session runs.
By using session parameters we can reuse the same session as many times as we want. The main purpose of a session parameter is to represent the connection path to a database system, so we can reuse the session against different databases.
If you want to use a session parameter, follow this procedure:
1. Double-click the session and select the Mapping tab; in the Mapping tab select the target connection; in the target connection's writer properties, click the radio button "Use Connection Variable" and enter the connection variable, e.g. $DBConnectionTarget.
2. After that, create a parameter file (.txt or .prm) using the following syntax:
[FolderName.SessionName]
$DBConnectionTarget=<database connection name>
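For example, such a parameter file might look like this (a minimal sketch; the folder, session and connection names are hypothetical):
[Sales.s_m_load_customers]
$DBConnectionTarget=ORA_DW_PROD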
45) What are the methods for creating reusable transforamtions?
Ans) Two methods:
1. Design it in the Transformation Developer.
2. Promote a standard transformation from the Mapping Designer: after you add a transformation to a mapping, you can promote it to the status of a reusable transformation. Once you promote a standard transformation to reusable status, however, you cannot demote it back to a standard transformation.
If you change the properties of a reusable transformation in a mapping, you can revert to the original reusable transformation properties by clicking the Revert button.
46) What does the expression n filter transformations do in Informatica Slowly
growing target wizard?
Ans) The Filter transformation filters out the rows that are not flagged and passes the flagged rows to the Update Strategy transformation.
EXP is used to perform record-level operations and is a passive transformation, e.g. op_col1 = ip_col1 * 10 + ip_col2: for every record, the same operation is performed on the values of the two input fields ip_col1 and ip_col2, and the result passes through the output field op_col1.
FIL is used to filter records based on a condition (the way we write a condition in a WHERE clause, we can simply put the condition in the FIL transformation). Records not matching the condition are DROPPED (not rejected) from the mapping flow, and there is no way to capture the dropped rows (unlike rejected rows in an UPD, which can be captured in the reject file if the "Forward Rejected Rows" option is not ticked); hence FIL is an active transformation.
FIL - Filter transformation
EXP - Expression transformation
UPD - Update Strategy transformation

47) Where to store informatica rejected data? How to extract the informatica
rejected data ?
Ans) The rejected rows (for example, due to a unique key constraint violation) are pushed by the session into $PMBadFileDir (the default relative path is <INFA_HOME>/server/infa_shared/BadFiles), which is configured at the Integration Service level. Every target has a property called "Reject filename" which gives the file in which the rejected rows are stored.
48) How do we use an unconnected lookup, i.e., from where is the input taken and where is the output linked? What condition is to be given?
Ans) The unconnected lookup is used just like a function call. In an expression output/variable port, or any place where an expression is accepted (such as the condition in an Update Strategy), call the unconnected lookup like :LKP.lkp_abc(input_port), where lkp_abc is the name of the unconnected Lookup transformation. Pass the input value just as we pass parameters to functions, and it returns the output (the lookup's return port value) after looking it up.
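For example (a sketch; lkp_dept_name is a hypothetical unconnected lookup whose return port holds the department name, and DEPT_ID and DEPT_NAME are ports in the calling transformation):
IIF( ISNULL(DEPT_NAME), :LKP.lkp_dept_name(DEPT_ID), DEPT_NAME )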
49) What is the Rankindex in Ranktransformation?
Ans) The Designer automatically creates a RANKINDEX port for each Rank transformation. The Informatica server uses the rank index port to store the ranking position for each record in a group. For example, if you create a Rank transformation that ranks the top 5 salespersons for each quarter, the rank index numbers the salespeople from 1 to 5.
50) What is the difference between partitioning of relational targets and partitioning of file targets?
Ans) Partitioning can be done on both relational and flat-file targets.
Informatica supports the following partition types:
1. Database partitioning
2. Round-robin
3. Pass-through
4. Hash-key partitioning
5. Key-range partitioning
All of these are applicable to relational targets; for flat files, only database partitioning is not applicable.
Informatica supports N-way partitioning. You can just specify the name of the target file and create the partitions; the rest is taken care of by the Informatica session.
51) Why did you use an Update Strategy in your application?
Ans) The Update Strategy is used to drive the data to be inserted, updated or deleted depending on some condition. You can do this at the session level too, but there you cannot define any condition. For example, if you want to do both an update and an insert in one mapping, you create two flows and make one insert and one update, depending on some condition. Refer to "Update Strategy" in the Transformation Guide for more information.
52) What is IQD file?
Ans) An IQD file is an Impromptu Query Definition. This file is mainly used in the Cognos Impromptu tool: after creating an IMR (report), we save the IMR as an IQD file, which is used while creating a cube in PowerPlay Transformer (as the data source type we select Impromptu Query Definition).
53) What are the mappings that we use for slowly changing dimension tables?
Ans) We can use the following transformations in a mapping for a slowly changing dimension table:
- Expression
- Lookup
- Filter
- Sequence Generator
- Update Strategy
54) How do I import VSAM files from source to target. Do I need a special plugin
Ans) As far as I know, use the PowerExchange tool to convert the VSAM file to Oracle tables, then do the mapping as usual to the target table.
55) What is meant by aggregate fact table and where is it used?
Ans) Basically fact tables are of two kinds: 1. aggregated fact tables and 2. factless fact tables. An aggregated fact table has aggregated columns, e.g. Total_Sal, Dep_Sal, whereas a factless fact table has no aggregated columns and only has foreign keys to the dimension tables.
56) What are Target Types on the Server?
Ans) Target Types are File, Relational and ERP.
57) What are mapping parameters and variables, and in which situations can we use them?
Ans) If we need to change certain attributes of a mapping after every session run, it would be very difficult to edit the mapping each time. So we use mapping parameters and variables and define the values in a parameter file; then we only edit the parameter file to change the attribute values. This makes the process simple.
A mapping parameter's value remains constant; if we need to change it, we edit the parameter file.
The value of a mapping variable, however, can be changed by using variable functions. If we need to increment an attribute value by 1 after every session run, we can use mapping variables. With a mapping parameter, we would need to manually edit the value in the parameter file after every session run.
58) How do you create single lookup transformation using multiple tables?
Ans) Write an override SQL query and adjust the ports to match the SQL query.
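For example, a lookup SQL override joining two hypothetical tables EMP and DEPT might look like this (the Lookup transformation's ports must match the column aliases):
SELECT EMP.EMP_ID AS EMP_ID, EMP.EMP_NAME AS EMP_NAME, DEPT.DEPT_NAME AS DEPT_NAME
FROM EMP, DEPT
WHERE EMP.DEPT_ID = DEPT.DEPT_ID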
59) What is meant by the direct and indirect loading options in sessions?
Ans) When we use multiple source files, we create a file containing the names and directories of each source file we want the PowerCenter Server to use. This file is referred to as a file list.
When configuring the session properties, choose Indirect in the Source Filetype field, enter the file name of the file list in the Source Filename field, and enter the location of the file list in the Source File Directory field. When the session starts, the PowerCenter Server reads the file list, then locates and reads the first source file in the list. After the PowerCenter Server reads the first file, it locates and reads the next file in the list.
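The file list itself is just a plain text file with one source file per line, e.g. (hypothetical paths):
/data/src/sales_jan.dat
/data/src/sales_feb.dat
/data/src/sales_mar.dat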
60) What are Target Options on the Servers?
Ans) Target Options for File Target type are FTP File, Loader and MQ.
There are no target options for ERP target type.
Target Options for Relational are Insert, Update (as Update), Update (as Insert),
Update
(else Insert), Delete, and Truncate Table.

61) What is the difference between a view and a materialized view?
Ans) Materialized views are schema objects that can be used to summarize, precompute, replicate and distribute data, e.g. to construct a data warehouse. A materialized view provides indirect access to table data by storing the results of a query in a separate schema object, unlike an ordinary view, which does not take up any storage space or contain any data.
62) To achieve the session partition what are the necessary tasks you have to do?
Ans) Configure the session to partition the source data, and install the Informatica server on a machine with multiple CPUs.
63) On day one, I load 10 rows into my target; the next day I get 10 more rows to be added to my target, out of which 5 are updated rows. How can I send them to the target, i.e., insert the new records and update the existing ones?
Ans) We can achieve this task with SCD (slowly changing dimension) Type 1:
1. Have a lookup on the target and check the primary key values: if the record is new, insert it; if the record has changed, update it.
2. For this you have to create an Update Strategy transformation inside the mapping.
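A sketch of the Update Strategy expression for this pattern (lkp_target is a hypothetical unconnected lookup on the target that returns the primary key, or NULL when the key is not found):
IIF( ISNULL(:LKP.lkp_target(CUST_ID)), DD_INSERT, DD_UPDATE )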
64) Can you generate reports in Informatcia?
Ans) Yes, by using the Metadata Reporter we can generate reports in Informatica. Informatica itself is a tool for extracting, transforming and loading data; it is not a full reporting tool.
65) Explain use of update strategy transformation
Ans) This is an important transformation; it is used to maintain history data, or just the most recent changes, in the target table.
We can set or flag the records at two levels:
1) Within a session: when you configure the session, you can instruct the Informatica server to treat all the records in the same way.
2) Within a mapping: within a mapping we use the Update Strategy transformation to flag the records for insert, update, delete or reject.
66) The designer includes a "Find" search tool as part of the standard tool bar.
What can it be used to find?
Ans) Search for two things:
1. Transformations
2. Ports in the Transformation
67) If you have four lookup tables in the workflow. How do you troubleshoot to
improve performance?
Ans) There are many ways to improve a mapping which has multiple lookups:
1) We can create an index on the lookup table if we have permissions (staging area).
2) Divide the lookup mapping into two: (a) dedicate one to inserts (source minus target): these are new rows; only the new rows will come to the mapping and the process will be fast; (b) dedicate the second one to updates (source intersect target): these are existing rows; only the rows which already exist will come into the mapping.
3) We can increase the cache size of the lookup.
68) How to recover sessions in concurrent batches?
Ans) If multiple sessions in a concurrent batch fail, you might want to truncate
all targets
and run the batch again. However, if a session in a concurrent batch fails and the
rest of
the sessions complete successfully, you can recover the session as a standalone
session.
To recover a session in a concurrent batch:
1.Copy the failed session using Operations-Copy Session.
2.Drag the copied session outside the batch to be a standalone session.
3.Follow the steps to recover a standalone session.
4.Delete the standalone copy.
69) Briefly explain the versioning concept in PowerCenter 7.1.
Ans) When you create a version of a folder referenced by shortcuts, all shortcuts
continue
to reference their original object in the original version. They do not
automatically update to
the current folder version.
For example, if you have a shortcut to a source definition in the Marketing folder,
version
1.0.0, then you create a new folder version, 1.5.0, the shortcut continues to point
to the
source definition in version 1.0.0.
Maintaining versions of shared folders can result in shortcuts pointing to
different versions
of the folder. Though shortcuts to different versions do not affect the server,
they might
prove more difficult to maintain. To avoid this, you can recreate shortcuts
pointing to earlier
versions, but this solution is not practical for much-used objects. Therefore, when
possible,
do not version folders referenced by shortcuts.
70) Why do we use Lookup transformations?
Ans) To get a related value: e.g., get the employee name from the EMPLOYEE table based on the employee ID.
To perform a calculation on a looked-up value.
To update slowly changing dimension tables: we can use an unconnected Lookup transformation to determine whether a record already exists in the target or not.
71) What is Datadriven?
Ans) The Informatica server follows the instructions coded into the Update Strategy transformations within the session mapping to determine how to flag records for insert, update, delete or reject. If you do not choose the Data Driven option setting, the Informatica server ignores all Update Strategy transformations in the mapping. If the Data Driven option is selected in the session properties, it follows the instructions in the Update Strategy transformations in the mapping; otherwise it follows the instructions specified in the session.
72) What is batch and describe about types of batches?
Ans) A batch is a group of sessions; different batches are different groupings of sessions. There are two types of batches:
1. Concurrent
2. Sequential
73) Can Informatica be used as a cleansing tool? If yes, give examples of transformations that can implement a data-cleansing routine.
Ans) Yes, we can use Informatica for cleansing data; sometimes we use staging areas for cleansing, and depending on performance we can instead use an Expression transformation to cleanse the data.
For example, if a field X has some values and some NULLs and is assigned to a target field that is a NOT NULL column, inside an Expression we can assign a space or some constant value to avoid session failure.
If the input data is in one format and the target is in another format, we can change the format in an Expression.
We can also assign default values in the target to represent a complete set of data in the target.
74) Differences between connected and unconnected lookup?
Ans) Connected lookup:
1> Receives input values directly from the pipeline.
2> You can use a dynamic or static cache.
3> The cache includes all lookup columns used in the mapping.
4> Supports user-defined default values.
Unconnected lookup:
1> Receives input values from the result of a :LKP expression in another transformation.
2> You can use only a static cache.
3> The cache includes all lookup output ports in the lookup condition and the lookup/return port.
4> Does not support user-defined default values.
75) How to read rejected data or bad data from bad file and reload it to target?
Ans) Correct the rejected data and send it to the target relational tables using the loadorder utility. Find the rejected data by using the column indicator and row indicator.
76) What are the various test procedures used to check whether the data is loaded
in the backend, performance of the mapping, and quality of the data loaded in
INFORMATICA. 2) What are the common problems developers face while ETL
development
Ans) If you want to know the performance of a mapping at transformation level, select the "Collect performance data" option in the session properties. At run time you can see it in the Monitor's performance tab, or you can get it from a file. The PowerCenter Server names the file session_name.perf and stores it in the same directory as the session log; if there is no session-specific directory for the session log, the PowerCenter Server saves the file in the default log files directory.
The quality of the data loaded depends on the quality of the data in the source. If cleansing is required, perform data-cleansing operations in Informatica; the final data will be clean if this is followed.
77) What are the types of data that passes between informatica server and stored
procedure?
Ans) Three types of data:
Input/output parameters
Return values
Status codes
78) What are the types of metadata that stores in repository?
Ans) Following are the types of metadata that stores in the repository:-
Database connections
Global objects
Mappings
Mapplets
Multidimensional metadata
Reusable transformations
Sessions and batches
Short cuts
Source definitions
Target definitions
Transformations.
79) How to move the mapping from one database to another?
Ans) 1. Open the mapping you want to migrate. Go to the File menu, select 'Export Objects' and give a name; an XML file will be generated. Connect to the repository where you want to migrate, then select File menu > 'Import Objects' and select the XML file name.
2. Connect to both repositories. Go to the source folder, select the mapping name from the object navigator, and select 'Copy' from the 'Edit' menu. Now go to the target folder and select 'Paste' from the 'Edit' menu. Be sure you open the target folder.
80) What is the target load order?
Ans) The Integration Service reads sources in a target load order group
concurrently, and it
processes target load order groups sequentially.
To specify the order in which the Integration Service sends data to targets, create
one
source qualifier for each target within a mapping. To set the target load order,
you then
determine in which order the Integration Service reads each source in the mapping.
To set the target load order:
1.Create a mapping that contains multiple target load order groups.
2.Click Mappings > Target Load Plan.
The Target Load Plan dialog box lists all Source Qualifier transformations in the
mapping
and the targets that receive data from each source qualifier.
3.Select a source qualifier from the list.
4.Click the Up and Down buttons to move the source qualifier within the load order.
5.Repeat steps 3 to 4 for other source qualifiers you want to reorder.
6.Click OK.
81) Can we eliminate duplicate rows by using Filter and Router transformations? If so, please explain in detail.
Ans) You can use a SQL query (SELECT DISTINCT) for uniqueness if the source is relational. But if the source is a flat file, then you should use a Sorter (with the distinct option) or an Aggregator transformation.
82) What is parameter file?
Ans) A parameter file defines the values for the parameters and variables used in a session. A parameter file is a file created with a text editor such as WordPad or Notepad.
You can define the following values in a parameter file:
Mapping parameters
Mapping variables
Session parameters
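A minimal example of a parameter file (the folder, session, parameter and connection names are hypothetical; $$ prefixes mapping parameters/variables, while $ prefixes session parameters):
[MyFolder.s_load_orders]
$DBConnectionSource=ORA_OLTP
$InputFile1=/data/src/orders.dat
$$LoadDate=2011-01-01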
83) Can you use the mapping parameters or variables created in one mapping in another mapping?
Ans) No.
84) How do you check the source for the latest records that are to be loaded into the target? I.e., I loaded some records yesterday; today the file has been populated with some more records; how do I find the records added today?
Ans) a) Create a lookup on the target table from the Source Qualifier based on the primary key.
b) Use an expression to evaluate the primary key from the target lookup (for a new source record, the lookup of the target table's primary key port returns NULL). Trap this with DECODE and proceed.
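For example (a sketch; lkp_target is a hypothetical lookup that returns the target's primary key, or NULL for a new record):
DECODE( TRUE,
ISNULL(:LKP.lkp_target(ORDER_ID)), 'NEW',
'EXISTING' )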
85) What is the default join that source qualifier provides?
Ans) An inner equijoin. (Without a join condition, the generated SQL would produce a cross join.)
86) Why did you use stored procedure in your ETL Application?
Ans) Usage of stored procedures has the following advantages:
1) checks the status of the target database
2) drops and recreates indexes
3) determines whether enough space exists in the database
4) performs a specialized calculation
87) What is the difference between a stored procedure at the DB level and the Stored Procedure transformation at the Informatica level? And why should we use the SP transformation?
Ans) Stored Procedure transformation:
In the database, we execute a stored procedure using the "EXECUTE stored_procedure_name" command.
In Informatica, when we create a Stored Procedure transformation it contains a return port by default; this port is already bound to the stored procedure we selected while creating the transformation. We just need to connect it to an output port towards the target and connect input ports to it.
Uses:
1) Used to populate and maintain databases.
2) Allows user-defined variables, conditional statements and other powerful programming features.
3) Very useful, as stored procedures are more flexible than plain SQL statements.
4) Provides the error handling and logging necessary for critical tasks.
5) Used for many other tasks.
88) What is difference between stored procedure transformation and external
procedure transformation?
Ans) In the case of a Stored Procedure transformation, the procedure is compiled and executed in a relational data source; you need a database connection to import the stored procedure into your mapping. In an External Procedure transformation, the procedure or function is executed outside the data source, i.e., you need to build it as a DLL to access it in your mapping; no database connection is needed in the case of an External Procedure transformation.
89) What is the procedure to load the fact table.Give in detail?
Ans) Load the dimension tables first, according to your business needs. For the fact table, you need a primary key, so use a Sequence Generator transformation to generate a unique key and pipe it to the target (fact) table along with the foreign keys from the source/dimension tables.
90) What is the status code?
Ans) The status code provides error handling for the Informatica server during the session. The stored procedure issues a status code that notifies whether or not the stored procedure completed successfully. This value cannot be seen by the user; it is only used by the Informatica server to determine whether to continue running the session or to stop.
91) What are variable ports and list two situations when they can be used?
Ans) We have mainly three kinds of ports: input, output and variable. An input port means data is flowing into the transformation; an output port means data is mapped to the next transformation; a variable port is used when mathematical calculations or intermediate values are required. Two situations where variable ports are useful: 1) to hold an intermediate calculation that several output ports reuse; 2) to store values from previous records, which is not otherwise possible in Informatica.
92) While importing the relational source definition from the database, what metadata of the source do you import?
Ans) Source name
Database location
Column names
Data types
Key constraints.
93) What is Transaction?
Ans) A transaction can be defined as a DML operation: an insertion, modification or deletion of data performed by users/analysts/applications. A transaction is a logical unit of work that comprises one or more SQL statements executed by a single user.
94) How can you access the remote source into your session?
Ans) Relational source: to access a relational source situated in a remote place, you need to configure a database connection to the data source.
File source: to access a remote source file, you must configure an FTP connection to the host machine before you create the session.
Heterogeneous: when your mapping contains more than one source type, the Server Manager creates a heterogeneous session that displays source options for all types.
95) What are the basic needs to join two sources in a source qualifier?
Ans) The basic needs to join two sources using a Source Qualifier:
1) Both sources should be in the same database.
2) They should have at least one column in common, with the same data type.
96) What is the difference between a Joiner transformation and a Source Qualifier transformation?
Ans) A Joiner transformation can be used to join tables from heterogeneous (different) sources, but we still need a common key from both tables; if we join two tables without a common key, we end up with a Cartesian join. The Joiner can join tables from different source systems, whereas the Source Qualifier can only join tables in the same database. We definitely need a common key to join two tables, no matter whether they are in the same database or different databases.
97) Without using the Update Strategy and session options, how can we update our target table?
Ans) In the target definition there is an option to write a target update override query; there we can specify the update query, and it will update the rows.
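A sketch of such a target update override (T_EMP and its columns are hypothetical; :TU references the target transformation's input ports):
UPDATE T_EMP
SET EMP_NAME = :TU.EMP_NAME, SALARY = :TU.SALARY
WHERE EMP_ID = :TU.EMP_ID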
98) What are the types of mapping in the Getting Started Wizard?
Ans) Simple pass-through mapping: loads a static fact or dimension table by inserting all rows. Use this mapping when you want to drop all existing data from your table before loading new data.
Slowly growing target: loads a slowly growing fact or dimension table by inserting new rows. Use this mapping to load new data when the existing data does not require updates.
99) With mapping parameters and variables, the variable value is saved to the repository after the session completes, and the next time you run the session the server takes the saved value from the repository and starts assigning from the next value. For example, I ran a session and at the end it stored a value of 50 in the repository; the next time I run the session, I want it to start with a value of 70, not 51. How do I do this?
Ans) You can do one thing: after running the mapping, in the Workflow Manager right-click on the session and choose the persistent values option; there you will find the last value stored in the repository for the mapping variable. Remove it and put in your desired value, then run the session; the task will be done.
100) What are the joiner caches?
Ans) When a session with a Joiner transformation runs, the server reads all the rows from the master source and builds index and data caches of the master rows. After building the caches, the Joiner transformation reads records from the detail source and performs the joins.
101) What transformation you can use inplace of lookup?
Ans) A Lookup transformation can serve in many situations, so if you can be a bit more specific about the scenario, it would be easier to answer. In general, a Joiner transformation (or a join in the Source Qualifier, when both sources are in the same database) can often be used in place of a lookup.
102) How to define Informatica server?
Ans) The Informatica server is the main server component in the Informatica product family. It is responsible for reading the data from various source systems, transforming the data according to business rules, and loading the data into the target tables.
103) How can you complete unrecoverable sessions?
Ans) Under certain circumstances, when a session does not complete, you need to truncate the target tables and run the session from the beginning. Run the session from the beginning when the Informatica server cannot run recovery, or when running recovery might result in inconsistent data.
104) How to look up data in multiple tables?
Ans) If the two tables are relational, you can use the lookup SQL override option to join the two tables in the lookup properties (you cannot join a flat file and a relational table this way).
E.g., the default lookup query is: SELECT <lookup table columns> FROM <lookup_table>. You can extend this query: add the column names of the second table with a qualifier and a WHERE clause. If you want to use an ORDER BY, put "--" at the end of the ORDER BY.
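For example (a sketch; EMP and DEPT are hypothetical tables, and the trailing "--" comments out the ORDER BY that the server appends to the query):
SELECT EMP.SALARY AS SALARY, DEPT.DEPT_NAME AS DEPT_NAME, EMP.EMP_ID AS EMP_ID
FROM EMP, DEPT
WHERE EMP.DEPT_ID = DEPT.DEPT_ID
ORDER BY EMP.EMP_ID --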
105) What is the default source option for the Update Strategy transformation?
Ans) The default option for the Update Strategy transformation is DD_INSERT (or we can put '0'); at the session level, the setting is Data Driven.
106) What is pushdown optimization in PowerCenter 8.x? Give an example.
Ans) Use pushdown optimization to push transformation logic to the source or target database. The Integration Service analyzes the transformation logic, mapping and session configuration to determine which transformation logic it can push to the database. At run time, the Integration Service executes any SQL statements generated against the source or target tables, and it processes any transformation logic that it cannot push to the database.
Select one of the following values:
- None. The Integration Service does not push any transformation logic to the
database.
- To Source. The Integration Service pushes as much transformation logic as
possible to the
source database.
- To Target. The Integration Service pushes as much transformation logic as
possible to the
target database.
- Full. The Integration Service pushes as much transformation logic as possible to
both the
source database and target database.
- $$PushdownConfig. The $$PushdownConfig mapping parameter allows you to run the
same session with different pushdown optimization configurations at different
times. For
more information about configuring the $$PushdownConfig mapping parameter and
parameter file, see Using the $$PushdownConfig Mapping Parameter.
107) In a scenario I have col1, col2, col3 with the rows (1,x,y) and (2,a,b), and I
want the output as col1, col2 with the rows (1,x), (1,y), (2,a), (2,b). What is the
procedure?
Ans) Use a Normalizer transformation: define col1 with Occurs = 1 and the repeating
column with Occurs = 2. The Normalizer then exposes two input ports for the repeating
column (connect col2 and col3 to them) and a single output port that emits one row
per occurrence; connect col1 and that output port to the target.
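For comparison, the same columns-to-rows pivot can be written in plain SQL (a sketch
assuming a hypothetical source table SRC(col1, col2, col3)):
-- one output row per (col1, col2) pair and one per (col1, col3) pair
SELECT col1, col2 FROM src
UNION ALL
SELECT col1, col3 FROM src;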
108) If you had to split the source-level key going into two separate tables, one as
a surrogate key and the other as a primary key, and Informatica does not guarantee
that keys are loaded in order into those tables, what are the different ways you
could handle this type of situation?
Ans) Enforce a foreign-key relationship between the two tables.
109) What are the transformations that restrict the partitioning of sessions?
Ans) Advanced External Procedure and External Procedure transformations: these
transformations contain a check box on the Properties tab to allow partitioning.
Aggregator transformation: if you use sorted ports, you cannot partition the
associated source.
Joiner transformation: you cannot partition the master source for a Joiner
transformation.
Normalizer transformation.
XML targets.
PowerExchange sources and targets.
110) Can you explain one critical mapping? 2. As a performance issue, which one is
better: a connected Lookup transformation or an unconnected one?
Ans) It depends on your data and the type of operation you are doing.
If you need to look up a value for all the rows, or for most of the rows coming out
of the source, then go for a connected lookup; if not, go for an unconnected lookup,
especially in conditional cases. For example, suppose we have to get the value for a
field 'customer' either from the order table or from the customer_data table, based
on the rule: if customer_name is null then customer = customer_data.customer_id,
otherwise customer = order.customer_name. In this case we would go for an unconnected
lookup, because the lookup is only invoked for the rows that need it.
Types of dimensions:
1. Slowly Changing Dimensions (SCD)
2. Rapidly Changing Dimensions
3. Junk Dimensions
4. Large Dimensions
5. Degenerate Dimensions
6. Conformed Dimensions
111) What is hash table informatica?
Ans) In hash partitioning, the Informatica server uses a hash function to group rows
of data among partitions. The server groups the data based on a partition key. Use
hash partitioning when you want the Informatica server to distribute rows to the
partitions by group; for example, you need to sort items by item ID, but you do not
know how many items have a particular ID number.
112) In a joiner transformation, you should specify the source with fewer rows as
the master source. Why?
Ans) The Joiner transformation compares each row of the master source against the
detail source. The fewer unique rows in the master, the fewer iterations of the join
comparison occur, which speeds up the join process. Also, the Joiner transformation
caches the master table's data, so it is advisable to define the table with fewer
rows as the master.
113) What is the difference between a cached and an uncached lookup? Can I run a
mapping without starting the Informatica server?
Ans) When you configure the Lookup transformation as a cached lookup, it reads the
entire lookup table into the cache when the first input record enters the
transformation; the SELECT statement executes only once, and the values of each input
record are compared with the values in the cache. In an uncached lookup, the SELECT
statement executes for each input record entering the transformation, so the server
has to connect to the database for every new record.
And no, a mapping cannot run without the Informatica server; the server is what
executes sessions.
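To illustrate, for a hypothetical lookup table LKP_TAB(key_col, val_col), the queries
issued look roughly like this:
-- cached lookup: one query, issued once while building the cache
SELECT key_col, val_col FROM lkp_tab ORDER BY key_col, val_col;
-- uncached lookup: one query per input row
SELECT val_col FROM lkp_tab WHERE key_col = <value from the current input row>;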
114) What are the tasks that the Load Manager process will do?
Ans) Manages session and batch scheduling: when you start the Informatica server, the
Load Manager launches and queries the repository for a list of sessions configured to
run on that server. When you configure a session, the Load Manager maintains the list
of sessions and session start times. When you start a session, the Load Manager
fetches the session information from the repository to perform validations and
verifications prior to starting the DTM process.
Locking and reading the session: when the Informatica server starts a session, the
Load Manager locks the session in the repository. Locking prevents you from starting
the same session again while it is already running.
Reading the parameter file: if the session uses a parameter file, the Load Manager
reads the parameter file and verifies that the session-level parameters are declared
in the file.
Verifies permissions and privileges: when the session starts, the Load Manager checks
whether or not the user has the privileges to run the session.
Creating log files: the Load Manager creates the log file containing the status of
the session.
115) How can we join tables that have no primary/foreign-key relation and no matching
port to join on?
Ans) Without a common column or common data type we can join two sources using dummy
ports:
1. Add one dummy port to each of the two sources.
2. In an Expression transformation, assign the constant '1' to each dummy port.
3. Use a Joiner transformation to join the sources on the dummy ports (join condition
dummy1 = dummy2); this effectively produces a cross join.
116) In a sequential Batch how can we stop single session?
Ans) We can stop it using the pmcmd command, or in the Workflow Monitor right-click
on that particular session and select Stop. This stops the current session and the
sessions after it.
117) How to create the staging area in your database?
Ans) A staging area in a DW is used as a temporary space to hold all the records from
the source system. So, more or less, it should be an exact replica of the source
system, except for the load strategy, where we use truncate-and-reload options.
So create it using the same layout as your source tables, or use the Generate SQL
option in the Warehouse Designer tab.
118) What logic will you implement to load the data into one fact table from 'n'
dimension tables?
Ans) Normally one of the following is used:
1) slowly changing dimensions
2) slowly growing dimensions
119) What are the basic requirements to join two sources in a Source Qualifier?
Ans) Both tables should have a common field with the same data type. It is not
necessary that they follow a primary/foreign-key relationship, though if such a
relationship exists it will help from a performance point of view. The two sources
should also be relational and homogeneous (from the same database).
120) What are various types of Aggregation?
Ans) Various types of aggregation are SUM, AVG, COUNT, MAX, MIN, FIRST, LAST,
MEDIAN, PERCENTILE, STDDEV, and VARIANCE.
121) If you want to create indexes after the load process, which transformation do
you choose?
Ans) This is usually not done at the mapping (transformation) level; it is done at
the session level. Create a Command task that executes a shell script (on Unix) or
any other script containing the CREATE INDEX commands. Use this Command task in the
workflow after the session, or else define it as a post-session command.
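A sketch of what such a post-load script might run, assuming a hypothetical target
table TGT_SALES:
-- rebuild the indexes once the load has completed
CREATE INDEX idx_tgt_sales_cust ON tgt_sales (customer_id);
CREATE INDEX idx_tgt_sales_date ON tgt_sales (sale_date);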
122) How does the Informatica server increase session performance through
partitioning the source?
Ans) For relational sources, the Informatica server creates multiple connections, one
for each partition of a single source, and extracts a separate range of data for each
connection; it reads the multiple partitions of a single source concurrently.
Similarly, for loading, the Informatica server creates multiple connections to the
target and loads the partitions of data concurrently.
For XML and file sources, the Informatica server reads multiple files concurrently.
For loading the data, it creates a separate file for each partition of a source file;
you can choose to merge the targets.
123) How can you improve the performance of the Aggregator transformation?
Ans) We can improve Aggregator performance in the following ways:
1. Send sorted input (use a Sorter transformation and enable the Sorted Input option
in the Aggregator properties).
2. Increase the aggregator cache size, i.e. the index cache and the data cache.
3. Pass only the input/output you need through the transformation, i.e. reduce the
number of input and output ports.
4. Filter the records before they reach the Aggregator.
124) What are the unsupported repository objects for a mapplet?
Ans) Source definitions. Definitions of database objects (tables, views, synonyms)
or files
that provide source data.
Target definitions. Definitions of database objects or files that contain the
target data.
Multi-dimensional metadata. Target definitions that are configured as cubes and
dimensions.
Mappings. A set of source and target definitions along with transformations
containing
business logic that you build into the transformation. These are the instructions
that the
Informatica Server uses to transform and move data.
Reusable transformations. Transformations that you can use in multiple mappings.
Mapplets. A set of transformations that you can use in multiple mappings.
Sessions and workflows. Sessions and workflows store information about how and when
the
Informatica Server moves data. A workflow is a set of instructions that describes
how and
when to run tasks related to extracting, transforming, and loading data. A session
is a type
of task that you can put in a workflow. Each session corresponds to a single
mapping.
125) What are the types of lookup caches?
Ans) 1)Static Cache
2)Dynamic Cache
3)Persistent Cache
4)Reusable Cache
5)Shared Cache
126) What are the tasks that the Source Qualifier performs?
Ans) Join data originating from the same source database. You can join two or more
tables
with primary-foreign key relationships by linking the sources to one Source
Qualifier.
Filter records when the Informatica Server reads source data. If you include a
filter
condition, the Informatica Server adds a WHERE clause to the default query.
Specify an outer join rather than the default inner join. If you include a user-
defined join,
the Informatica Server replaces the join information specified by the metadata in
the SQL
query.
Specify sorted ports. If you specify a number for sorted ports, the Informatica
Server adds
an ORDER BY clause to the default SQL query.
Select only distinct values from the source. If you choose Select Distinct, the
Informatica
Server adds a SELECT DISTINCT statement to the default SQL query.
Create a custom query to issue a special SELECT statement for the Informatica
Server to
read source data. For example, you might use a custom query to perform aggregate
calculations or execute a stored procedure.
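For example, with a filter condition, two sorted ports, and Select Distinct enabled,
the default query for a hypothetical EMPLOYEES table would be generated roughly as:
SELECT DISTINCT employees.emp_id, employees.dept_id, employees.salary
FROM employees
WHERE employees.salary > 1000
ORDER BY employees.emp_id, employees.dept_id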
127) If a session fails after loading of 10,000 records in to the target.How can u
load the records from 10001 th record when u run the session next time in
informatica 6.1?
Ans) Running the session in recovery mode will work, but the target load type should
be Normal. If it is Bulk, recovery won't work as expected.
128) Why dimenstion tables are denormalized in nature ?
Ans) Because in data warehousing historical data should be maintained. Maintaining
historical data means, for example, keeping an employee's details both for where he
previously worked and where he is working now, all in the same table. If you enforce
a primary key on the natural key (the employee id), it won't allow duplicate records
with the same employee id; so to maintain historical data in a data warehouse we use
surrogate keys (e.g. an Oracle sequence for the key column). Since the dimensions
maintain historical data this way, they are denormalized: what looks like a duplicate
entry is not an exact duplicate record, but another row maintained in the table for
the same employee number.
129) What is polling?
Ans) It displays the updated information about the session in the monitor window.
The
monitor window displays the status of each session when you poll the informatica
server.
130) In which conditions can we not use a Joiner transformation (limitations of the
Joiner transformation)?
Ans) Both pipelines begin with the same original data source.
Both input pipelines originate from the same Source Qualifier transformation.
Both input pipelines originate from the same Normalizer transformation.
Both input pipelines originate from the same Joiner transformation.
Either input pipeline contains an Update Strategy transformation.
Either input pipeline contains a connected or unconnected Sequence Generator
transformation.
131) What are the active and passive transformations?
Ans) Transformations can be active or passive. An active transformation can change
the
number of rows that pass through it, such as a Filter transformation that removes
rows that
do not meet the filter condition.
A passive transformation does not change the number of rows that pass through it,
such as
an Expression transformation that performs a calculation on data and passes all
rows
through the transformation.
132) What is a mapplet?
Ans) A mapplet is a set of transformations that you build in the Mapplet Designer and
can reuse in multiple mappings. It is a reusable object defined with business logic
using a set of transformations, created with the Mapplet Designer tool.
133) What is a surrogate key? In which situation did you use it in your project?
Explain with an example.
Ans) A surrogate key is a system-generated/artificial key or sequence number, used as
a substitution for the natural primary key. It is just a unique identifier or number
for each row that can be used as the primary key of the table; the only requirement
for a surrogate primary key is that it is unique for each row in the table. It is
useful because the natural primary key (e.g. Customer Number in the Customer table)
can change, and this makes updates more difficult. In my project, the primary reason
for the surrogate keys was to record the changing context of the dimension attributes
(particularly for SCDs). They are integers because integer joins are faster.
134) Partitioning, Bitmap Indexing (when to use), how will the bitmap indexing
will effect the performance
Ans) Bitmap indexing is an indexing technique used to tune the performance of SQL
queries. The default index type is B-tree, which suits high-cardinality (e.g.
normalized) data. You can use bitmap indexes for denormalized data or low-cardinality
columns; a common rule of thumb is that the number of distinct values should be less
than about 4% of the total rows. If a column satisfies that condition, a bitmap index
will optimize query performance for that kind of table.
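A minimal sketch, assuming a hypothetical dimension table DIM_CUSTOMER with a
low-cardinality GENDER column (Oracle syntax):
-- a bitmap index suits a column with very few distinct values
CREATE BITMAP INDEX idx_dim_customer_gender ON dim_customer (gender);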
135) What is the difference between a dimension table and a fact table, and what are
the different kinds of each?
Ans) A fact table contains measurable data (the measures), typically with fewer
columns and many rows, and it carries the foreign keys of the corresponding
dimensions (which often form its composite primary key).
Different types of fact tables: additive, non-additive, semi-additive.
A dimension table contains textual descriptions of the data, typically with many
columns and fewer rows, and it has its own primary key.
In short: fact tables hold the measures; dimension tables hold the descriptive,
non-measurable attributes.
136) What are cost based and rule based approaches and the difference
Ans) Cost-based and rule-based approaches are optimization techniques used in
databases to optimize a SQL query.
Basically Oracle provides two types of optimizers (indeed three, but we use only
these two techniques, because the third has some disadvantages).
Whenever you process any SQL query in Oracle, the Oracle engine internally reads the
query and decides the best possible way of executing it. In this process, Oracle
follows these optimization techniques:
1. Cost-Based Optimizer (CBO): if a SQL query can be executed in two different ways
(say path 1 and path 2 for the same query), the CBO calculates the cost of each path,
analyses which path has the lower execution cost, and executes that path, thereby
optimizing the query execution.
2. Rule-Based Optimizer (RBO): this follows a fixed set of rules for executing a
query; depending on the rules that apply, the optimizer chooses the execution path.
Use:
If the table you are querying has already been analyzed, then Oracle will go with the
CBO.
If the table is not analyzed, Oracle follows the RBO.
For the first time, if the table is not analyzed, Oracle will go with a full table
scan.
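For example, gathering statistics is what enables the CBO to cost the access paths (a
sketch for a hypothetical EMP table, Oracle syntax):
-- legacy statement to collect table statistics
ANALYZE TABLE emp COMPUTE STATISTICS;
-- the preferred call on later Oracle versions
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'EMP');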
137) What will happen if you are using an Update Strategy transformation and your
session is configured for "insert"? What are the types of external loader available
with Informatica? If you have a rank index for the top 10 but you pass only 5
records, what will be the output of such a Rank transformation?
Ans) If you are using an Update Strategy transformation in your mapping, then in the
session properties you have to set 'Treat source rows as' to Data Driven. If you
select Insert, Update or Delete instead, the Informatica server will not consider the
Update Strategy transformation when performing database operations.
Alternatively, you can use the session-level options instead of an Update Strategy in
the mapping: select Update in 'Treat source rows as' together with the 'Update else
Insert' option. This does the same job as the Update Strategy transformation, but be
sure to have a primary key on the target table.
2) For Oracle: SQL*Loader. For Teradata: TPump and MultiLoad.
3) If you pass only 5 rows to the Rank transformation, it will simply rank (and
return) those 5 records based on the rank port.
138) What is aggregate cache in aggregator transforamtion?
Ans) When you run a workflow that uses an Aggregator transformation, the
Informatica Server creates index and data caches in memory to process the
transformation. If the Informatica Server requires more space, it stores overflow
values in cache files.
139) Which transformation should we use to normalize the COBOL and relational
sources?
Ans) The Normalizer transformation normalizes records from COBOL and relational
sources, allowing you to organize the data according to your own needs. A
Normalizer transformation can appear anywhere in a data flow when you normalize
a relational source. Use a Normalizer transformation instead of the Source
Qualifier
transformation when you normalize a COBOL source. When you drag a COBOL
source into the Mapping Designer workspace, the Normalizer transformation
automatically appears, creating input and output ports for every column in the
source
140) What are the measure objects?
Ans) Aggregate calculations like SUM, AVG, MAX and MIN; these are the measure
objects.
141) What is DTM process?
Ans) After the Load Manager performs validations for the session, it creates the DTM
(Data Transformation Manager) process. The DTM's job is to create and manage the
threads that carry out the session tasks: it creates the master thread, which in turn
creates and manages all the other threads. The DTM is the main background process in
Informatica; it runs after the Load Manager completes. In this process the
Informatica server looks up the source and target connections in the repository and,
if they are correct, fetches the data from the source and loads it into the target.
142)What are the options in the target session of update strategy transformation?
Ans) Insert
Delete
Update
Update as update
Update as insert
Update else insert
Truncate table
143) What are the designer tools for creating tranformations?
Ans) Mapping designer
Transformation Developer
Mapplet designer.
144) What is Code Page used for?
Ans) Code Page is used to identify characters that might be in different languages.
If you are importing Japanese data into mapping, you must select the Japanese code
page of source data.
145) Can I start and stop a single session in a concurrent batch?
Ans) Yes: right-click on the particular session and use the start/recovery options,
or control it by using Event-Wait and Event-Raise tasks.
146) What are the rank caches?
Ans) During the session, the Informatica server compares an input row with the rows
in the data cache. If the input row out-ranks a stored row, the Informatica server
replaces the stored row with the input row. The Informatica server stores group
information in an index cache and row data in a data cache.
147) Why and where we are using factless fact table?
Ans) Factless fact tables are fact tables with no facts or measures (numerical data);
they contain only the foreign keys of the corresponding dimensions. A factless fact
table is used to track events by means of the key values alone (e.g. attendance or
coverage).
148) How can you delete duplicate rows with out using Dynamic Lookup? Tell me
any other ways using lookup delete the duplicate rows?
Ans) For example, you have a source table Emp_Name with two columns, Fname and Lname,
containing duplicate rows. In the mapping, create an Aggregator transformation. Edit
the Aggregator, select the Ports tab, check GroupBy for both Fname and Lname, and
uncheck the output (O) flag on both input ports. Then create two new output ports,
uncheck their input (I) flag, and set the expression of the first to Fname and of the
second to Lname. Link the Aggregator's output ports to the target table; because the
Aggregator returns one row per group, the duplicates are removed.
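The Aggregator here is doing the equivalent of a SQL GROUP BY; for comparison, the
same de-duplication in plain SQL would be:
SELECT fname, lname
FROM emp_name
GROUP BY fname, lname;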
149) What are the different options used to configure the sequential batches?
Ans) Two options:
1. Run the session only if the previous session completes successfully.
2. Always run the session.
150) How to Generate the Metadata Reports in Informatica?
Ans) You can generate PowerCenter Metadata Reporter from a browser on any
workstation, even a workstation that does not have PowerCenter tools installed.
151) How do we estimate the number of partitons that a mapping really requires?
Is it dependent on the machine configuration?
Ans) It depends upon the Informatica version we are using: Informatica 6 supports
only 32 partitions, whereas Informatica 7 supports 64. It also depends on the machine
configuration (CPUs, memory) and where the bottleneck in the pipeline is.
152) How does the Informatica server sort string values in the Rank transformation?
Ans) When the Informatica server runs in ASCII data movement mode, it sorts session
data using a binary sort order. If you configure the session to use a binary sort
order, the Informatica server calculates the binary value of each string and returns
the specified number of rows with the highest binary values for the string.
153) How can you create or import a flat file definition into the Warehouse Designer?
Ans) You can create a flat file definition in the Warehouse Designer: create a new
target, select the type as flat file, save it, and enter the various columns for that
target by editing its properties. Once the target is created and saved, you can use
it from the Mapping Designer. You can also import an existing flat file definition.
154) To provide support for Mainframes source data,which files r used as a source
definitions?
Ans) COBOL Copy-book files
155) Can u copy the session to a different folder or repository?
Ans) Yes, it is possible. To copy a session to a folder in the same repository, or to
a folder in a different repository, use the Repository Manager (a client-side tool):
simply drag the session to the target destination and the session will be copied. In
addition, you can copy the whole workflow from the Repository Manager; this
automatically copies the mapping and the associated sources, targets and session to
the target folder.
156) How to get two targets T1 containing distinct values and T2 containing
duplicate values from one source S1.
Ans) Load T2 directly from the source so it keeps all rows, and load T1 through logic
that removes duplicates. The de-duplication can be achieved using a Lookup
transformation in dynamic mode: its NewLookupRow output indicates whether a row has
been seen before, and a Filter/Router can then pass only the first occurrence of each
value to T1.
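For reference, the two result sets expressed in SQL (a sketch assuming a hypothetical
source table S1 with a single column c; T2 is shown for the reading where it should
hold only the values that repeat):
-- T1: distinct values
SELECT DISTINCT c FROM s1;
-- T2: values that occur more than once
SELECT c FROM s1 GROUP BY c HAVING COUNT(*) > 1;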
157) What is a worklet, what is the use of a worklet, and in which situation can we
use it?
Ans) A set of workflow tasks is called a worklet. Workflow tasks include: 1) Timer
2) Decision 3) Command 4) Event-Wait 5) Event-Raise 6) Email, etc. A worklet lets you
group such tasks and reuse the group across workflows.
158) We are using Update Strategy Transformation in mapping how can we know
whether insert or update or reject or delete option has been selected during
running of sessions in Informatica.
Ans) In the Designer, while creating the Update Strategy transformation, uncheck
'Forward Rejected Rows'. Any rejected rows will then automatically be written to the
session log file. Updates and inserts can be verified by checking the session
statistics or the target table itself.
159) What are the different types of Type 2 dimension mapping?
Ans) Type 2 Dimension/Version Data mapping: in this mapping, an updated dimension row
from the source gets inserted into the target along with a new version number; a
newly added dimension row is inserted into the target with a primary key.
Type 2 Dimension/Flag Current mapping: this mapping is also used for slowly changing
dimensions. In addition it maintains a flag value marking each row as changed or new:
the current row is saved with flag value 1, and superseded rows are saved with the
value 0.
Type 2 Dimension/Effective Date Range mapping: this is another flavour of Type 2
mapping used for slowly changing dimensions. It also inserts both new and changed
dimension rows into the target, and changes are tracked by an effective date range
for each version of each dimension.
160) Can you use the mapping parameters or variables created in one mapping inside
any other reusable transformation?
Ans) Yes, because a reusable transformation is not contained within any mapplet or
mapping.
161) What is tracing level?
Ans) Tracing level is the amount of information written to the session log. The
option appears on the Properties tab of transformations. By default it is Normal; it
can be set to:
Verbose Initialization
Verbose Data
Normal
or Terse.
162) What is meant by EDW?
Ans) EDW is an Enterprise Data Warehouse, meaning a centralised DW for the whole
organization.
This is the Inmon approach, which relies on having a single centralised warehouse,
whereas the Kimball approach says to have separate data marts for each
vertical/department.
Advantages of having an EDW:
1. Global view of the data.
2. A single point of source of data for all users across the organization.
3. Ability to perform consistent analysis on a single data warehouse.
The downside to overcome is the time it takes to develop, and also the management
effort required to build a centralised database.
163) There are 1000 source tables containing the same data with different file
formats,now i want to load into a single target table ..how to achieve ?
Ans) First convert the different file formats into one common format, then create a
simple one-to-one mapping, run it, and check the output (e.g. in Unix verify whether
the file was produced).
164) Where is the cache stored in informatica?
Ans) The cache is stored in the Informatica server's memory, and overflow data is
stored on disk in cache files, which are automatically deleted after successful
completion of the session run. If you want to retain that data, you have to use a
persistent cache.
165) Can you start a batches with in a batch?
Ans) You can not. If you want to start batch that resides in a batch,create a new
independent batch and copy the necessary sessions into the new batch.
166) What is a command that used to run a batch?
Ans) pmcmd is used to start a batch.
167) What are the unsupported repository objects for a mapplet?
Ans) COBOL source definitions
Joiner transformations
Normalizer transformations
Non-reusable Sequence Generator transformations
Pre- or post-session stored procedures
Target definitions
PowerMart 3.5-style LOOKUP functions
XML source definitions
IBM MQ source definitions
168) What are the types of metadata stored in the repository?
Ans) Source definitions. Definitions of database objects (tables, views, synonyms)
or
files that provide source data.
Target definitions. Definitions of database objects or files that contain the
target
data.
Multi-dimensional metadata. Target definitions that are configured as cubes and
dimensions.
Mappings. A set of source and target definitions along with transformations
containing business logic that you build into the transformation. These are the
instructions that the Informatica Server uses to transform and move data.
Reusable transformations. Transformations that you can use in multiple mappings.
Mapplets. A set of transformations that you can use in multiple mappings.
Sessions and workflows. Sessions and workflows store information about how and
when the Informatica Server moves data. A workflow is a set of instructions that
describes how and when to run tasks related to extracting, transforming, and
loading data. A session is a type of task that you can put in a workflow. Each
session corresponds to a single mapping.
169) How do we analyse the data at database level?
Ans) Data can be viewed using Informatica's designer tool.
If you want to view the data on source/target we can preview the data but with
some limitations.
We can use data profiling too.
170) In my source table there are 1000 records. I want to load records 501 to 1000
into my target table. How can you do this?
Ans) You can override the SQL query in the Workflow Manager (or the Source
Qualifier). For example, in Oracle:
select * from tab_name where rownum <= 1000
minus
select * from tab_name where rownum <= 500;
Alternatively, use the query below, which fetches rows in a given range:
select * from (select t.*, rownum r from tab_name t) where r between 501 and 1000;
171) Can anyone explain real-time complex mappings or complex transformations in
Informatica, especially in the sales domain?
Ans) The most complex logic we use is denormalization. We don't have a Denormalizer
transformation in Informatica, so we have to use an Aggregator followed by an
Expression. Apart from this, most of the complexity sits in Expression
transformations involving a lot of nested IIFs and DECODE statements; other sources
of complexity are the Union transformation and the Joiner.
172) Could anyone please tell me what are the steps required for type2
dimension/version data mapping. how can we implement it
Ans) 1. Determine whether the incoming row is (1) a new record, (2) an updated
record, or (3) a record that already exists unchanged in the table, using two Lookup
transformations. Split the mapping into three separate flows using a Router
transformation.
2. For case (1), create a pipe that inserts the row into the table.
3. For case (2), create two pipes from the same source: one updating the old record,
one inserting the new version.
173) If your workflow is running slow in Informatica, where do you start
troubleshooting and what are the steps you follow?
Ans) When the workflow is running slowly, you have to find the bottlenecks, checking
in this order:
target
source
mapping
session
system
174) Identifying bottlenecks in various components of Informatica and resolving
them.
Ans) The best way to find bottlenecks is to write to a flat file target and see where
the bottleneck is (if the session speeds up, the original target was the bottleneck).
175) Can we look up a table from a Source Qualifier transformation (unconnected
lookup)?
Ans) No, we can't. Here is why:
1) Unless you connect the output of the Source Qualifier to another transformation or
to a target, the field will not even be included in the generated query.
2) The Source Qualifier has no variable or expression fields in which to place an
unconnected lookup call.
176) What are the tasks that the Load Manager process will do?
Ans) The same tasks described in question 114: managing session and batch scheduling,
locking and reading the session, reading the parameter file, verifying permissions
and privileges, and creating the session log files. In addition, the Load Manager
sends the 'failure mails' in case the subsequent DTM process fails during execution.
177) How can you stop a batch?
Ans) By using server manager or pmcmd.
178) What is metadata reporter?
Ans) It is a web-based application that enables you to run reports against repository
metadata. With Metadata Reporter you can access information about your repository
without knowledge of SQL, the transformation language, or the underlying repository
tables.
179) Suppose session is configured with commit interval of 10,000 rows and
source has 50,000 rows. Explain the commit points for Source based commit and
Target based commit. Assume appropriate value wherever required.
Ans) Source-based commit commits the data into the target based on the commit
interval, so for every 10,000 source rows it commits into the target: with 50,000
source rows there are commit points after 10,000, 20,000, 30,000, 40,000 and 50,000
rows.
Target-based commit commits the data into the target based on the buffer size of the
target, i.e. it commits whenever the buffer fills. Assuming the buffer holds 6,000
rows, it commits the data roughly every 6,000 rows.
180) What is the default source option for the Update Strategy transformation?
Ans) Data driven.
181) Difference between summary filter and details filter?
Ans) Summary filter: can be applied to a group of records, i.e. records grouped on
values they have in common.
Detail filter: can be applied to each and every record in a database.
182) What are reusable transformations?
Ans) Reusable transformations can be used in multiple mappings. When you need to
incorporate such a transformation into a mapping, you add an instance of it to the
mapping. Later, if you change the definition of the transformation, all instances of
it inherit the changes. Since each instance of a reusable transformation is a pointer
to that transformation, you can change the transformation in the Transformation
Developer and its instances automatically reflect these changes. This feature can
save you a great deal of work. A reusable transformation is a reusable metadata
object defined with business logic using a single transformation.
183) What are the types of mapping wizards provided in Informatica?
Ans) Simple Pass Through
Slowly Growing Target
Slowly Changing Dimension:
Type 1 - most recent values
Type 2 - full history (version, flag, or date)
Type 3 - current and one previous value
184) After dragging the ports of three sources (SQL Server, Oracle, Informix) to a
single Source Qualifier, can you map these three ports directly to the target?
Ans) No. If you drag three heterogeneous sources into one Source Qualifier and
populate the target without any join, you are creating a Cartesian product; without a
join condition even homogeneous sources show the same problem. If you do not want to
join at the Source Qualifier level, you can add joins separately (e.g. with Joiner
transformations). A Source Qualifier can only join tables from the same database.
185) What is the difference between partitioning of relational targets and
partitioning of file targets?
Ans) If you partition a session with a relational target, the Informatica server
creates multiple connections to the target database to write target data
concurrently. If you partition a session with a file target, the Informatica server
creates one target file for each partition; you can configure session properties to
merge these target files.
186) What is aggregate cache in aggregator transforamtion?
Ans) The Aggregator stores data in the aggregate cache until it completes the
aggregate calculations. When you run a session that uses an Aggregator
transformation, the Informatica server creates index and data caches in memory to
process the transformation. If the Informatica server requires more space, it stores
overflow values in cache files.
187) What are the properties should be notified when we connect the flat file
source definition to relational database target definition?
Ans) 1. Whether the file is fixed-width or delimited.
2. The size of the file: if it can be processed without performance issues, a normal
load will work; if it is huge (GBs), N-way partitions can be specified on the source
side and the target side.
3. File reader, source file name, etc.
187) Why we use stored procedure transformation?
Ans) A Stored Procedure transformation is an important tool for populating and
maintaining databases. Database administrators create stored procedures to
automate time-consuming tasks that are too complicated for standard SQL
statements.
188) Which objects are required by the debugger to create a valid debug session?
Ans) Initially the session should be a valid session. Source, target, lookups and
expressions should be available, and at least one breakpoint should be set for the
Debugger to debug your session.
189) How do you decide whether you need to do aggregations at the database level or
at the Informatica level?
Ans) It depends upon the requirement. If you have a powerful database, you can create
an aggregation table or view at the database level; otherwise it is better to use
Informatica. Here is why you might still use Informatica: although Informatica is a
third-party tool and may take more time to process an aggregation compared to the
database, Informatica has an option called 'incremental aggregation' which updates
the previously aggregated values with the new values, so there is no need to process
all the values again and again. This holds as long as nobody deletes the cache files;
if that happens, the full aggregation has to be executed in Informatica again.
Databases do not have an equivalent built-in incremental aggregation facility.
190) How is the Union transformation an active transformation?
Ans) Active transformation: a transformation that can change the number of rows in
the target.
Source (100 rows) ---> Active Transformation ---> Target (< or > 100 rows)
Passive transformation: a transformation that does not change the number of rows in
the target.
Source (100 rows) ---> Passive Transformation ---> Target (100 rows)
The Union transformation acts like UNION ALL in SQL, i.e. it concatenates two or more
pipelines, duplicates included, so the output row count differs from that of any
single input; that is why it is classified as an active transformation.
191) How to get the first 100 rows from the flat file into the target?
Ans) 1. Use the Test Load option if you only need it for testing.
2. Put a Sequence Generator (or an expression counter) in the mapping, then take a
Filter transformation, drag all ports from the Source Qualifier to the Filter, write
the condition seq_port < 101 in the filter, and drag the ports to the target.
192) What is meant by complex mapping
Ans) A complex mapping is one involving more logic and more business rules. An
example from my bank project, where I was involved in constructing a data warehouse:
the bank has many customers, and after taking loans some of them relocate to another
place. It was difficult to maintain both the previous and the current addresses, so I
used an SCD Type 2 mapping. This is a simple example of a complex mapping.
193) Can you start a session inside a batch individually?
Ans) We can start the required session on its own only in a sequential batch; in a
concurrent batch we cannot do this.
194) Can we use aggregator/active transformation after update strategy
transformation
Ans) You can use an Aggregator after an Update Strategy. The problem is that once you
perform the update strategy, say you have flagged some rows to be deleted and you
then perform an Aggregator transformation over all rows using, for example, the SUM
function, the rows flagged for deletion still flow into (and are effectively
subtracted from) the aggregation, so the results can be misleading.
195) Can you copy the batches?
Ans) NO.
196) Explain the Informatica architecture in detail.
Ans) The Informatica server connects to source and target data using native or ODBC
drivers, and it connects to the repository for running sessions and retrieving
metadata information:
source ------> Informatica server ---------> target
(with the server also talking to the REPOSITORY)
The PowerCenter Server is a repository client application. It connects to the
Repository Server and Repository Agent to retrieve workflow and mapping metadata from
the repository database. When the PowerCenter Server requests a repository connection
from the Repository Server, the Repository Server starts and manages the Repository
Agent, and then redirects the PowerCenter Server to connect directly to the
Repository Agent.
197) What is Load Manager?
Ans) The load Manager is the Primary Informatica Server Process. It performs the
following tasks:-
Manages session and batch scheduling.
Locks the session and read session properties.
Reads the parameter file.
Expand the server and session variables and parameters.
Verify permissions and privileges.
Validate source and target code pages.
Create the session log file.
Create the Data Transformation Manager which execute the session.
198) In which circumstances that informatica server creates Reject files?
Ans) When it encounters a DD_REJECT in an Update Strategy transformation.
When a row violates a database constraint.
When a field in the row is truncated or overflows.
199) Describe two levels in which update strategy transformation sets?
Ans) Within a session: when you configure a session, you can instruct the Informatica
Server either to treat all records in the same way (for example, treat all records as
inserts), or to use the instructions coded into the mapping to flag records for
different database operations.
Within a mapping: within a mapping, you use the Update Strategy transformation to
flag records for insert, delete, update, or reject.
200) Can you use the mapping parameters or variables created in one mapping inside
another mapping?
Ans) No. You might want to use a workflow parameter/variable if you want a value to
be visible across mappings/sessions.
201) What is partitioning? Where can we use partitions and what are the advantages?
Is it necessary?
Ans) Partitions are used to optimize session performance. It is optional; we can
select partitioning in the session properties. Types:
default ---- pass-through partitioning
key range partitioning
round-robin partitioning
hash partitioning
202) In real time, which one is better: star schema or snowflake schema? And the
surrogate key will be linked to which columns in the dimension table?
Ans) In real time mostly the star schema is implemented, because it takes less time
to query. A surrogate key exists in each and every dimension table in a star schema,
and this surrogate key is referenced as a foreign key in the fact table.
203) What is the exact meaning of domain?
Ans) A domain is nothing but complete information on a particular subject area, like
a sales domain, telecom domain, etc. In PowerCenter terms, the PowerCenter domain is
the fundamental administrative unit in PowerCenter. The domain supports the
administration of the distributed services; it is a collection of nodes and services
that you can group in folders based on administration ownership.
204) How can you work with a remote database in Informatica? Did you work directly
using remote connections?
Ans) You can work with a remote database, but you have to configure the connection
details: e.g. an FTP connection (for remote files), the IP address/host, and the user
authentication.
205) What is lookup transformation and update strategy transformation and
explain with an example.
Ans) A Lookup transformation is used to look up data in a relational table, view,
synonym or flat file. The Informatica server queries the lookup source based on the
lookup ports used in the transformation, and compares the lookup port values with the
lookup table column values according to the lookup condition. Lookups are used to get
a related value, perform a calculation, or update slowly changing dimensions.
Two types of lookups
Connected
Unconnected
Update strategy transformation
This is used to control how the rows are flagged for insert,update ,delete or
reject.
To define a flagging of rows in a session it can be insert,Delete,Update or Data
driven.
In Update we have three options
Update as Update
Update as insert
Update else insert
206) What is batch and describe about types of batches?
Ans) Grouping of session is known as batch. Batches are two types:-
Sequential: Runs sessions one after the other
Concurrent: Runs session at same time.
If you have sessions with source-target dependencies you have to go for sequential
batch to start the sessions one after another.If you have several independent
sessions You can use concurrent batches which runs all the sessions at the same
time.
207) What is meant by lookup caches?
Ans) The Informatica server builds a cache in memory when it processes the first row
of data in a cached Lookup transformation. It allocates memory for the cache based on
the amount you configure in the transformation or session properties. The Informatica
server stores condition values in the index cache and output values in the data
cache.
208) How many ways you can update a relational source defintion and what are
they?
Ans) Two ways:-
1. Edit the definition
2. Reimport the defintion.
209) What is the exact use of the 'Online' and 'Offline' server connect options in
the Workflow Monitor? The system hangs with the 'Online' server connect option
(Informatica is installed on a personal laptop).
Ans) When the repository is up and the PowerCenter server (PMSERVER) is also up, the
Workflow Monitor connects on-line. When the PowerCenter server is down but the
repository is still up, you will be prompted for an off-line connection, with which
you can just view past workflow runs.
210) Which is better among connected lookup and unconnected lookup
transformations in informatica or any other ETL tool?
Ans) If the lookup source is well defined and needed for every row, you can use a
connected lookup; if the source is not well defined, needed only conditionally, or
from a different database, you can go for an unconnected lookup.
211) Why you use repository connectivity?
Ans) When you edit or schedule a session, each time the Informatica server directly
communicates with the repository to check whether or not the session and users are
valid. All the metadata of sessions and mappings is stored in the repository.
212) What is Session and Batches?
Ans) Session - a session is a set of instructions that tells the Informatica server
how and when to move data from sources to targets. After creating a session, we can
use either the Server Manager or the command-line program pmcmd to start or stop the
session.
Batches - batches provide a way to group sessions for either serial or parallel
execution by the Informatica server. There are two types of batches:
Sequential - runs sessions one after the other.
Concurrent - runs sessions at the same time.
213) Can you generate reports in Informatica?
Ans) It is an ETL tool; you cannot build business reports from it, but you can
generate metadata reports, which are not meant for business analysis.
214) How to recover the standalone session?
Ans) A standalone session is a session that is not nested in a batch. If a
standalone
session fails, you can run recovery using a menu command or pmcmd. These
options are
not available for batched sessions.
To recover sessions using the menu:
1. In the Server Manager, highlight the session you want to recover.
2. Select Server Requests-Stop from the menu.
3. With the failed session highlighted, select Server Requests-Start Session in
Recovery Mode from the menu.
To recover sessions using pmcmd:
1.From the command line, stop the session.
2. From the command line, start recovery.
215) How do you create a mapping using multiple lookup transformation?
Ans) Use unconnected lookup if same lookup repeats multiple times.
216) What is the procedure to write the query to list the highest salary of three
employees?
Ans) The following queries find the top three salaries (using an emp table):
In Oracle (correlated subquery):
select * from emp e
where 3 > (select count(*) from emp where sal > e.sal)
order by sal desc;
In SQL Server:
select top 3 sal from emp order by sal desc;
In Oracle (inline view):
select * from (select emp_name, salary from emp order by salary desc)
where rownum <= 3;
217) Is a fact table normalized or de-normalized?
Ans) A fact table is always a denormalised table. It consists of the primary keys of
the dimension tables carried as foreign keys, together with the measures.
218) What is the mapping for unit testing in Informatica, are there any other
testings in Informatica, and how we will do them as a etl developer. how do the
testing people will do testing are there any specific tools for testing
Ans) In Informatica there is no dedicated method for unit testing, but there are two
ways to test a mapping:
1. Data sampling: set the data sampling properties for the session in the Workflow
Manager for a specified number of rows, and test the mapping.
2. Use the Debugger and test the mapping with sample records.
219) Can batches be copied/stopped from server manager?
Ans) Yes, we can stop the batches using the Server Manager or the pmcmd command.
220) If I use the session bulk loading option, can I then perform recovery on the
session?
Ans) If the session is configured to use bulk mode, it will not write recovery
information to the recovery tables, so bulk loading will not allow recovery to be
performed as required.
221) Can we add different workflows into one batch and run them sequentially? If
possible, how do we do that?
Ans) To simulate a batch we can create a Unix script and write pmcmd commands to run
the different workflows one after another, e.g. workflowrun.sh (example for
Informatica 8; -wait makes each command block until the workflow finishes):
pmcmd startworkflow -sv IS_INTEGRATION_SERVICE -d DOMAIN1 -u myuser -p mypass -f folder1 -wait workflow1
pmcmd startworkflow -sv IS_INTEGRATION_SERVICE -d DOMAIN1 -u myuser -p mypass -f folder1 -wait workflow2
Save this file and run it; it will trigger workflow1 and then workflow2.
222) How to retrieve the records from a rejected file? Explain with syntax or an
example.
Ans) During the execution of a workflow, all the rejected rows are stored in bad
files (under the Informatica server installation, e.g. C:\Program Files\Informatica
PowerCenter 7.1\Server). These bad files can be imported as a flat file source, and
then through a direct mapping we can load the records in the desired format.
223) If the workflow has 5 sessions running sequentially and the 3rd session has
failed, how can we run again from only the 3rd to the 5th session?
Ans) If multiple sessions in a concurrent batch fail, you might want to truncate all
targets and run the batch again. However, if a session in a concurrent batch fails
and the rest of the sessions complete successfully, you can recover the session as a
standalone session. To recover a session in a concurrent batch: 1. Copy the failed
session using Operations-Copy Session. 2. Drag the copied session outside the batch
to be a standalone session. 3. Follow the steps to recover a standalone session.
4. Delete the standalone copy.
For the scenario in the question, where the sessions run sequentially, you can simply
restart with 'Start Workflow From Task' on the 3rd session; from there it will
continue to run the rest of the tasks (sessions 3 to 5).
224) How can I do incremental aggregation in real time?
Ans) For incremental aggregation in real time we need to use an Aggregator, plus a
lookup on the target, plus an Expression to sum up the count obtained from the new
aggregation and from the lookup on the target. For a record already present in the
aggregation table, its existing count is available through the lookup, and the new
count is available through the Aggregator; sum them up and update that record in the
target.
225) If I make any modifications to my table in the back end, does that reflect in
the Informatica Warehouse Designer, Mapping Designer or Source Analyzer?
Ans) No. Informatica is not directly aware of back-end database changes; it displays
only the information stored in its repository. If you want back-end changes reflected
on the Informatica screens, you have to re-import the definitions from the back end
into Informatica over a valid connection and replace the existing definitions with
the imported ones.
226) How does the Informatica server increase session performance through
partitioning the source?
Ans) For relational sources, the Informatica server creates multiple connections, one
for each partition of a single source, and extracts a separate range of data for each
connection; it reads the multiple partitions of a single source concurrently.
Similarly, for loading, the Informatica server creates multiple connections to the
target and loads the partitions of data concurrently.
For XML and file sources, the Informatica server reads multiple files concurrently;
for loading the data, it creates a separate file for each partition of a source file,
and you can choose to merge the targets.
227) How to join two tables without using the Joiner Transformation.
Ans) It is possible to join two or more tables by using the Source Qualifier,
provided the tables have a relationship and come from the same database. When you
drag and drop the tables you get a Source Qualifier for each table; delete all of
them and add one common Source Qualifier for all the tables. Right-click the Source
Qualifier, choose Edit, go to the Properties tab, and write your SQL in the SQL Query
property. You can also do it at the session level: Session --- Mapping --- Source ---
there you have an option called User Defined Join where you can write your SQL.
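A sketch of such a user-defined join / SQL override, assuming hypothetical EMP and
DEPT tables related on DEPTNO:
SELECT emp.empno, emp.ename, dept.dname
FROM emp, dept
WHERE emp.deptno = dept.deptno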
228) When we create a target as a flat file and the source as Oracle, how can I make
the first row contain the column names in the flat file?
Ans) One way is to use a pre-session SQL/command that writes the header line, but
this is a hard-coded method: if you change the column names or add extra columns to
the flat file, you will have to change the statement.
229) What happens if you try to create a shortcut to a non-shared folder?
Ans) It only creates a copy of the object, not a shortcut (shortcuts can only point
to objects in shared folders).
230) Explain about Recovering sessions?
Ans) If you stop a session or if an error causes a session to stop, refer to the
session
and error logs to determine the cause of failure. Correct the errors, and then
complete the
session. The method you use to complete the session depends on the properties of
the mapping, session, and Informatica Server configuration.
Use one of the following methods to complete the session:
- Run the session again if the Informatica Server has not issued a commit.
- Truncate the target tables and run the session again if the session is not
recoverable.
- Consider performing recovery if the Informatica Server has issued at least one
commit.
231) Can Informatica load heterogeneous targets from heterogeneous sources?
Ans) Yes it can. For example, flat file and relational sources can be joined in the
mapping, and later flat file and relational targets can be loaded.
232) While running multiple session in parallel which loads data in the same table,
throughput of each session becomes very less and almost same for each session.
How can we improve the performance (throughput) in such cases?
Ans) This is largely handled by the database we use: when loading operations on the
table are in progress, the table is locked, so the parallel sessions contend for it.
If we try to load the same table with different partitions we can run into ROWID
errors if the database is Oracle 9i, and a patch can be applied to resolve this
issue.
233) What is data merging, data cleansing, sampling?
Ans) Data cleansing: the process of identifying and rectifying inconsistent and
inaccurate data, turning it into consistent and accurate data (i.e. identifying and
removing redundancy and inconsistency).
Data merging: the process of combining data with similar structures into a single
output.
Data sampling: the process of sending only a sample of the data from source to
target.
234) What is Code Page Compatibility?
Ans) Compatibility between code pages is used for accurate data movement when
the Informatica Sever runs in the Unicode data movement mode. If the code pages
are
identical, then there will not be any data loss. One code page can be a subset or
superset of another. For accurate data movement, the target code page must be a
superset of the source code page.
Superset - a code page is a superset of another code page when it contains all the
characters encoded in the other code page and also contains additional characters not
contained in the other code page.
Subset - A code page is a subset of another code page when all characters in the
code page are encoded in the other code page.
235) There are 3 depts in the dept table: one with 100 people, the 2nd with 5, and
the 3rd with some 30. I want to display those deptnos where more than 10 people
exist.
Ans) If you want to perform it through Informatica, fire the appropriate query in the
SQL override of the Source Qualifier transformation and make a simple pass-through
mapping. Otherwise you can do it with an Aggregator (grouping by deptno and counting
the rows) followed by a Filter/Router transformation with the condition count > 10.
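The SQL override for the pass-through approach would be along these lines (assuming a
hypothetical emp table with one row per person and a deptno column):
SELECT deptno
FROM emp
GROUP BY deptno
HAVING COUNT(*) > 10;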
236) How to load the data from people soft hrm to people soft erm using
informatica?
Ans) The following are necessary:
1. A PowerConnect license.
2. Import the source and target from PeopleSoft using ODBC connections.
3. Define a connection under 'Application Connection Browser' for the PeopleSoft
source/target in the Workflow Manager; select the proper connection (PeopleSoft with
Oracle, Sybase, DB2 or Informix) and execute it like a normal session.
237) Can somebody explain these points to me: 1) the differences between using native and ODBC server-side database connections; 2) why registering a server to the repository is necessary; 3) the rules associated with transferring and sharing objects between folders; 4) the rules associated with transferring and sharing objects between repositories.
Ans) 1> A native connection is one provided by the same vendor as the tool; e.g. Oracle Warehouse Builder has its own driver to connect to an Oracle DB, which does not go through an ODBC driver. Here the connection, and hence performance, is faster. ODBC is basically a third-party driver, like the Microsoft driver for Oracle, which can be used by any tool to connect to Oracle.
2> Registering a server with a repository is necessary because sessions use this server to run. If we have multiple servers, we can assign different servers to run different sessions.
238) I have a requirement wherein the columns of a table (Table A) should appear as rows of a target table (Table B), i.e. converting columns to rows. Is it possible through Informatica? If so, how?
Ans) If the data in the tables is as follows:
Table A (key_1 char(3)), values:
1
2
3
Table B (bkey_a char(3), bcode char(1)), values:
1 T
1 A
1 G
2 A
2 T
2 L
3 A
and the required output is
1, T, A
2, A, T, L
3, A
then the SQL query in the source qualifier should be
the SQL query in source qualifier should be
select key_1,
max(decode( bcode, 'T', bcode, null )) t_code,
max(decode( bcode, 'A', bcode, null )) a_code,
max(decode( bcode, 'L', bcode, null )) l_code
from a, b
where a.key_1 = b.bkey_a
group by key_1
/
239) Explain about perform recovery?
Ans) When the Informatica Server starts a recovery session, it reads the OPB_SRVR_RECOVERY table and notes the row ID of the last row committed to the target database. The Informatica Server then reads all sources again and starts processing from the next row ID. For example, if the Informatica Server commits 10,000 rows before the session fails, when you run recovery the Informatica Server bypasses the rows up to 10,000 and starts loading with row 10,001. By default, Perform Recovery is disabled in the Informatica Server setup. You must enable Recovery in the Informatica Server setup before you run a session, so that the Informatica Server can create and/or write entries in the OPB_SRVR_RECOVERY table.
240) Can anyone tell me how to run SCD1? The wizard creates two target instances in the mapping window while there is only one table in the Warehouse Designer (the target), so if we create one new table in the target it gives an error.
Ans) Create the target with the name you gave in the wizard for the target table. Do not create the target again for the second instance; it is just a virtual copy of the same target. That is, in the Warehouse Designer create and execute the target definition once, define the source and target locations in the general properties of the session, set 'Treat rows as: Data Driven', and run the session containing the mapping again.
241) I have a source flat file containing the value 1;2:3.4. Now I want it in my target table as 1 2 3 4. Can anyone explain the procedure to get output like that?
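Ans) One possible approach (offered here only as a sketch): in an Expression transformation the delimiters can be normalized with REPLACECHR (the port names in_val and o_val are illustrative):
o_val = REPLACECHR(0, in_val, ';:.', ' ')
This replaces each occurrence of ';', ':' and '.' with a space, turning '1;2:3.4' into '1 2 3 4'. To split the pieces into separate columns instead, SUBSTR and INSTR can be used as shown in question 281.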
242) In a sequential batch can you run the session if previous session fails?
Ans) Yes, by setting the option 'Always runs the session'.
243) In a filter expression we want to compare a date field with the DB2 system field CURRENT DATE. Our syntax: datefield = CURRENT DATE (we didn't define it by ports, it's a system field), but this is not valid (PMParser: Missing Operator).
Ans) The DB2 date format is "yyyymmdd", whereas SYSDATE in Oracle gives "dd-mm-yy", so converting the DB2 date format to the local database date format is compulsory; otherwise you will get that type of error.
244) How do you transfer data from a data warehouse to a flat file?
Ans) You can write a mapping with the flat file as a target using a DUMMY_CONNECTION. A flat file target is built by pulling a source into the target space using the Warehouse Designer tool.
245) What is the Rank index in Rank transformation?
Ans) The port on which you want to generate the rank is designated the Rank port; the values generated for it are known as the rank index.
246) Define Informatica Repository?
Ans) The Informatica repository is a relational database that stores information, or metadata, used by the Informatica Server and Client tools. Metadata can include information such as mappings describing how to transform source data, sessions indicating when you want the Informatica Server to perform the transformations, and connect strings for sources and targets.
The repository also stores administrative information such as usernames and passwords, permissions and privileges, and product version.
Use the Repository Manager to create the repository. The Repository Manager connects to the repository database and runs the code needed to create the repository tables. These tables store metadata in the specific format the Informatica Server and client tools use.
247) What is change data capture?
Ans) Change data capture (CDC) is a set of software design patterns used to
determine the data that has changed in a database so that action can be taken
using the changed data.
248) Can anybody write a session parameter file which will change the source and targets for every session, i.e. different sources and targets for each session run?
Ans) You are supposed to define a parameter file. In the parameter file, you can define two parameters, one for the source and one for the target.
For example:
$Src_file = c:\program files\informatica\server\bin\abc_source.txt
$tgt_file = c:\targets\abc_targets.txt
Then define the parameter file:
[folder_name.WF:workflow_name.ST:s_session_name]
$Src_file = c:\program files\informatica\server\bin\abc_source.txt
$tgt_file = c:\targets\abc_targets.txt
If it is a relational DB, you can even give an overridden SQL at the session level as a parameter. Make sure the SQL is in a single line.
249) What is meant by Junk Attribute in Informatica?
Ans) A dimension is called a junk dimension if it contains attributes that are rarely changed or modified. For example, in the banking domain, we can fetch four attributes belonging to a junk dimension from the Overall_Transaction_master table: tput flag, tcmp flag, del flag and advance flag. All these attributes can be part of a junk dimension. Grouping random flags and text attributes in a dimension and moving them to a separate dimension is called a junk dimension.
250) What are partition points?
Ans) Partition points mark the thread boundaries in a source pipeline and divide the pipeline into stages. The Informatica Server sets partition points at several transformations in a pipeline by default. If you use PowerCenter, you can define other partition points. When you add partition points, you increase the number of transformation threads, which can improve session performance. The Informatica Server can redistribute rows of data at partition points, which can also improve session performance.
251) Where should you place the flat file to import the flat file definition into the Designer?
Ans) There is no restriction on where to place the source file. From a performance point of view it is better to place the file in the server's local src folder; if you need the path, check the server properties available in the Workflow Manager. It does not mean we cannot place it in any other folder, but if we place it in the server src folder, it will be selected by default at session creation time.
252) I have a flat file that contains 'n' records. I need to load half of the records into one target table and the other half into another target table. Can anyone explain the procedure?
Ans) There will be 2 pipelines.
In the first pipeline, read from the source file and add an expression transformation; in the expression take a variable and increment it by 1 (v = v+1), then load a staging target T0 that carries this generated sequence in a column c1.
After the first pipeline executes we have:
a) the count of all the rows from the file, and
b) the rank (row number) of every record in T0's c1 column.
In the second pipeline, take T0 as the source, T1 and T2 as targets, and a router R1 as the transformation in between. In R1, put 2 groups:
1st group: c1 <= v/2 - route to T1
2nd group: c1 > v/2 - route to T2
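A sketch of the expression ports that generate the sequence in the first pipeline (port names illustrative):
v_count = v_count + 1
o_count = v_count
o_count is loaded into T0's c1 column, and the router groups in the second pipeline compare c1 against half of the total row count obtained from the first pass.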
253) How can you recover sessions in sequential batches?
Ans) If you configure a session in a sequential batch to stop on failure, you can run recovery starting with the failed session; the Informatica Server completes the session and then runs the rest of the batch. Use the Perform Recovery session property.
To recover sessions in sequential batches configured to stop on failure:
1. In the Server Manager, open the session property sheet.
2. On the Log Files tab, select Perform Recovery, and click OK.
3. Run the session.
4. After the batch completes, open the session property sheet.
5. Clear Perform Recovery, and click OK.
If you do not clear Perform Recovery, the next time you run the session the Informatica Server attempts to recover the previous session. If you do not configure a session in a sequential batch to stop on failure, and the remaining sessions in the batch complete, recover the failed session as a standalone session.
254) How do you use mapping parameters, and what is their use?
Ans) In the Designer you will find the Mapping Parameters and Variables option, where you can assign values to them. As for their use: suppose you do incremental extractions daily and your source system contains a day column. Without parameters, every day you would have to open the mapping and change the day value so that that particular day's data is extracted, which is tedious manual work. This is where mapping parameters and variables come in: a mapping parameter holds a constant value for the whole session run, while a mapping variable can change during the run, and its value persists between session runs. A sketch of this pattern follows.
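A minimal sketch of the incremental-extraction pattern (the names $$LAST_RUN_DATE and LOAD_DATE are illustrative, and the variable is assumed to be declared with aggregation type Max):
Source Qualifier filter: LOAD_DATE > TO_DATE('$$LAST_RUN_DATE', 'YYYY-MM-DD')
Expression port: v_set = SETMAXVARIABLE($$LAST_RUN_DATE, LOAD_DATE)
At the end of a successful session the variable's current value is saved to the repository, so the next run automatically picks up only the new rows.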
255) Diff between informatica repositry server & informatica server
Ans) Informatica Repository Server: manages connections to the repository from client applications.
Informatica Server: extracts the source data, performs the data transformation, and loads the transformed data into the target.
256) What are mapping parameters and mapping variables?
Ans) Please refer to the documentation for a fuller treatment. A mapping parameter holds a single constant value for the entire session run. Mapping variables have two identities: a start value and a current value.
Start value = current value when the session starts executing the underlying mapping.
Start value <> current value while the session is in progress and the variable value changes on one or more occasions.
The current value at the end of the session becomes the start value for the subsequent run of the same session.
257) In a certain mapping there are four targets: tg1, tg2, tg3 and tg4. tg1 has a primary key, tg2 has a foreign key referencing tg1's primary key, tg3 has a primary key that tg2 and tg4 refer to as a foreign key, and tg2 has a foreign key referencing the primary key of tg4. In which order will Informatica load the targets? 2] How can I detect an aggregator transformation causing low performance?
Ans) To optimize the aggregator transformation, you can use the following options:
- Use incremental aggregation.
- Sort the ports before you perform aggregation.
- Avoid using an aggregator transformation after an update strategy, since the results can be confusing.
Answer for the second query: to get performance details for any aggregator transformation, check the counters Transformationname_writetodisk and Transformationname_readfromdisk in the session's .perf file. If these two counters show non-zero values, the aggregator transformation has to be tuned. The ways in which the aggregator transformation can be tuned:
1. Use incremental aggregation.
2. Increase the DATA cache and index cache sizes.
3. Use a sorter transformation before the aggregator transformation.
258) How many sessions can you create in a batch?
Ans) Any number of sessions; it depends on the configuration settings of the Informatica server. The parameter for the maximum connections cannot be exceeded, and it depends on the overall sessions running on the server at a time. For example, if the connection limit is 300 and you have batches running with 290+ sessions at a time, adding 15 more sessions in the same time frame will cause the loads to fail.
259) Compare Data Warehousing Top-Down approach with Bottom-up approach
Ans) Top down
ODS-->ETL-->Datawarehouse-->Datamart-->OLAP
Bottom up
ODS-->ETL-->Datamart-->Datawarehouse-->OLAP
260) What are the methods for creating reusable transformations?
Ans) You can design them using 2 methods:
- using the Transformation Developer
- creating a normal transformation and promoting it to reusable
261) How to export mappings to the production environment?
Ans) In the Designer, go to the main menu where you can see the export/import options. Export the mapping in XML format using the export option, then import it into the production repository with the replace option.
262) Where do we use the MQ Series source qualifier and the application multi-group source qualifier? Give an example for better understanding.
Ans) We can use an MQSeries SQ when we have an MQ messaging system as the source (queue). When there is a need to extract data from a queue, which will basically have messages in XML format, we use a JMS or an MQ SQ depending on the messaging system. If you have a TIBCO EMS queue, use a JMS source, a JMS SQ and an XML Parser; if you have an MQ Series queue, use an MQ SQ, which will be associated with a flat file or a COBOL file.
263) How do we estimate the depth of the session scheduling queue? Where do
we set the number of maximum concurrent sessions that Informatica can run at a
given time?
Ans) You set the maximum number of concurrent sessions in the Informatica server configuration; by default it is 10, and you can set it to any number.
264) Discuss which is better among incremental load, Normal Load and Bulk load
Ans) It depends on the requirement. Otherwise an incremental load can be better, as it takes only the data that is not already present in the target. Performance-wise, bulk load is better than normal load, but bulk load places some conditions on the target data:
1) the data must not have any constraints defined on it;
2) avoid the double datatype; if it is necessary, use it as the last column of the table;
3) it does not support CHECK constraints.
265) What is the best way to show metadata (number of rows at source, target and each transformation level, error-related data) in a report format?
Ans) You can select these details from the repository tables; for example, use the view REP_SESS_LOG to get this data. A sample query is sketched below.
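A sketch of such a query against the MX view (treat the column names as illustrative, since they vary slightly by PowerCenter version):
select subject_area, workflow_name, session_name,
       successful_rows, failed_rows, actual_start
from rep_sess_log
order by actual_start desc;
Transformation-level row counts and error details live in other repository views, which can be joined in as needed.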
266) When does the Informatica server mark a batch as failed?
Ans) If one of its sessions is configured to "run if previous completes" and that previous session fails.
267) Which tool do you use to create and manage sessions and batches, and to monitor and stop the Informatica server?
Ans) The Informatica Server Manager; in 8.x this is the Integration Service.
268) What is the hierarchies in DWH
Ans) Data sources ---> Data acquisition ---> Warehouse ---> Front end tools --->
Metadata management ---> Data warehouse operation management
269) How can we store previous session logs?
Ans) Run the session with the 'Save session log by timestamp' option; then each run's log is kept automatically instead of overwriting the current session log.
270) My source has 1000 rows. I have brought 300 records into my ODS, so next time I want to load the remaining records, starting from the 301st record. Whenever I start the workflow again it loads from the beginning. How do we solve this problem?
Ans) You can do it with a Sequence Generator transformation, i.e. by changing the RESET option in the properties tab of your Sequence Generator transformation; then it will work. We can also use the recovery option, so that if loading stops in the middle because of any problem while extracting data, recovery lets us resume from the record where it stopped previously.
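An alternative is a mapping variable used as a persistent row counter (a sketch; $$ROWS_LOADED and the port names are illustrative, with the variable assumed to be declared with aggregation type Max and initial value 0):
v_rownum = v_rownum + 1
o_keep = IIF(v_rownum > $$ROWS_LOADED, 1, 0)
v_set = SETMAXVARIABLE($$ROWS_LOADED, v_rownum)
A downstream filter passes only rows with o_keep = 1; because the variable's final value is saved to the repository after a successful run, the next run skips the rows already loaded.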
271) What exactly is a dimension table?
Ans) Dimension tables give descriptive context. For example, if we take Student as a dimension table, we have attributes like college name, age, gender, etc., which give some description of a student.
272) What are the different threads in the DTM process?
Ans) Master thread: creates and manages all other threads.
Mapping thread: one mapping thread is created for each session; it fetches session and mapping information.
Pre- and post-session threads: created to perform pre- and post-session operations.
Reader thread: one thread is created for each partition of a source; it reads data from the source.
Writer thread: created to load data to the target.
Transformation thread: created to transform data.
273) What is a junk dimension
Ans) A "junk" dimension is a collection of random transactional codes, flags and/or
text attributes that are unrelated to any particular dimension. The junk dimension
is
simply a structure that provides a convenient place to store the junk attributes. A
good example would be a trade fact in a company that brokers equity trades.
274) Under what circumstances does the Informatica server produce an unrecoverable session?
Ans) The source qualifier transformation does not use sorted ports.
You change the partition information after the initial session fails.
Perform Recovery is disabled in the Informatica server configuration.
The sources or targets change after the initial session fails.
The mapping contains a Sequence Generator or Normalizer transformation.
A concurrent batch contains multiple failed sessions.
275) How does the server recognise the source and target databases?
Ans) By using an ODBC connection if it is relational, or an FTP connection if it is a flat file. You can verify this via the connection settings in the session properties for both sources and targets.
276) What's the difference between the Informatica PowerCenter server, the repository server and the repository?
Ans) The repository server manages connections to the repository from client applications; the PowerCenter server extracts the source data, performs the transformations and loads the targets; and the repository itself is the relational database that stores the metadata both of them use (see questions 246 and 255).
277) About Informatica PowerCenter 7: 1) Which mapping properties can be overridden at the Session task level? 2) What types of permissions are needed to run and schedule workflows?
Ans) 1. You can override any properties other than the source and targets themselves. Make sure the source and targets exist in your DB if it is a relational DB. If it is a flat file, you can override its properties. You can override the SQL if it is a relational DB, as well as the session log, DTM buffer size, cache sizes, etc.
2. You need Execute permission on the folder to run or schedule a workflow. You may have Read and Write, but you need Execute permission as well.
278) Two relational tables are connected to a Source Qualifier transformation; what possible errors will be thrown?
Ans) The only two possibilities I know of:
- the tables must have a primary key/foreign key relationship;
- both tables must be available in the same schema or same database.
279) What are the options in the target session for the update strategy transformation?
Ans) The options are: Insert, Update as Update, Update as Insert, Update else Insert, and Delete.
Update as Insert: this option specifies that all the update records from the source are flagged as inserts in the target. In other words, instead of updating the records in the target, they are inserted as new records.
Update else Insert: this option enables Informatica to flag the records either for update if they are old, or for insert if they are new records from the source.
280) Why do we use session partitioning in Informatica?
Ans) Performance can be improved by processing data in parallel in a single session by creating multiple partitions of the pipeline. The Informatica server achieves high performance by partitioning the pipeline and performing the extract, transformation and load for each partition in parallel.
281) I have a source column with data like 'ravi kumar'. I want to insert 'ravi' into one column and 'kumar' into another column of the target table. How do you implement this in Informatica?
Ans) A solution can be given in an Expression transformation using the SUBSTR and INSTR functions; use them to locate and cut the strings when the source holds multiple words. The syntax is given below:
SUBSTR(char as char, m as numeric, [n as numeric])
// Returns n characters of char, beginning at character m.
INSTR(char1 as char, char2 as char, [n as integer, [m as integer, [comparisonType as integer]]])
// Searches char1, beginning with its nth character, for the mth occurrence of char2 and returns the position of the character in char1 that is the first character of this occurrence. Linguistic comparison is done when comparisonType is 0, and binary comparison is done when comparisonType is any non-zero value. By default comparisonType is 0, i.e. linguistic comparison.
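For example (a sketch; FULL_NAME, FIRST_NAME and LAST_NAME are illustrative port names, and the name is assumed to contain a single space):
FIRST_NAME = SUBSTR(FULL_NAME, 1, INSTR(FULL_NAME, ' ') - 1)
LAST_NAME = SUBSTR(FULL_NAME, INSTR(FULL_NAME, ' ') + 1)
For 'ravi kumar' this yields 'ravi' and 'kumar', which can then be connected to the two target columns.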
282) Doubts regarding the Rank transformation: can we do ranking using two ports? Can we rank all the rows coming from the source, and how?
Ans) Rank port: used to designate the column for which you want to rank values. You can designate only one Rank port in a Rank transformation. The Rank port is an input/output port, and you must link it to another transformation. So you cannot use two ports for ranking in the Rank transformation.
Note: this requirement can still be achieved using Aggregator and Expression transformations.
283) What is the difference between ROWID and row? And between ROWID and row number?
Ans) Every row is identified by a ROWID, a pseudo-column present in every table; the physical address of the row is used to form the ROWID. In hexadecimal representation, ROWID is shown as an 18-character string of the format BBBBBBBBB.RRRR.FFFF (block.row.file). A row, on the other hand, is simply a record.

----------------------------------------------------

248036211-129775529-Informatica-Scenario-Based-Interview-Questions-With-Answers-1.pdf

Informatica Scenario Based Interview Questions with Answers
1. How to generate sequence numbers using expression transformation?
Solution: In the expression transformation, create a variable port and increment it by 1. Then assign the variable port to an output port. In the expression transformation, the ports are:
V_count=V_count+1
O_count=V_count
2. Design a mapping to load the first 3 rows from a flat file into a target?
Solution: You have to assign row numbers to each record. Generate the row numbers either using the expression transformation as mentioned above or using a sequence generator transformation. Then pass the output to a filter transformation and specify the filter condition as O_count <=3
3. Design a mapping to load the last 3 rows from a flat file into a target?
Solution: Consider the source has the following data.
col
a
b
c
d
e
Step1: You have to assign row numbers to each record. Generate the row numbers using the expression transformation as mentioned above and call the generated row number port O_count. Create a DUMMY output port in the same expression transformation and assign 1 to that port, so that the DUMMY output port always returns 1 for each row.
In the expression transformation, the ports are
V_count=V_count+1
O_count=V_count
O_dummy=1
The output of expression transformation will be
col, o_count, o_dummy
a, 1, 1
b, 2, 1
c, 3, 1
d, 4, 1
e, 5, 1
Step2: Pass the output of expression transformation to aggregator and do not
specify any group by condition. Create an output port O_total_records in the
aggregator and assign O_count port to it. The aggregator will return the last row
by default. The output of aggregator contains the DUMMY port which has value 1 and
O_total_records port which has the value of total number of records in the source.
In the aggregator transformation, the ports are
O_dummy
O_count
O_total_records=O_count
The output of aggregator transformation will be
O_total_records, O_dummy
5, 1
Step3: Pass the output of expression transformation and aggregator transformation to a joiner transformation and join on the DUMMY port. In the joiner transformation check the property sorted input; only then can you connect both expression and aggregator to the joiner transformation.
In the joiner transformation, the join condition will be O_dummy (port from aggregator transformation) = O_dummy (port from expression transformation)
The output of joiner transformation will be
col, o_count, o_total_records
a, 1, 5
b, 2, 5
c, 3, 5
d, 4, 5
e, 5, 5
Step4: Now pass the output of joiner transformation to a filter transformation and specify the filter condition as O_total_records (port from aggregator) - O_count (port from expression) <= 2.
In the filter transformation, the filter condition will be O_total_records - O_count <=2
The output of filter transformation will be
col, o_count, o_total_records
c, 3, 5
d, 4, 5
e, 5, 5
4. Design a mapping to load the first record from a flat file into one table A, the
last record from a flat file into table B and the remaining records into table C?
Solution: This is similar to the above problem; the first 3 steps are the same. In the last step, instead of using the filter transformation, you have to use a router transformation. In the router transformation create two output groups.
In the first group, the condition should be O_count=1; connect the corresponding output group to table A. In the second group, the condition should be O_count=O_total_records; connect the corresponding output group to table B. The output of the default group should be connected to table C.
5. Consider the following products data which contain duplicate rows.
A
B
C
C
B
D
B
Q1. Design a mapping to load all unique products in one table and the duplicate rows in another table. The first table should contain the following output
A
D
The second target should contain the following output
B
B
B
C
C
Solution: Use a sorter transformation and sort the products data. Pass the output to an expression transformation and create a dummy port O_dummy and assign 1 to that port, so that the DUMMY output port always returns 1 for each row.
The output of expression transformation will be
Product, O_dummy
A, 1
B, 1
B, 1
B, 1
C, 1
C, 1
D, 1
Pass the output of expression transformation to an aggregator transformation. Check the group by on the product port. In the aggregator, create an output port O_count_of_each_product and write the expression count(product).
The output of aggregator will be
Product, O_count_of_each_product
A, 1
B, 3
C, 2
D, 1
Now pass the output of expression transformation and aggregator transformation to a joiner transformation and join on the product port. In the joiner transformation check the property sorted input; only then can you connect both expression and aggregator to the joiner transformation.
The output of joiner will be
product, O_dummy, O_count_of_each_product
A, 1, 1
B, 1, 3
B, 1, 3
B, 1, 3
C, 1, 2
C, 1, 2
D, 1, 1
Now pass the output of joiner to a router transformation, create one group and specify the group condition as O_dummy=O_count_of_each_product. Then connect this group to one table. Connect the output of the default group to another table.
Q2. Design a mapping to load each product once into one table and the remaining duplicated products into another table. The first table should contain the following output
A
B
C
D
The second table should contain the following output
B
B
C
Solution: Use a sorter transformation and sort the products data. Pass the output to an expression transformation and create a variable port, V_curr_product, and assign the product port to it. Then create a V_count port and in the expression editor write IIF(V_curr_product=V_prev_product, V_count+1, 1). Create one more variable port V_prev_product and assign the product port to it. Now create an output port O_count and assign the V_count port to it.
In the expression transformation, the ports are
Product
V_curr_product=product
V_count=IIF(V_curr_product=V_prev_product,V_count+1,1)
V_prev_product=product
O_count=V_count
The output of expression transformation will be
Product, O_count
A, 1
B, 1
B, 2
B, 3
C, 1
C, 2
D, 1
Now pass the output of expression transformation to a router transformation, create one group and specify the condition as O_count=1. Then connect this group to one table. Connect the output of the default group to another table.
Informatica Scenario Based Questions
1. Consider the following employees data as source
employee_id, salary
10, 1000
20, 2000
30, 3000
40, 5000
Q1. Design a mapping to load the cumulative sum of salaries of employees into the target table. The target table data should look like
employee_id, salary, cumulative_sum
10, 1000, 1000
20, 2000, 3000
30, 3000, 6000
40, 5000, 11000
Solution: Connect the Source Qualifier to an expression transformation. In the expression transformation, create a variable port V_cum_sal and in the expression editor write V_cum_sal+salary. Create an output port O_cum_sal and assign V_cum_sal to it.
Q2. Design a mapping to get the previous row salary for the current row. If no previous row exists for the current row, then the previous row salary should be displayed as null. The output should look like
employee_id, salary, pre_row_salary
10, 1000, Null
20, 2000, 1000
30, 3000, 2000
40, 5000, 3000
Solution: Connect the Source Qualifier to an expression transformation. In the expression transformation, create a variable port V_count and increment it by one for each row entering the expression transformation. Also create a V_salary variable port and assign the expression IIF(V_count=1,NULL,V_prev_salary) to it. Then create one more variable port V_prev_salary and assign salary to it. Now create an output port O_prev_salary and assign V_salary to it. Connect the expression transformation to the target ports.
In the expression transformation, the ports will be
employee_id
salary
V_count=V_count+1
V_salary=IIF(V_count=1,NULL,V_prev_salary)
V_prev_salary=salary
O_prev_salary=V_salary
Q3. Design a mapping to get the next row salary for the current row. If there is no next row for the current row, then the next row salary should be displayed as null. The output should look like
employee_id, salary, next_row_salary
10, 1000, 2000
20, 2000, 3000
30, 3000, 5000
40, 5000, Null
Solution:
Step1: Connect the source qualifier to two expression transformations. In each expression transformation, create a variable port V_count and in the expression editor write V_count+1. Now create an output port O_count in each expression transformation. In the first expression transformation, assign V_count to O_count. In the second expression transformation assign V_count-1 to O_count.
In the first expression transformation, the ports will be
employee_id
salary
V_count=V_count+1
O_count=V_count
In the second expression transformation, the ports will be
employee_id
salary
V_count=V_count+1
O_count=V_count-1
Step2: Connect both the expression transformations to a joiner transformation and join them on the port O_count. Consider the first expression transformation as Master and the second one as Detail. In the joiner specify the join type as Detail Outer Join. In the joiner transformation check the property sorted input; only then can you connect both expression transformations to the joiner transformation.
Step3: Pass the output of the joiner transformation to a target table. From the joiner, connect the employee_id and salary obtained from the first expression transformation to the employee_id and salary ports in the target table. Then from the joiner, connect the salary obtained from the second expression transformation to the next_row_salary port in the target table.
Q4. Design a mapping to find the sum of salaries of all employees; this sum should repeat for all the rows. The output should look like
employee_id, salary, salary_sum
10, 1000, 11000
20, 2000, 11000
30, 3000, 11000
40, 5000, 11000
Solution:
Step1: Connect the source qualifier to the expression transformation. In the expression transformation, create a dummy port and assign value 1 to it.
In the expression transformation, the ports will be
employee_id
salary
O_dummy=1
Step2: Pass the output of expression transformation to an aggregator. Create a new port O_sum_salary and in the expression editor write SUM(salary). Do not specify group by on any port.
In the aggregator transformation, the ports will be
salary
O_dummy
O_sum_salary=SUM(salary)
Step3: Pass the output of expression transformation and aggregator transformation to a joiner transformation and join on the DUMMY port. In the joiner transformation check the property sorted input; only then can you connect both expression and aggregator to the joiner transformation.
Step4: Pass the output of joiner to the target table.
2. Consider the following employees table as source
department_no, employee_name
20, R
10, A
10, D
20, P
10, B
10, C
20, Q
20, S
Q1. Design a mapping to load a target table with the following values from the above source:
department_no, employee_list
10, A
10, A,B
10, A,B,C
10, A,B,C,D
20, A,B,C,D,P
20, A,B,C,D,P,Q
20, A,B,C,D,P,Q,R
20, A,B,C,D,P,Q,R,S
Solution:
Step1: Use a sorter transformation and sort the data using the sort key as department_no, then pass the output to the expression transformation. In the expression transformation, the ports will be
department_no
employee_name
V_employee_list = IIF(ISNULL(V_employee_list),employee_name,V_employee_list||','||employee_name)
O_employee_list = V_employee_list
Step2: Now connect the expression transformation to a target table.
Q2. Design a mapping to load a target table with the following values from the above source:
department_no, employee_list
10, A
10, A,B
10, A,B,C
10, A,B,C,D
20, P
20, P,Q
20, P,Q,R
20, P,Q,R,S
Solution:
Step1: Use a sorter transformation and sort the data using the sort key as department_no, then pass the output to the expression transformation. In the expression transformation, the ports will be
department_no
employee_name
V_curr_deptno=department_no
V_employee_list = IIF(V_curr_deptno != V_prev_deptno, employee_name, V_employee_list||','||employee_name)
V_prev_deptno=department_no
O_employee_list = V_employee_list
Step2: Now connect the expression transformation to a target table.
Q3. Design a mapping to load a target table with the following values from the above source:
department_no, employee_names
10, A,B,C,D
20, P,Q,R,S
Solution: The first step is the same as in the above problem. Pass the output of the expression to an aggregator transformation and specify the group by as department_no. Now connect the aggregator transformation to a target table.
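For Q3, an equivalent result can also be produced directly in Oracle SQL (a sketch; it assumes Oracle 11g or later and a table named EMPLOYEES):
select department_no,
       listagg(employee_name, ',') within group (order by employee_name) as employee_names
from employees
group by department_no;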
.....................................
1. Consider the following product types data as the source.
Product_id, product_type
10, video
10, Audio
20, Audio
30, Audio
40, Audio
50, Audio
10, Movie
20, Movie
30, Movie
40, Movie
50, Movie
60, Movie
Assume that only 3 product types are available in the source. The source contains 12 records and you don't know how many products are available in each product type.
Q1. Design a mapping to select 9 products in such a way that 3 products are selected from video, 3 products are selected from Audio and the remaining 3 products are selected from Movie.
Solution:
Step1: Use a sorter transformation and sort the data using the key as product_type.
Step2: Connect the sorter transformation to an expression transformation. In the expression transformation, the ports will be
product_id
product_type
V_curr_prod_type=product_type
V_count = IIF(V_curr_prod_type = V_prev_prod_type,V_count+1,1)
V_prev_prod_type=product_type
O_count=V_count
Step3: Now connect the expression transformation to a filter transformation and specify the filter condition as O_count<=3. Pass the output of the filter to a target table.
Q2. In the above problem Q1, if the number of products in a particular product type is less than 3, then you won't get the total of 9 records in the target table; for example, see the video type in the source data. Now design a mapping in such a way that even if the number of products in a particular product type is less than 3, the shortfall is made up with records from the other product types. For example: if the number of products in video is 1, then the remaining 2 records should come from Audio or Movie. So, the total number of records in the target table should always be 9.
Solution: The first two steps are the same as above.
Step3: Connect the expression transformation to a sorter transformation and sort the data using the key as O_count. The ports in the sorter transformation will be
product_id
product_type
O_count (sort key)
Step4: Discard the O_count port and connect the sorter transformation to an expression transformation. The ports in the expression transformation will be
product_id
product_type
V_count=V_count+1
O_prod_count=V_count
Step5: Connect the expression to a filter transformation and specify the filter condition as O_prod_count<=9. Connect the filter transformation to a target table.
2. Design a mapping to convert column data into row data without using the normalizer transformation. The source data looks like
col1, col2, col3
a, b, c
d, e, f
The target table data should look like
Col
a
b
c
d
e
f
Solution: Create three expression transformations with one port each. Connect col1 from the Source Qualifier to the port in the first expression transformation, col2 to the port in the second expression transformation, and col3 to the port in the third expression transformation. Create a union transformation with three input groups, each having one port. Now connect the expression transformations to the input groups and connect the union transformation to the target table.
3. Design a mapping to convert row data into column data. The source data looks like
id, value
10, a
10, b
10, c
20, d
20, e
20, f
The target table data should look like
id, col1, col2, col3
10, a, b, c
20, d, e, f
Solution:
Step1: Use a sorter transformation and sort the data using the id port as the key. Then connect the sorter transformation to the expression transformation.
Step2: In the expression transformation, create the ports and assign the expressions as mentioned below.
id
value
V_curr_id=id
V_count= IIF(V_curr_id=V_prev_id,V_count+1,1)
V_prev_id=id
O_col1= IIF(V_count=1,value,NULL)
O_col2= IIF(V_count=2,value,NULL)
O_col3= IIF(V_count=3,value,NULL)
Step3: Connect the expression transformation to an aggregator transformation. In the aggregator transformation, create the ports and assign the expressions as mentioned below.
id (specify group by on this port)
O_col1
O_col2
O_col3
col1=MAX(O_col1)
col2=MAX(O_col2)
col3=MAX(O_col3)
Step4: Now connect the ports id, col1, col2, col3 from the aggregator transformation to the target table.
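The same row-to-column pivot can be sketched in Oracle SQL (illustrative; it assumes the source table is named SRC):
select id,
       max(decode(rn, 1, value)) as col1,
       max(decode(rn, 2, value)) as col2,
       max(decode(rn, 3, value)) as col3
from (select id, value,
             row_number() over (partition by id order by value) as rn
      from src)
group by id;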
..................................
Take a look at the following tree structure diagram. From the tree structure, you can easily derive the parent-child relationship between the elements. For example, B is the parent of D and E.
The above tree structure data is represented in a table as shown below.
c1, c2, c3, c4
A, B, D, H
A, B, D, I
A, B, E, NULL
A, C, F, NULL
A, C, G, NULL
Here in this table, column c1 is the parent of column c2, column c2 is the parent of column c3, and column c3 is the parent of column c4.
Q1. Design a mapping to load the target table with the below data. Here you need to generate sequence numbers for each element and then you have to get the parent id. As the element "A" is at the root, it does not have any parent and its parent_id is NULL.
id, element, parent_id
1, A, NULL
2, B, 1
3, C, 1
4, D, 2
5, E, 2
6, F, 3
7, G, 3
8, H, 4
9, I, 4
A solution to this problem as an Oracle SQL query is available separately.
Q2. This is an extension to problem Q1. Say column c2 has null for all the rows; then c1 becomes the parent of c3, and c3 is the parent of c4. Say both columns c2 and c3 have null for all the rows; then c1 becomes the parent of c4. Design a mapping to accommodate these types of null conditions.
.....................................
Q1. The source data contains only the column 'id'. It will have sequence numbers from 1 to 1000. The source data looks like
Id
1
2
3
4
5
6
7
8
....
1000
Create a workflow to load only the Fibonacci numbers into the target table. The target table data should look like
Id
1
2
3
5
8
13
.....
In the Fibonacci series each subsequent number is the sum of the previous two numbers. Here assume that the first two numbers of the Fibonacci series are 1 and 2.
Solution:
STEP1: Drag the source to the mapping designer and then, in the Source Qualifier Transformation properties, set the number of sorted ports to one. This will sort the source data in ascending order, so that we get the numbers in sequence as 1, 2, 3, ... 1000.
STEP2: Connect the Source Qualifier Transformation to the Expression
Transformation. In the Expression Transformation, create three variable ports and
one output port. Assign the expressions to the ports as shown below.
Ports in Expression Transformation:
id
v_sum = v_prev_val1 + v_prev_val2
v_prev_val1 = IIF(id=1 OR id=2, 1, IIF(v_sum = id, v_prev_val2, v_prev_val1))
v_prev_val2 = IIF(id=1 OR id=2, 2, IIF(v_sum = id, v_sum, v_prev_val2))
o_flag = IIF(id=1 OR id=2, 1, IIF(v_sum = id, 1, 0))
STEP3: Now connect the Expression Transformation to the Filter Transformation and
specify the Filter Condition as o_flag=1
STEP4: Connect the Filter Transformation to the Target Table.
Q2. The source table contains two columns "id" and "val". The source data looks like
id val
1 a,b,c
2 pq,m,n
3 asz,ro,liqt
Here the "val" column contains comma-delimited data with three fields in that column. Create a workflow to split the fields in the "val" column into separate rows. The output should look like
id val
1 a
1 b
1 c
2 pq
2 m
2 n
3 asz
3 ro
3 liqt
Solution:
STEP1: Connect three Source Qualifier transformations to the Source Definition
STEP2: Now connect all the three Source Qualifier transformations to the Union
Transformation. Then connect the Union Transformation to the Sorter Transformation.
In the sorter transformation sort the data based on Id port in ascending order.
STEP3: Pass the output of the Sorter Transformation to the Expression Transformation. The ports in the Expression Transformation are:
id (input/output port)
val (input port)
v_current_id (variable port) = id
v_count (variable port) = IIF(v_current_id!=v_previous_id,1,v_count+1)
v_previous_id (variable port) = id
o_val (output port) = DECODE(v_count,
  1, SUBSTR(val, 1, INSTR(val,',',1,1)-1),
  2, SUBSTR(val, INSTR(val,',',1,1)+1, INSTR(val,',',1,2)-INSTR(val,',',1,1)-1),
  3, SUBSTR(val, INSTR(val,',',1,2)+1),
  NULL)
STEP4: Now pass the output of the Expression Transformation to the Target definition. Connect the id and o_val ports of the Expression Transformation to the id and val ports of the Target Definition.
An Oracle SQL solution to this problem also exists; that query provides a dynamic solution where the "val" column can have a varying number of fields in each row.

-----------------------------------------------------------

143365737-Informatica-Scenarios.pdf
Q1) I have a flat file and want to reverse its contents, which means the first record should come as the last record and the last record should come as the first record, and load them into the target file. As an example consider the source flat file data as
Informatica Enterprise Solution
Informatica Power center
Informatica Power exchange
Informatica Data quality
The target flat file data should look as
Informatica Data quality
Informatica Power exchange
Informatica Power center
Informatica Enterprise Solution
Solution: Follow the below steps for creating the mapping logic:
- Create a new mapping.
- Drag the flat file source into the mapping.
- Create an expression transformation and drag the ports of the source qualifier transformation into the expression transformation.
- Create the below additional ports in the expression transformation and assign the corresponding expressions:
Variable port: v_count = v_count+1
Output port: o_count = v_count
- Now create a sorter transformation and drag the ports of the expression transformation into it.
- In the sorter transformation specify the sort key as o_count and the sort order as DESCENDING.
- Drag the target definition into the mapping and connect the ports of the sorter transformation to the target.
Q2) Load the header record of the flat file into the first target, the footer record into the second target and the remaining records into the third target. A solution to this problem using an aggregator and a joiner was posted earlier; here we will see how to implement it by reversing the contents of the file.
Solution:
- Connect the source qualifier transformation to the expression transformation. In the expression transformation create the additional ports mentioned above.
- Connect the expression transformation to a router. In the router transformation create an output group and specify the group condition as o_count=1. Connect this output group to a target and the default group to a sorter transformation.
- Sort the data in descending order on the o_count port.
- Connect the output of the sorter transformation to an expression transformation (don't connect the o_count port).
- Again in this expression transformation create the same additional ports mentioned above.
- Connect this expression transformation to a router and create an output group. In the output group specify the condition as o_count=1 and connect this group to the second target. Connect the default group to the third target.
*********
INFORMATICA SCENARIO BASED INTERVIEW QUESTIONS WITH ANSWERS - PART 1
1. How to generate sequence numbers using expression transformation? Solution: In
the expression transformation, create a variable port and increment it by 1. Then
assign the variable port to an output port. In the expression transformation, the
ports are: V_count=V_count+1 O_count=V_count 2. Design a mapping to load the first
3 rows from a flat file into a target? Solution: You have to assign row numbers to
each record. Generate the row numbers either using the expression transformation as
mentioned above or use sequence generator transformation. Then pass the output to
filter transformation and specify the filter condition as O_count <=3 3. Design a
mapping to load the last 3 rows from a flat file into a target? Solution: Consider
the source has the following data. col a b c d e Step1: You have to assign row
numbers to each record. Generate the row numbers using the expression
transformation as mentioned above and call the row number generated port as
O_count.
Create a DUMMY output port in the same expression transformation and assign 1 to
that port. So that, the DUMMY output port always return 1 for each row. In the
expression transformation, the ports are V_count=V_count+1 O_count=V_count
O_dummy=1 The output of expression transformation will be col, o_count, o_dummy a,
1, 1 b, 2, 1 c, 3, 1 d, 4, 1 e, 5, 1 Step2: Pass the output of expression
transformation to aggregator and do not specify any group by condition. Create an
output port O_total_records in the aggregator and assign O_count port to it. The
aggregator will return the last row by default. The output of aggregator contains
the DUMMY port which has value 1 and O_total_records port which has the value of
total number of records in the source. In the aggregator transformation, the ports
are O_dummy O_count O_total_records=O_count The output of aggregator transformation
will be O_total_records, O_dummy 5, 1 Step3: Pass the output of expression
transformation, aggregator transformation to joiner transformation and join on the
DUMMY port. In the joiner transformation check the property sorted input, then only
you can connect both expression and aggregator to joiner transformation. In the
joiner transformation, the join condition will be O_dummy (port from aggregator
transformation) = O_dummy (port from expression transformation) The output of
joiner transformation will be col, o_count, o_total_records a, 1, 5 b, 2, 5 c, 3, 5
d, 4, 5 e, 5, 5 Step4: Now pass the ouput of joiner transformation to filter
transformation and specify the filter condition as O_total_records (port from
aggregator)-O_count(port from expression) <=2 In the filter transformation, the
filter condition will be O_total_records - O_count <=2 The output of filter
transformation will be
col o_count, o_total_records c, 3, 5 d, 4, 5 e, 5, 5
INFORMATICA SCENARIO BASED INTERVIEW QUESTIONS WITH ANSWERS - PART 1
1. How to generate sequence numbers using expression transformation? Solution: In
the expression transformation, create a variable port and increment it by 1. Then
assign the variable port to an output port. In the expression transformation, the
ports are: V_count=V_count+1 O_count=V_count 2. Design a mapping to load the first
3 rows from a flat file into a target? Solution: You have to assign row numbers to
each record. Generate the row numbers either using the expression transformation as
mentioned above or use sequence generator transformation. Then pass the output to
filter transformation and specify the filter condition as O_count <=3 3. Design a
mapping to load the last 3 rows from a flat file into a target? Solution: Consider
the source has the following data. col a b c d e Step1: You have to assign row
numbers to each record. Generate the row numbers using the expression
transformation as mentioned above and call the row number generated port as
O_count. Create a DUMMY output port in the same expression transformation and
assign 1 to that port. So that, the DUMMY output port always return 1 for each row.
In the expression transformation, the ports are V_count=V_count+1 O_count=V_count
O_dummy=1 The output of expression transformation will be col, o_count, o_dummy a,
1, 1 b, 2, 1 c, 3, 1 d, 4, 1 e, 5, 1 Step2: Pass the output of expression
transformation to aggregator and do not specify any group by condition. Create an
output port O_total_records in the aggregator and assign O_count port to it. The
aggregator will return the last row by default. The output of aggregator contains
the DUMMY
port which has value 1 and O_total_records port which has the value of total number
of records in the source. In the aggregator transformation, the ports are O_dummy
O_count O_total_records=O_count The output of aggregator transformation will be
O_total_records, O_dummy 5, 1 Step3: Pass the output of expression transformation,
aggregator transformation to joiner transformation and join on the DUMMY port. In
the joiner transformation check the property sorted input, then only you can
connect both expression and aggregator to joiner transformation. In the joiner
transformation, the join condition will be O_dummy (port from aggregator
transformation) = O_dummy (port from expression transformation) The output of
joiner transformation will be col, o_count, o_total_records a, 1, 5 b, 2, 5 c, 3, 5
d, 4, 5 e, 5, 5 Step4: Now pass the ouput of joiner transformation to filter
transformation and specify the filter condition as O_total_records (port from
aggregator)-O_count(port from expression) <=2 In the filter transformation, the
filter condition will be O_total_records - O_count <=2 The output of filter
transformation will be col o_count, o_total_records c, 3, 5 d, 4, 5 e, 5, 5 4.
Design a mapping to load the first record from a flat file into one table A, the
last record from a flat file into table B and the remaining records into table C?
Solution: This is similar to the above problem; the first 3 steps are same. In the
last step instead of using the filter transformation, you have to use router
transformation. In the router transformation create two output groups. In the first
group, the condition should be O_count=1 and connect the corresponding output group
to table A. In the second group, the condition should be O_count=O_total_records
and connect the corresponding output group to table B. The output of default group
should be connected to table C. 5. Consider the following products data which
contain duplicate rows. A
B C C B D B Q1. Design a mapping to load all unique products in one table and the
duplicate rows in another table. The first table should contain the following
output A D The second target should contain the following output B B B C C
Solution: Use sorter transformation and sort the products data. Pass the output to
an expression transformation and create a dummy port O_dummy and assign 1 to that
port. So that, the DUMMY output port always return 1 for each row. The output of
expression transformation will be Product, O_dummy A, 1 B, 1 B, 1 B, 1 C, 1 C, 1 D,
1 Pass the output of expression transformation to an aggregator transformation.
Check the group by on product port. In the aggreagtor, create an output port
O_count_of_each_product and write an expression count(product). The output of
aggregator will be Product, O_count_of_each_product A, 1 B, 3 C, 2 D, 1 Now pass
the output of expression transformation, aggregator transformation to joiner
transformation and join on the products port. In the joiner transformation check
the property sorted input, then only you can connect both expression and aggregator
to joiner transformation. The output of joiner will be product, O_dummy,
O_count_of_each_product A, 1, 1
B, 1, 3 B, 1, 3 B, 1, 3 C, 1, 2 C, 1, 2 D, 1, 1 Now pass the output of joiner to a
router transformation, create one group and specify the group condition as
O_dummy=O_count_of_each_product. Then connect this group to one table. Connect the
output of default group to another table. Q2. Design a mapping to load each product
once into one table and the remaining products which are duplicated into another
table. The first table should contain the following output A B C D The second table
should contain the following output B B C Solution: Use sorter transformation and
sort the products data. Pass the output to an expression transformation and create
a variable port,V_curr_product, and assign product port to it. Then create a
V_count port and in the expression editor write IIF(V_curr_product=V_prev_product,
V_count+1,1). Create one more variable port V_prev_port and assign product port to
it. Now create an output port O_count port and assign V_count port to it. In the
expression transformation, the ports are Product V_curr_product=product
V_count=IIF(V_curr_product=V_prev_product,V_count+1,1) V_prev_product=product
O_count=V_count The output of expression transformation will be Product, O_count A,
1 B, 1 B, 2 B, 3 C, 1 C, 2 D, 1 Now Pass the output of expression transformation to
a router transformation, create one group and specify the condition as O_count=1.
Then connect this group to one table. Connect the output of default group to
another table.
1. Consider the following employees data as source employee_id, salary 10, 1000 20,
2000 30, 3000 40, 5000 Q1. Design a mapping to load the cumulative sum of salaries
of employees into target table? The target table data should look like as
employee_id, salary, cumulative_sum 10, 1000, 1000 20, 2000, 3000 30, 3000, 6000
40, 5000, 11000 Solution: Connect the source Qualifier to expression
transformation. In the expression transformation, create a variable port V_cum_sal
and in the expression editor write V_cum_sal+salary. Create an output port
O_cum_sal and assign V_cum_sal to it. Q2. Design a mapping to get the pervious row
salary for the current row. If there is no pervious row exists for the current row,
then the pervious row salary should be displayed as null. The output should look
like as employee_id, salary, pre_row_salary 10, 1000, Null 20, 2000, 1000 30, 3000,
2000 40, 5000, 3000 Solution: Connect the source Qualifier to expression
transformation. In the expression transformation, create a variable port V_count
and increment it by one for each row entering the expression transformation. Also
create V_salary variable port and assign the expression
IIF(V_count=1,NULL,V_prev_salary) to it . Then create one more variable port
V_prev_salary and assign Salary to it. Now create output port O_prev_salary and
assign V_salary to it. Connect the expression transformation to the target ports.
In the expression transformation, the ports will be employee_id
salary V_count=V_count+1 V_salary=IIF(V_count=1,NULL,V_prev_salary)
V_prev_salary=salary O_prev_salary=V_salary Q3. Design a mapping to get the next
Q3. Design a mapping to get the next row's salary for the current row. If there is no next row for the current row, then the next row salary should be displayed as null. The output should look like:

employee_id, salary, next_row_salary
10, 1000, 2000
20, 2000, 3000
30, 3000, 5000
40, 5000, Null

Solution:
Step 1: Connect the source qualifier to two expression transformations. In each expression transformation, create a variable port V_count and in the expression editor write V_count+1. Now create an output port O_count in each expression transformation. In the first expression transformation, assign V_count to O_count. In the second expression transformation, assign V_count-1 to O_count.

In the first expression transformation, the ports will be:
employee_id
salary
V_count = V_count+1
O_count = V_count

In the second expression transformation, the ports will be:
employee_id
salary
V_count = V_count+1
O_count = V_count-1

Step 2: Connect both expression transformations to a joiner transformation and join them on the port O_count. Consider the first expression transformation as Master and the second one as Detail. In the joiner, specify the join type as Detail Outer Join. In the joiner transformation, check the Sorted Input property; only then can you connect both expression transformations to the joiner transformation.

Step 3: Pass the output of the joiner transformation to a target table. From the joiner, connect the employee_id and salary obtained from the first expression transformation to the employee_id and salary ports in the target table. Then from the joiner, connect the salary obtained from the second expression transformation to the next_row_salary port in the target table.
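A sketch of this offset join in Python (hypothetical data): numbering one copy of the stream 1, 2, 3, ... and the other 0, 1, 2, ... and joining on that number pairs each row with its successor; the last row finds no match and gets null.

src = [(10, 1000), (20, 2000), (30, 3000), (40, 5000)]
master = {i + 1: row for i, row in enumerate(src)}  # first pipeline: O_count = V_count
detail = {i: s for i, (_, s) in enumerate(src)}     # second pipeline: O_count = V_count - 1
for o_count, (employee_id, salary) in master.items():
    # the outer join on O_count keeps the unmatched last row, with a null next salary
    print(employee_id, salary, detail.get(o_count))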
Q4. Design a mapping to find the sum of salaries of all employees, and this sum should repeat for all the rows. The output should look like:

employee_id, salary, salary_sum
10, 1000, 11000
20, 2000, 11000
30, 3000, 11000
40, 5000, 11000

Solution:
Step 1: Connect the source qualifier to an expression transformation. In the expression transformation, create a dummy port and assign the value 1 to it. In the expression transformation, the ports will be:
employee_id
salary
O_dummy = 1

Step 2: Pass the output of the expression transformation to an aggregator. Create a new port O_sum_salary and in the expression editor write SUM(salary). Do not specify group by on any port. In the aggregator transformation, the ports will be:
salary
O_dummy
O_sum_salary = SUM(salary)

Step 3: Pass the output of the expression transformation and the aggregator transformation to a joiner transformation and join on the dummy port. In the joiner transformation, check the Sorted Input property; only then can you connect both the expression and the aggregator to the joiner transformation.

Step 4: Pass the output of the joiner to the target table.
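The same pattern in Python (hypothetical data): an ungrouped aggregate produces a single total row, and joining on the constant O_dummy attaches it to every source row.

src = [(10, 1000), (20, 2000), (30, 3000), (40, 5000)]
o_sum_salary = sum(s for _, s in src)     # aggregator with no group-by port
for employee_id, salary in src:
    # joining on O_dummy = 1 broadcasts the single total row to every input row
    print(employee_id, salary, o_sum_salary)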
2. Consider the following employees table as source:

department_no, employee_name
20, R
10, A
10, D
20, P
10, B
10, C
20, Q
20, S

Q1. Design a mapping to load a target table with the following values from the above source:

department_no, employee_list
10, A
10, A,B
10, A,B,C
10, A,B,C,D
20, A,B,C,D,P
20, A,B,C,D,P,Q
20, A,B,C,D,P,Q,R
20, A,B,C,D,P,Q,R,S

Solution:
Step 1: Use a sorter transformation and sort the data using department_no as the sort key, then pass the output to an expression transformation. In the expression transformation, the ports will be:
department_no
employee_name
V_employee_list = IIF(ISNULL(V_employee_list), employee_name, V_employee_list||','||employee_name)
O_employee_list = V_employee_list

Step 2: Now connect the expression transformation to a target table.

Q2. Design a mapping to load a target table with the following values from the above source:

department_no, employee_list
10, A
10, A,B
10, A,B,C
10, A,B,C,D
20, P
20, P,Q
20, P,Q,R
20, P,Q,R,S

Solution:
Step 1: Use a sorter transformation and sort the data using department_no as the sort key, then pass the output to an expression transformation. In the expression transformation, the ports will be:
department_no
employee_name
V_curr_deptno = department_no
V_employee_list = IIF(V_curr_deptno != V_prev_deptno, employee_name, V_employee_list||','||employee_name)
V_prev_deptno = department_no
O_employee_list = V_employee_list

Step 2: Now connect the expression transformation to a target table.

Q3. Design a mapping to load a target table with the following values from the above source:

department_no, employee_names
10, A,B,C,D
20, P,Q,R,S

Solution: The first step is the same as in the above problem. Pass the output of the expression transformation to an aggregator transformation and specify the group by as department_no. Now connect the aggregator transformation to a target table.
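A Python sketch of the department-wise running list of Q2 (hypothetical data): the list resets whenever the department changes in the sorted stream, and keeping only the last row per department (Q3's aggregator) yields the final lists.

rows = sorted([(20, "R"), (10, "A"), (10, "D"), (20, "P"),
               (10, "B"), (10, "C"), (20, "Q"), (20, "S")])  # sorter on department_no
v_prev_deptno, v_list = None, ""
for deptno, name in rows:
    # IIF(V_curr_deptno != V_prev_deptno, employee_name, V_employee_list||','||employee_name)
    v_list = name if deptno != v_prev_deptno else v_list + "," + name
    v_prev_deptno = deptno
    print(deptno, v_list)    # O_employee_list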

-----------------------------------------------------------

51975127-Excelent-scenarios-and-faq-s-of-informatica.pdf

Informatica Interview Questions


ALL-Interview.com
How to list the top 10 salaries without using a Rank transformation?
what is the economic comparison of all the Informatica versions?
what is Data Ware Housing?
How to create slowly changing dimension in informatica?
How do you handle two sessions in Informatica
How to implement de-normalization concept in Informatica Mappings?
Do we use any tool for handling performance tuning in real time?
What's the difference between source and target object definitions in Informatica?
What is the advantages of converting stored procedures into Informatica mappings?
What is the purpose of using UNIX commands in informatica. Which UNIX commands are
generally used with informatica?
Explain about scheduling real time in informatica
What is the difference between SQL Overriding in Source qualifier and Lookup
transformation?
How do you define fact less Fact Table in Informatica?
Does Informatica provide a SAS exit in any of its products?
What is the role/use of unix in informatica
What is informatica
What is informatica power
Difference between data mining and data warehousing
What is MODEL is Data mining world?
How can we load the normalized data ( Vertical data) to (Horizontal data)with out
using decode in the expression
transformation and the aggregator transformation.
how to do estimation before staring development in project. Here estimation in the
sense how many associates are
required, etc to complete the project.
What are the ETL tools available in DWH?
Diff b/w Shortcut and reusable Object ?
how do we do performance tuning in informatica
What is the 'Power Center Pushdown Optimization Option' in INFORMATICA?
If the number of source columns changes every time, how to deal with it without changing the mapping?
What is the "change cache" in Informatica?
can we create index and drop index in existing table while using informatica
If we are using an aggregate but forget to mention the group by port .what will be
the output??
Informatica software installation 8.1/7.1.3/7.1 with an Oracle 10g database (optionally
Teradata V2R6)
How/where can I install Informatica software with Oracle or Teradata as the database
How to transform normalized data to denormalized form in informatica? Is there any
logic or any transformations to achieve this?
why do we go for update strategy tr in SCD rather using the session properties?
How many mapplets u have created? and what is the logic used
Is a LOOKUP condition nothing but a join condition? What type of join is it by
default? Using the lookup condition, how many types of relational conditions can we make?
Dependency Errors in Informatica?Do u got any dependency problems while running
session?
What is throughput in Informatica? How does it work, and where can I find this option
to check it?
Please explain in detail with examples: 1. Conformed Dimension 2. Junk Dimension
3. Degenerate Dimension 4. Slowly Changing Dimensions
How can i catch the Duplicate Rows From Sorter Trans in a Separate Target Table ?
how can we perform incremental aggregation?explain with example?
difference between shortcut and reusable transformation
difference between source based commit? and target based commit? which is better
with respect to performance?
What is Target Update Override? What is the Use ?
What is a Shortcut and What is the difference between a Shortcut and a Reusable
Transformation?
why sequence generator should not directly connected to joiner transformation ?
How to create a mapping ? id date 101 2/4/2008 101 4/4/2008 102 6/4/2008 102
4/4/2008 103 4/4/2008 104 8/4/2008 O/P
- shuould have only one id with the min(date) How to create a mapping for this
How to schedule the Informatica job with the Unix cron scheduling tool?
How to load a mainframe source into Informatica?
what is the difference between persistence and dynamic caches? On which conditions
we are using these caches?
Change Data Capture in Informatica using incremental aggregation: how do we identify
this data in the target table?
What are set operators in Oracle
what is the Source File limitation in Informatica?how many flat file we can use as
a source in a mapping if.. 1) The Structure
of flat files are same & 2) If the Structure of flat files are different
How generate Sequence Numbers to Target Table (with out using Sequence Gen
Trans,Rank Trans).
How scd will work ?
When we load flat files into target tables how do we identify duplicates? and where
do load the duplicate records for further
reference?
How do we do change data capture? Is this Slowly changing Dimension technique?
What is the difference between Connected and Unconnected Look up
Transformation.Give one or two examples
what are the difference between active and passive transformation with some
examples.
What is a flat file and how to use flat file in informatica
What is a surrogate key?Why we use it in a mapping?
How Union Transformation is an Active Trans
What are partitions in informatica, and which one is used for better performance?
when we develop a project what are the performance issue will raise??
if a table have INDEX and CONSTRAINT why it raise the performance issue bcoz when
we drop the index and disable the
constraint it performed better?
what are UNIX commands frequently used in informatica?
what is diff between grep and find
Can we use different look up transformations for a same look up table (look up
condition may or may not be same)with
different output ports?How the cache files will be affected?
Diff B/W MAP Parameter,SESSION Parameter, Database connection session parameters.?
Its possible to Create
3parameters at a time?If Possible which one will fire FIRST?
Which is costliest transformation? costly means occupying more memory?
how to run two workflows (not sessions) sequentially? What is the process? Explain in
detail.
what is the monster dimension give me one example
Which gives more performance, a fixed-width or a delimited file, and why?
What are the challenges of Data Warehousing in the future?
why we use source qualifier transformation?
when we go for unconnected look up transformation? and why?
installation procedure for PowerCenter 8.1.1, especially domain_config; how to use
parameter files
Is the Sorter transformation passive or active when the DISTINCT box is checked or
unchecked? Why?
explain about sales project in informatica
After we make a folder shared can it be reversed?Why?
Transformer is a __________ stage. Options: 1. Passive 2. Active 3. Dynamic 4. Static
need for registering a repository server
How to display First letter of Names in Caps?
How to merge First Name & Last Name
How to retrieve last two days updated records?
How to get EVEN & ODD numbers separately?
How to extract original records at one target & Duplicate records at one target?
How to display null values on a target & non-null values on a target?
How to update records in Target, without using Update Strategy?
Is it possible to have "5 source & 5 Target" in single mapping?
Without using Look up & Sequence Generator, How to generate Sequence?
In joiner, how to load Master table in Target?
How to compare Source and Target table, without using dynamic look up?
How to join 2 tables, without using any condition?
Without source how to insert record to target?
How to find from a source which has 10,000 records, find the average between 500th
to 600th record?
how will you remove the duplicate records from flat file without using sorter?
how will you get 21 to 30 record from 50 records?
How will you combine 3 different sources with a single source?
How will you display 10-15 letters from a name? (for ex:
name="sivasubram'aniam'ramakrishnan". o/p wanted="aniam")
How will you display "Mr" for male & "Mrs" for female in target table?
how to join the two flat files using the joiner t/r if there is no matching port?
How many cubes create from a single model?
What is the difference between Oracle performance and Informatica performance?
Which performance is better?
how to declare array in plsql?
what is plsql table?
what is the difference between lookup override and joiner?
How to send duplicates to one target and unique rows to one target?target is empty
How to delete duplicate records in a flat file source?
What is polling?
How to load a relational source into file target?
What are the phases in SDLC?
How the Informatica Server reads parameter file?
How to load relational source into file target?
What is A complex mapping?
What is a data modeling?
how to move the mappings from your local machine to the client's environment?
Difference between filter and router?
what are the parameter and variable
What is the difference between procedure and stored procedure?
what is the difference between a target load plan and constraint-based loading (CBL)?
how can u load the data in time dimension?
what is the diff b/w union and joiner and look up?
what is the process we used in joiner transformation,there is no matching column in
sources?
what is the process of target load planing?
what is lookup override?
how many types transformations supported by sorted input?
How to delete duplicate records in Informatica?
What is the difference between a Junk and a Conformed Dimension? Where can each be
used in Informatica?
What is the difference between Bitmap and Btree index?
how to calculate the optimum cache size in aggregate transformation?
what target override?what advantages it has compare to target update
what is cdc? how to use it in creation of mappings?
how do we create datamart?
we r using aggregator with out using group by?
daily how much amount of data send to production?
How to write a procedure for a date which is in three different formats,and you
want to load into data warehouse in any
single date format
Ho to handle changing source file counts in a mapping?
what is mapping optimization? what are the techniques for that
how to obtain performance data for individual transformations.
can a port in an expression transformation be given the name DISTINCT
task is running successfully but data is not loaded why?
what is galaxy repository?
what is mean by grouping of condition column in look up transformation?
in reporting we add some new objects,how we get the count of the newly added
objects to the report
how can u generate sequence of values in which target has more than 2billion of
records.
how can you connect a client to your informatica server if the server is located at a
different place (not local to the client)?
what is upstream and downstream transformation?
what are testing in a mapping level please give brief explanation
how to load duplicate row in a target
how can u tune u r informatica mappings
how can u approach u r client
which quality process u can approach in ur project
what is difference b/w joiner and union transformation
What is the "File Repository" and how can we use that in the Informatica ? give one
example of the Process ?
how to run the batch using pmcmd command
What is the main data object present in between source and target
What is the term PIPELINE in informatica ?
What is checksum terminology in informatica? Where do you use it ?
how to load only the first and last record of a flat file into the target?
Are all active transformations also passive, or not?
In which transformations can we use a mapping parameter and a mapping variable? And
which one is reusable across mappings: a mapping parameter or a mapping variable?
what are the properties of a workflow? And write a query to select departments with
more than ten employees.
what is file list concept in informatica
how can we update without using an update strategy transformation? What is pushdown
optimization in informatica? Which lookup gives better tuning performance, and why?
what we require for D.modeling?
what transformations are used for Variable port?
how tokens will generate?
.prm with replace .txt is possible?
what is Dynamic look up Transformation? when we use?how we use?
what is inline view? when and why we Use
How to generate the HTML output using Informatica.
how to work with mapplet designer in informatica?
what is full process of Information source to target just like stg to production
and development
what are the differences between power center 8.1 and power center 8.5?
What will happen when Mapping variable and Mapping parameter is not defined or
given? Where do you use mapping
variable and mapping parameter?
how can we load 365 flat file to a single fact table (target) as a history load in
single mapping?
differences between Informatica 7.1 and 8.1?
why u go for dimensions ?
how many tasks are there in informatica ?
what is incremental loading ?
how do u get the first record from 50,000 records ?
in which situations do u go for sequence generator ?
in which situations do u go for scds ?
in which situations do u go for starflake schema ?
in which situations do u go for snowflake schema ?
What is Use of Factless Fact Table ? Why we use Factless Fact Schema in the
Projects?
WHAT IS TEXT LOAD?
HOW DO U IMPLEMENT SCHEDULING IN INFORMATICA?
WHAT IS THE MEANING OF UPGRADATION OF REPOSITORY?
What is the actual work done in Development and in the production depts in building
a data warehouse. Which dept is more
interesting and career oriented
how to remove duplicates with an expression transformation, without using a sorter before it
difference between stop and abort
how can v eliminate duplicate values from look up without overriding sql?
can we update records in target using update strategy without generating primary
key ? explain
if we have duplicate records in a table temp n now i want to pass unique values to
t1 n duplicate values to t2 in single
mapping?how
how do you use a sequence created in Oracle in informatica? Explain with a simple
example.
what is dimension table?
what is fact table?
whether Sequence generator T/r uses Caches? then what type of Cache it is
what is shared Cache. when we will use shared Cache?
explain different types of modeling.
how u know when to use a static cache and dynamic cache in look up transformation.
what is the tracing level? and difference between trace in normal and verbose and
non verbose?
how much memory (size) occupied by a session at run time
how DTM buffer size and buffer block size are related
why can't we put a sequence generator or update strategy transformation before a
joiner transformation?
In real time, what are the scenarios you faced and the tough situations you have
overcome? Explain about sessions.
What is difference between Informatica 6.2 Work flow and Informatica Work flow 7.1
What is a diff between joiner and look up transformation
Without using any transformations how u can load the data into target?
why we need to use unconnected transformation?
where we can static cache,dynamic cache
What is incremental aggregation and how it is done?
How could we generate the sequence of key values without using sequence generator
transformation in the target ??
In a scenario I want to change the dimensions of a table and normalize the
denomralized table which transformation can I
use?
What is meant by the direct and indirect loading options in sessions?
what is the logic will you implement to load data into a fact table from n
dimension tables?
How will u pass the data with out debugger?
How will u find weather dimension table is big in size of a fact table?
explain the scenario for bulk loading and the normal loading option in Informatica
Work flow manager ???
What is Fact less fact table ?
what is the (UTC) unit test cases with the examples in informatica
What's the difference between $, $$, and $$$?
From where we can start or use pmcmd?
In real time scenario where can we use mapping parameters and variables?
By using Filter Transformation,How to pass rows that does not satisfy the
condition(discarded rows) to another target?
Three date formats are there. How to convert these three into one format without
using an expression transformation?
A table contains some null values. How to get "Not Applicable (NA)" in place of those
null values in the target?
One flat file is there which is comma-delimited. How to change that comma delimiter
to any other at run time?
Two tables from two different databases are there, both having the same structure but
different data. How to compare these two tables?
Two flat files are there, with no matching columns. How can you join these two using
a joiner transformation?
In SCD Type 1, what is the alternative to the lookup transformation?
explain how to use the Normalizer transformation for the following scenario:

Source table:
Std_name, ENG, MAT, ART
Ramesh, 68, 82, 78
Himesh, 73, 87, 89
Mahesh, 81, 79, 64

Target table:
Subject, Ramesh, Himesh, Mahesh
ENG, 68, 73, 81
MAT, 82, 87, 79
ART, 78, 89, 64

1) Explain what should be the normalizer column(s) and the GCID column. 2) Also
explain the Ni-or-1 rule.
what is Fact table Partitioning?
How to use the Oracle Analytic functions in Informatica
What are fact tables?
What is update override? What is the difference between SQL override and update override?
How do you connect to a remote server?
What is the name of the port in a dynamic cache which is used for insert/update
operations?
How do you perform an incremental load?
what are the reusable tasks in informatica ?
if a session is failed after a transformation,from where that session will run
again , i.e . from beginning or from that
transformation
what is the function of 'F10' informatica?
How to get the latest data in an SCD?
Two types of data are there: one is mainframe and the other is ASCII format. In
informatica, how can you get both kinds of data into a single format (ASCII)?
in unconnected look up , what are the other transformations , that can be used in
place of that expression transformation ?
what are the transformations that are used in data cleansing ? and how data
cleansing takes place ?
how many Fact Tables and Dimensions Table you have used in the Project? Which one
is loaded first Fact Table or
Dimensions Table into the warehouse? What is the size of the Fact Table and
Dimension Table? what is the size of the
table and warehouse
Is Flat File Contains the Dynamic Cache
what r the transformations that r not involved in mapplet?
What are differences between Informatica 7.1 and 6.1
Can you use one mapping to populate two tables in different schemas
How do you take care of security using a repository manager
if the session fails after 100 records, do we have to start the session again or go
for session recovery?
how does the server recognize it if the session fails after loading 100 records into
the target?
what is the difference between onsite & client site?
what is unit testing & how it is done?
what is metadata?
how do u tune queries?
what is scd?
difference between connected and unconnected lookups?
what happens when a batch fails?
what is parallel querying and what r hints
what are the values that are passed between the informatica server and a stored procedure?
What is version controlling in informatica?
What is scd methodology?
surrogate keys usage in Oracle and Informatica?
What is star and snowflake schema?
what is Kimball and Inmon methodologies?
how do u move the code from development to production?
why do u use shortcuts in informatica?
What is the file name which you need to configure in UNIX while installing
infromatica?
What happens if you increase commit intervals, and if you decrease them?
Explain grouped cross tab?
What is hash partition?
What is the approximate size of data warehouse
What is data quality? How can a data quality solution be implemented into my
informatica transformations, even
internationally?
What is the difference between view and materialised view?
How is Data Models Used in Practice?
What is an MDDB? What is the difference between MDDBs and RDBMSs?
What is active and passive transformation?
Why do we use DSS database for OLAP tools?
What is update strategy, and what are the options for update strategy?
What is staging area?
What is a look up function? What is default transformation for the look up
function?
What is query panel?
How can you define a transformation? What are different types of transformations in
Informatica?
Which kind of index is preferred in DWH?
What is power play plug in?
What is difference macros and prompts?
What is Cognos script editor?
What is IQD file?
What is the Difference between Power Play transformer and power play reports?
What is the capacity of power cube?
What is fact less fact schema?
What is meta data and system catalog?
What is operational data source (ODS)?
What are the Advantages of denormalized data?
After dragging the ports of three sources (SQL Server, Oracle, Informix) to a single
source qualifier, can you map these three ports directly to the target?
What are the circumstances that informatica server results an unrecoverable
session?
How can you recover the session in sequential batches?
How to recover the standalone session?
What is difference between stored procedure transformation and external procedure
transformation?
What are the scheduling options to run a session?
what is incremental aggregation?
What are the new features in Informatica 5.0?
How can u work with remote database in informatica?did you work directly by using
remote connections?
What is power center repository?
What is Performance tuning in Informatica?
what are the transformations that restricts the partitioning of sessions?
What is difference between partitioning of relational target and partitioning of
file targets?
How can you access the remote source into your session?
What is parameter file?
What are the session parameters?
How can u stop a batch?
Can you start a session inside a batch individually?
Can you start a batches with in a batch?
In a sequential batch can u run the session if previous session fails?
What are the different options used to configure the sequential batches?
What is a command that used to run a batch?
When the informatica server marks that a batch is failed?
How many number of sessions that u can create in a batch?
Can you copy the batches?
What is batch and describe about types of batches?
What is polling?
In which circumstances that informatica server creates Reject files?
What are the out put files that the informatica server creates during the session
running?
What are the data movement modes in informatica?
What are the different threads in DTM process?
What is DTM process?
What are the tasks that Loadmanger process will do?
Why you use repository connectivity?
How the informatica server increases the session performance through partitioning
the source?
To achieve the session partition what r the necessary tasks u have to do?
Why we use partitioning the session in informatica?
Which tool you use to create and manage sessions and batches and to monitor and
stop the informatica server?
Define mapping and sessions?
What is meta data reporter?
What are the new features of the server manager in the informatica 5.0?
What are two types of processes that informatica runs the session?
How can you recognize whether or not the newly added rows in the source r gets
insert in the target?
What are the different types of Type2 dimension mapping?
What are the mappings that we use for slowly changing dimension table?
What are the types of mapping in Getting Started Wizard?
What are the types of mapping wizards that r to be provided in Informatica?
What are the options in the target session of update strategy transformation?
What is Datadriven?
What is the default source option for update strategy transformation?
Describe two levels in which update strategy transformation sets?
what is update strategy transformation ?
What are the basic needs to join two sources in a source qualifier?
What is the default join that source qualifier provides?
What is the target load order?
What are the tasks that source qualifier performs?
What is source qualifier transformation?
Why we use stored procedure transformation?
What are the types of groups in Router transformation?
What is the Router transformation?
What is the Rank index in Rank transformation?
What are the rank caches?
How the informatica server sorts the string values in Rank transformation?
Which transformation should we use to normalize the COBOL and relational sources?
What are the Differences between static cache and dynamic cache?
What are the types of look up caches?
what is meant by lookup caches?
Why use the lookup transformation ?
what is the look up transformation?
What are the joiner caches?
What are the join types in joiner transformation?
what are the settings that u use to configure the joiner transformation?
In which conditions we can not use joiner transformation (Limitations of joiner
transformation) ?
What are the differences between joiner transformation and source qualifier
transformation?
How can you improve session performance in aggregator transformation?
Can you use the mapping parameters or variables created in one mapping into another
mapping?
What are the mapping parameters and mapping variables?
What are the unsupported repository objects for a mapplet?
What are the methods for creating reusable transformations?
What are the reusable transformations?
How many ways you create ports?
What are the connected or unconnected transformations?
What are the designer tools for creating transformations?
what is a transformation?
How can you create or import flat file definition in to the warehouse designer?
Which transformation should u need while using the cobol sources as source
definitions?
To provide support for Mainframes source data,which files r used as a COBOL files
Where should you place the flat file to import the flat file definition to the
designer
How Many ways you can update a relational source definition and what are they?
While importing the relational source definition from database,what are the meta
data of source U import?
What are parallel queries and query hints?
Explain reference cursor?
What is difference between Mapplet and reusable transformation?
How many repositories can we create in Informatica?
What is the Hierarchy of DWH?
Explain grouped cross tab?
What is the Difference between DSS & OLTP?
What is source qualifier?
What are mapping parameters and variables in informatica?
what are pre-session, post-session success, and post-session failure commands?
How to identify bottlenecks in sources,targets,mappings,work flow,system and how to
increase the performance?
When do we use dynamic cache and static cache in connected and unconnected look up
transformations?
What are the different threads in DTM process?
How to enter same record twice in the target table,explain?
What is fact table granularity?
What are reusable transformations in how many ways we can create them?
What is confirmed dimension and fact?
What are the two modes of data movement in informatica sever?
What is the difference between OLTP and ODS?
At most, how many transformations and mapplets can we use in a mapping?
How can we eliminate duplicate rows from a flat file? Explain.
If we have look up table in work flow how do you troubleshoot to increase
performance?
can we generate reports in informatica ? How?
How can we join the tables if they don't have primary and foreign key relationship
and no matching port?
Name 4 output files that informatica server creates during session running?
What is the functionality of update strategy?
What are the different tasks that can be created in workflow manager?
What are the new features of informatica 7.1?
Explain the flow of data in Informatica?
Explain one complicated mapping?
what are the real time problems generally come up while doing or running mapping or
any transformation?
What is the exact use of the 'Online' and 'Offline' server connect options while
defining a workflow in Workflow Manager?
what is the difference between Informatica 7.1 and Ab Initio?
What is Micro Strategy? Why is it used for?
Two relational tables are connected to SQL Trans,what are the possible errors it
will be thrown?
what are cost based and rule based approaches and what is the difference?
what is mystery dimension?
Explain about the concept of mapping parameters and variables ?
Comment on significance of oracle 9i in informatica when compared to oracle 8 or
8i?
Can you generate reports in Informatcia?
If you have done any modifications for a table in the back end, does it reflect in
the informatica warehouse or mapping?
How to recover sessions in concurrent batches?
Explain about perform recovery?
What are Dimensions and various types of Dimensions?
What is Code Page Compatibility?
What are Target Options on the Servers?
What is tracing level and what are the types of tracing level?
What are the types of metadata that stores in repository?
Define informatica repository?
Can you copy the session to a different folder or repository?
What is aggregate cache in aggregator transformation?
what is a time dimension? give an example?
Discuss the advantages & Disadvantages of star & snowflake schema?
What is the difference between Normal load and Bulk load?
What is the procedure to load the fact table.Give in detail?
What is the use of incremental aggregation?
why dimension tables are denormalized in nature ?
What is the difference between Power Center and Power Mart?
what are the enhancements made to Informatica 7.1.1 version when compared to 6.2.2
version?
what is the exact meaning of domain?
How do you handle decimal places while importing a flat file into informatica?
What is data merging,data cleansing,sampling?
How to import oracle sequence into Informatica?
what is worklet and what use of worklet and in which situation we can use it?
what happens if you try to create a shortcut to a non-shared folder?
If you want to create indexes after the load process which transformation you
choose?
Where is the cache stored in informatica?
what is Partitioning? where we can use Partition?
what are the different types of transformation available in informatica and what
are the mostly used
what is surrogate key?In your project in which situation u has used?explain with
example?
How to get duplicate records into one table and the other records into another table?
why sorter transformation acts as a active transformation
What is data sampling in informatica
How many Integration Service does Informatica Server contain?
what is optimization in informatica
what is pushdown optimization in informatica
what is data model in data warehouse
what is latest version of informatica power center
What is Sql override and lookup override
How to update a target table without primary key in informatica
how do you read xml files in informatica
In real time, what are the scenarios you faced and the tough situations you have
faced in your job?
generally how many fact and dimension tables the banking project contains? please
send me
what are homogeneous sources and heterogeneous sources in informatica
What is the Architecture of informatica
How to create a session in informatica
what is the real scenario to use sql transformation ?
How to create a source definition in informatica
What is the role of integration services in informatica
What is SCD(slowly changing dimensions) in informatica and types of SCD
what is dense rank in informatica
What is mapplet and types of mapplets
What is worklet or worklets in informatica and use?
What is precision and scale in informatica
what is bottleneck in informatica?
How to identify the bottlenecks in informatica
How to eliminate/remove bottlenecks in informatica
What is custom transformation in informatica
What are the types of loading in informatica
What are decode function in informatica
what are the dimension and fact tables used in finance domain
what is staging area in informatica
i want to sample mappings,dimension and fact tables for banking project on
informatica
What is active transformation
What are Additive facts
why informatica
What are Fact And Fact Table Types
what is junk dimension with example
Why and when we use snowflake schema
What are Semi-Additive facts
What are Non-Additive facts
what is look up mapplet?
How do you handle two sessions in Informatica
What is intermediate task in informatica? In what situation this task will be
executed?
How to filter first or last 10 records using filter transformation?
what is sort key how it functions
how to create materialized view
COMMON INTERVIEW QUESTIONS AND ANSWERS
why do you want to work for us?
Why do you consider yourself suitable for this post?
what is your greatest strength
Why Do You Want To Work At Our Company?
What motivates you,Money or Success ?
why you choose our company?
what is your job strength ?
what is your ability?
briefly describe your ideal job?
what are your short term goals?
Why do you think you are the best for this post?
why u want to join this company?
Tell me something about your self
Why should we hire you?
What Are your strengths and weaknesses?
where do you see yourself in five(5) or ten(10) years from now?
Why did you leave your (current)present/previous job/Company?
what exactly do you look for a job?
what is your expected salary
how to impress the interviewer?
Describe any happy moment in your life?
What is your philosophy of life?
Why we should not hire you?
Why,you would like to join this organization?
What goals do you have in your career
what makes you stand out among other candidates?
Tell us about a time when you failed to meet a deadline. What were the
repercussions?
If you get a Opportunity to go with a reputed company which is ready to pay more
than what we r going to pay here.Will you
quit the job from here or what?
Can you work well under deadlines or pressure?How?
what is your goal of life?
WHAT IS YOUR SALARY EXPECTATION?
why do you want to work here?
what is your memorable day
tell me about your ideal person and why?
What would be your advantage in the future if you join now?
How much salary do you want?
What types of people do you get along with and why.
Why did you choose this career?
How do you plan to achieve these goals?
how do you manage your stress
Why do you think you do well?
What are things most important to you in your job?
What do you love and hate?
tell me some thing about global warming
what kind of salary are you looking for?
What is your favorite past time?
How do you see yourself?
Describe how your experience, qualifications and competencies match the position
for which you are applying
What is more important to you: the money or the work?
why did you apply for this job
What is your future Plans?
Why You Are Looking For A New Job
What job would you like to have in five years time?
what will you do if you dont get this job
What motivates you to do a good job?
Sometimes a person younger to you might be in a better position.Are you comfortable
taking directions from him?
Describe yourself as a third person?
what is the best way to start with the interview
what are the challenges you faced in previous job?
Tell me a situation you solve with creativity
tell about your favorite game
What have you learned from your previous jobs?
Describe your work ethic?
Why have you chosen the role you are applying for?
who is your inspiration and why
Briefly describe your duties and accomplishments
What is your greatest achievement?
Do you consider yourself successful?
Give me an example of any major problem you faced and how you solved it
What is your most significant experience?
how will u design a pipe
Explain how you would be an asset to this organization
Speak about city where you live
What would you consider as your biggest achievement and why?
Tell me about a time when you were able to identify a problem and resolve it before
it became a major issue
How long do you think it would be before you will make a significant contribution
to the team/company?
what do you prefer to be after 3 years?
Tell me something about your happiest day of your life.
how can we justify yourself that you are fit for this job?
AT WHAT YOU ARE THE BEST?
What job position/s are you currently holding with your current employer?
describe a difficult situation?
tell about your most memorable movement
How would you spend a million pounds?
Tell me something about your worst day of your life.
How would you describe your ideal job? (Sample answer: one matching my interests, so
that I can work with full devotion and fulfill my professional ethics.)
why are you interested in our company.
tell me about a situation when your work was criticized?
what is your ideal boss?
Why cannot you clear the ijp(internal job posting)
what attracted you to this position?
what about your job profile
can u speak around for 5mins about {Bangalore/any} city?
What inspires you
tell something about your father?
Tell me about your family background.
What will be your expected Salary? Can you justify your Salary?
what is your career aspect & if they asked in general what should i answer?
What are your strengths and areas of improvement?
how long do you think you would want to be in the employment of the organization?
what is positive and negative in you?
what influenced you to choose this career??
Tell me about your dream job.
what motivates you and de-motivates you
What have you done to show initiative in your career?
what is the negative & positive thing in you?
How do you prioritize your personal matter over work?
What is your principles and values in life
Describe about your ideal company
what was your toughest/hardest decision you ever had to make?
what makes you the suitable candidate for this position
Have you worked with other on team endeavors
Tell me about how you have left a position better than you found it
what is your passion in life?
When do you get angry?
what makes you dissatisfied about your current job?
what are you doing before this job
What do you consider as the biggest challenge you have encounter in the work place
Please explain to us one conflict situation that you have experienced in your work
life and how did you solve it
why do you want to come to a lower pay scale?
Tell me about an experience in which you had to use tact?
Who are our major competitors and what differences do you notice in our products?
How do you determine or evaluate success? According to your definition of success,
how successful have you been so far?
What activities have you done in the past?
What kind of person are you?
What is Your organizational structure?
what job profile are you interested in?
What were your greatest accomplishments and challenges?
if you were hiring a person for this job,what would look for?
As a MBA marketing student why do you want to pursue your career in banking sector
How to describe our city(Hyderabad) in a topic round during the interview time?
what is your dream vacation
Why do you think you are a qualified candidate for the job?
what are your hobbies?
how could i rate my self about my knowledge in programming
what is your name
how would you relate your key competencies to this position?
What is your career plan?
what would you want to be paid.
are you the right person for this job?
what's a perfect introduction?
How do you handle pressures and deadlines and multi-tasking?
how do you plan your typical day at work
how to answer for gap between education and work
what are your job responsibility in your place of work?
tell me about Bangalore
what can you contribute to this company?
How much do you expect if we offer you this position
how do u rate yourself on the scale of one to ten?
what is the expansion of Philips?
how many degrees of comparisons in English and explain about it?
how did u spend your yesterday?
suppose you are asked a question concerning salary expectation, how do you tackle
that question?
what are you doing
why do you want a change?
in which field u r working now
How are you different from others?
dressing style for interview?
What impact would you make in our organization?
What is your aim
how to give self introduction
what is your weakness and why?
Describe yourself
what is your edge from the other applicants?
why would u like to choose this profession?
What attracted you to this job?
In your last job,what would be the one thing that your peers most disliked about
you?
What qualities do you think make someone successful in business?
Give us details of your present Employment Status.
How soon can you travel down to any location posted to?
how can you hire me when i have no experience?but i have ability to work for your
company
in your opinion, how may these weaknesses be addressed?
why should we give you this job?
why you would like to be considered for this role
how do you see yourself after two years from now
why you have left this job?
what are your current weaknesses?
why do you think you are fit for this job,you don't have experience,i don't think
you are fit for a job
What do you expect for your salary?Is this negotiable?
How would your friends describe you?
what is your favorite personality
Why are choosing this company instead of other company?
WRITE AT LEAST FOUR STRENGTHS
how long will u work here
what do you consider to be your most significant achievement?what difficulties did
you encounter in realizing the
achievement and how did you overcome them
how do you work under pressure and solve your problems
why do you want to join us?
How creative are you?Give an example.
Why should you be given the job?
What are the benefits of being a graduate of Associate of computer Studies?
What courses have you liked most?Least? Why?
What do you do in your spare time?
who serves as your inspiration while working?
how do you find your salary?
Can you give an advice to those students who are technician,taking computer
programs?
What does "success" mean to you?
when the interviewer asks, what is the Architecture of your project what we have to
answer...
plz provide the answer with the example
Tell me about your current job and responsibility
Why did you left the previous organization
what is the best dress code for interview
what is faithfulness in your point of view
Give me your Self introduction
why do you fit for the job
what you want to become in your life?
can you briefly describe your current position, its duties , and responsibilities?
what is your attitude towards adversity and temptation?
what sort of things motivates you?
Mention briefly organisation structure of the company indicating your position in
the hierarchy and the levels above and
below you
what examples can you give that emphasize your interest in this kind of work
What shall i do if i do not like some people at my office place . how shall i
handle this kind of situation ?
how long would you like to stay with this company?
how do you rate yourself as a professional?
tips for successful interviews
how did you spend your last weekend
What is the difference between doing a good job and doing a great job?
what is the best possible answer of "why do you want to change"
How shall i react if i do not like some people in our office
why do like to work with us
what is your daily routine
how can i explain my project
How long would you expect to work for us if hired?
what motivates you ,money or work?
Could you please introduce yourself?
What attributes can you bring to this position?
what initiative you did in last company as you worked for 3 years?
Hi, Recently i have resigned a job. what i have to tell for the new employer that
if he asks why you have resigned your
previous job.i have worked there for 10 months.
WHY DO YOU WANT TO LEAVE YOUR PRESENT JOB?
what value you will add to our company if you are selected
why looking for job change within 6 months
what makes you different from other candidates?
WHAT IS THE DIFFERENCE BETWEEN HARD WORK AND SMART WORK?
what kind of salary do you need?
Where do you want to be in five years?
why are you leaving this position?
how to perform first round in interview
What are the values & beliefs that have guided your life so far and how do you see
them influencing your future?
what do you expect to learn from this job
what motivates you to work hard?
Please tell us briefly about a situation in your life when you have had to stretch
yourself to meet a new situation or higher
standards?
give me directions on how to get here in this office from where you currently
reside
what would you do if your boss needs several things done at the same time?
introduce your self
did you leave your job voluntarily or were you fired?
Why are you looking for a new position (if you are currently employed)?
How do you feel about the same job or work in progress right now.
what are challenges and benefits from effective communication in the workplace
Mention briefly organisation structure of the company
Why did you leave your earlier job without any new job in hand?
What are your short term and long term career goals?
what is personality
What will you do if your senior officer do not mind your work ?
what are some things about global warming
how you spend your yesterday from morning to till evening?
how long u will stay for us
tell me about a problem you solved in unique way

-------------------------------------------------------------------------

37597739-28531326-Informatica-Senarios-1.pdf

A: In an expression transformation, create an output port and in the expression
window write emp_id||empname.
2. How to join a flat-file and a relational source without using Joiner, Update
Strategy, or Lookup transformations? Is it possible? If yes, how?
A: Not possible.
3. I have a relational source, and I am trying to populate a flat-file target with
one column for the daily date, which is SYSDATE. I want to populate the SYSDATE
column in DD/MM/YYYY format. Kindly provide a solution; note that my target is a flat
file.
A: In an expression transformation create one output port and write:
TO_CHAR(SYSDATE, 'DD/MM/YYYY'). Connect this port to the target.
4. The source has duplicate records with id and name columns, values: 1 a, 1 b, 1 c,
2 a, 2 b. The target should be loaded as 1 a+b+c (or 1 a||b||c). What transformations
should be used for this?
A: We need to use Sorter, Expression, and Aggregator transformations:
1. Sort by id.
2. Take two variable ports, one for id and one for name, and keep comparing the
stored (previous) id with the current id: if previous id = current id, then
(variable name)||name, otherwise only name. Assign the variable name to an output port.
3. Use an aggregator grouping by id and take LAST, or MAX(LENGTH(name)), to get the
result.
5. How many repositories can you create in informatica?
A: In Informatica 8.6.0, multiple repositories can be created under a node, and the
domain can have multiple nodes.
6. The Router transformation is active, but some people say it is sometimes passive.
What is the reason behind that?
A: The Router has a special feature, the default group. A single input row can
satisfy more than one group condition (or fall through to the default group), so the
number and distribution of output rows can differ from the input; that is what makes
the Router active. If the groups are set up so that every row goes to exactly one
place, it effectively behaves passively.
7. I want to run an informatica workflow after completion of an Oracle procedure.
That procedure is not run through informatica and can be run at any time in the
database. Informatica is in a Windows environment. Is it possible? If yes, please
explain.
A: This can be done with UNIX. Create a shell script which first executes the stored
procedure or package (and checks for its completion), and after that uses the pmcmd
command in the same script to start the workflow.
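For illustration only, the script boils down to two commands; the sketch below wraps
them in Python for concreteness, and every name (connection string, integration
service, domain, folder, workflow) is a hypothetical placeholder:

import subprocess

# 1) run the Oracle procedure first, e.g. through SQL*Plus
subprocess.run(["sqlplus", "-s", "user/pwd@db", "@run_procedure.sql"], check=True)

# 2) only after it completes, start the workflow with pmcmd
subprocess.run(["pmcmd", "startworkflow",
                "-sv", "Int_Service", "-d", "Domain_Name",
                "-u", "user", "-p", "pwd",
                "-f", "Folder_Name", "-wait", "wf_load_target"], check=True)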
8. In a single mapping, more than 500 sources (legacy, VSAM, relational) will be
loaded into only one target. Whenever I retrieve the data (any record) from the
target, I need to find out which source that record belongs to. How?
A: Keep one Source Qualifier transformation per source table rather than a single
shared Source Qualifier, and after every Source Qualifier add an Expression
transformation that stamps each row with a source-identifying flag.
9. What is the difference between a shortcut and a reusable object?
A: A shortcut is created by assigning 'Shared' status to a folder within the
Repository Manager and then dragging objects from this folder into another open
folder.
10. What is the 'PowerCenter Pushdown Optimization Option' in INFORMATICA? (IMP)
A: Pushdown optimization is used to push complex transformation logic down to the
database level. This reduces the complexity of the PowerCenter mappings and increases
performance.
11. If the number of source columns changes every time (first time it is 10, next
time it is 20, and so on), how do you deal with it without changing the mapping?
A: If I understand this question properly, it says that the number of "source"
columns keeps changing. I do not agree with this scenario; you won't find such a
design in data warehousing. A DWH takes data from OLTP systems and, after performing
extract and transform operations, finally loads it into targets. As the question
stands, it really concerns the OLTP design, and no OLTP or database design principle
suggests a varying number of columns. So do not get confused by such trivial
questions: DWH is a disciplined subject and follows good standards. Go through the
concepts first and you will get a clear picture of DWH.
12. "Change cache" in Informatica?
A: - dynamic cache
`
13. Can we create an index and drop an index on an existing table while using
informatica?
A: I know 4 ways in INFORMATICA:
1) Source Analyzer window (source table, using key ports (enable, disable))
2) Source Qualifier transformation (SQL override)
3) Target override
4) Pre SQL, Post SQL
14. If we are using an aggregator but forget to mention the group by port, what will
be the output?
A: If we forget to enable GROUP BY on any of the ports, the aggregator will pass only
the last row of the table to the next transformation.
15. There are n flat files of exactly the same format placed in a folder. Can we load
these flat files' data one by one into a single relational table with a single
session?
A: Use the source type "Indirect" and, as the source file name, a list file
containing the names of all the n flat files to be read.
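For example, with a hypothetical list file sales_files.lst given as the source file
name, the session reads each listed data file in turn:

/data/incoming/sales_day01.txt
/data/incoming/sales_day02.txt
/data/incoming/sales_day03.txt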
15. Why do we go for an update strategy transformation in SCD rather than using the
session properties?
A: With the session property "Treat source rows as" (INSERT, UPDATE, DELETE, REJECT)
we can handle only a single flow. An SCD has to insert and update at the same time,
which is possible only with an Update Strategy transformation; using it we can build
the SCD mapping.
16. How many mapplets have you created? And what logic was used?
A: We can create any number of mapplets for one mapping; there is no limit. Every
mapplet can contain one or more pieces of logic; there is no limit on the logic
either.
17. Is a lookup condition nothing but a join condition? What type of join is it by
default? Using the lookup condition, how many types of relational conditions can we
make?
A: As per my understanding, a lookup always behaves like a left outer join: it
returns all matched records as well as records that have no match in the lookup
table. For those unmatched records the returned value is null, in the case of an
unconnected lookup transformation.
18. What is Target Update Override? What is it used for?
A: When we don't have primary keys defined at the database level and still need to
update the target from Informatica, we define keys at the Informatica level and use
the update override in the target properties. This way we can update the table.
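For illustration, on a hypothetical target T_EMP keyed on EMPLOYEE_ID at the
Informatica level, the update override would look something like this, where :TU
references the ports of the target definition:

UPDATE T_EMP SET SALARY = :TU.SALARY, DEPT_NO = :TU.DEPT_NO
WHERE EMPLOYEE_ID = :TU.EMPLOYEE_ID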
19. Why should a sequence generator not be directly connected to a joiner
transformation?
A: A sequence generator is mainly used to generate unique ids dynamically; we cannot
join those numbers against a column in another table. More importantly, a joiner is
an active transformation, meaning it can alter the number of rows, so if you connect
a sequence generator to a joiner the resulting sequence will not come out properly.
20. From the source 100 rows are coming; the target has 5 million rows. Which option
is better to match the data: 1. Joiner 2. No cache 3. Static 4. Dynamic?
A: Here a joiner gives better performance. We join the two sources, making the small
source the master, so only 100 comparisons are driven against the target, which is
very fast. With a static or dynamic cache we would have to look up on the target,
which is very large at 5 million rows, so caching would take much more time.
21. How to create a mapping for this?
id    date
101   2/4/2008
101   4/4/2008
102   6/4/2008
102   4/4/2008
103   4/4/2008
104   8/4/2008
O/P: the output should have only one row per id, with the min(date).
A: - It is simple with an Aggregator transformation: first group by ID, then take MIN(date) in the same Aggregator.
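The SQL-override equivalent, for reference (table and column names are illustrative, since DATE is a reserved word):
SELECT id, MIN(date_col) AS min_date
FROM   src_table
GROUP  BY id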
Informatica Scenarios-4
22. What are set operators in Oracle?
A: - UNION, UNION ALL, MINUS and INTERSECT
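A quick sketch of the less common pair (table names are illustrative): MINUS returns the rows in the first result set that are not in the second, while INTERSECT returns only the rows common to both:
SELECT id FROM table_a
MINUS
SELECT id FROM table_b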
23. How can I schedule an Informatica job with the UNIX cron scheduling tool?
A: - We can do this using the crontab file in UNIX: schedule a command there that starts the PowerCenter job. Alternatively, we can use the "at" command in UNIX to schedule a one-off run.
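A minimal crontab sketch (service, domain, credentials, folder, and workflow names are illustrative, and pmcmd is assumed to be on the PATH) that starts a workflow every day at 01:30:
30 1 * * * pmcmd startworkflow -sv INT_SVC -d DOMAIN_DEV -u admin -p admin -f SALES_FLD wf_daily_load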
24. How can I generate sequence numbers for a target table (without using a Sequence Generator or Rank transformation)?
A: - Use a database sequence and call it from a stored procedure or a dummy lookup query. Or use an Expression transformation: create two ports, one variable port initialised to 0 and one output port, and write the logic to increment the variable for every row (v_seq = v_seq + 1; o_seq = v_seq).
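A minimal sketch of those Expression ports (port names are illustrative); variable ports are evaluated top-down for every row and retain their value from the previous row:
v_seq (variable port) = v_seq + 1
o_seq (output port)   = v_seq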
25. Can anyone explain step by step how an SCD works?
A: - It selects all rows and caches the existing target as a lookup table. It compares the logical key columns in the source against the corresponding columns in the target lookup cache, and compares the remaining source columns against the corresponding target columns where the keys match. It flags new rows and changed rows, creating two data flows: one for new rows and one for changed rows. It generates a primary key for new rows and inserts them into the target, and it updates changed rows in the target, overwriting the existing rows.
26. When we load flat files into target tables, how do we identify duplicates, and where do we load the duplicate records for further reference? How do we do change data capture? Is this the slowly changing dimension technique?
A: - One idea: after the Source Qualifier, use an Aggregator that groups by the key column and computes a COUNT, then a Router with two groups: count = 1 (unique rows) and count <> 1 (duplicates). From the second Router group, load a separate target table that keeps the duplicates for further reference.
Is this the slowly changing dimension technique?
Change data capture (CDC) means picking up only the new inserts and updates based on the data loading time; applying those inserts and updates to a dimension is the slowly changing dimension technique.
27. I have a table called Team containing name and DOJ in Oracle. When I retrieve the table in Informatica, DOJ shows the date and time; I want to get only the date (MMDDYYYY).
A: - TO_CHAR (DOJ_port, 'MMDDYYYY'). Note this returns a string; to stay in the date datatype with the time stripped, TRUNC (DOJ_port, 'DD') can be used instead.
28. How is the Union transformation an active transformation?
A: - The Union transformation works like SQL UNION ALL: it combines the rows from all input groups without removing duplicates, so 10 rows from table A and 10 rows from table B give 20 output rows even if 3 of them match. It is classified as active because it merges multiple input flows into one output flow, so the row count of the output does not correspond one-to-one with any single input.
29. How can we load the first and last record from a flat file source to the target?
A: - After the Source Qualifier, branch into a Rank transformation and an Aggregator. In the Rank properties set the number of ranks to 1, so it returns one row (the first); in the Aggregator do not group by any column, so it returns the last row. We need 2 target tables (one for the first record, one for the last); if you merge the two flows with a Union transformation, one target table is enough.
Informatica Scenarios-5
30. Difference between mapping parameters, session parameters, and database connection session parameters? Is it possible to create all 3 parameters at a time? If possible, which one fires first?
A: - We can pass all three types of parameters using a parameter file, and we can declare all of them in one file.
A mapping parameter is set at the mapping level for values that do not change from session to session, for example tax rates.
A session parameter is set at the session level for values that can change from session to session, such as database connections for the DEV, QA and PRD environments.
Database connection session parameters can be created for all input fields of connection objects, for example username, password, etc.
It is possible to have all three kinds of parameters at a time. The order of precedence is workflow, then session, then mapping (wf/s/m), as in the parameter file sketch below.
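A minimal parameter file sketch (folder, workflow, session, connection, and parameter names are illustrative); the section heading scopes the values to one session:
[SALES_FLD.WF:wf_daily_load.ST:s_m_load_sales]
$DBConnection_SRC=Ora_Dev
$$TAX_RATE=0.08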
31. How to run two workflows (not sessions) sequentially; what is the process?
A: - The best way is obviously to run WF1 and then start WF2 using a PMCMD command at the end of WF1 (as a post-session task). If you absolutely want to ensure that the second workflow starts only after the graceful completion of WF1, simply add a Command task for the pmcmd call and use the link condition to validate that the previous task completed properly.
Or
We can run the workflows sequentially by writing a ksh shell script or batch command that starts them one after the other, and call it from a Command task.
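A minimal sketch of the pmcmd call used in such a Command task (service, domain, credentials, folder, and workflow names are illustrative); the -wait flag makes the call block until the started workflow completes:
pmcmd startworkflow -sv INT_SVC -d DOMAIN_DEV -u admin -p admin -f SALES_FLD -wait wf_second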
32. Which is the costliest transformation (costly meaning occupying more memory)?
A: - The Lookup transformation, since it also maintains the existing (looked-up) data in cache memory.
33. Which gives more performance, fixed-width or delimited files? And why?
A: - Fixed width: because there are no delimiters to check, so the performance increases.
34. How to list the top 10 salaries without using a Rank transformation?
A: - Use sorter --> expression --> filter:
1) Sorter in descending order; 2) a Sequence Generator connected to the Expression to generate a sequence number; 3) a Filter keeping rows whose sequence number is less than or equal to 10.
35. How to extract original records to one target and duplicate records to another target?
A: - Source -> SQ -> Sorter -> Expression -> Router (or 2 Filters) -> Targets
36. Is it possible to have 5 sources and 5 targets in a single mapping?
A: - Yes, a single mapping can have 5 sources and 5 targets; we need to arrange the target load plan if dependencies exist.
37. Without using a Lookup or Sequence Generator, how to generate a sequence?
A: - Using a counter variable port in an Expression transformation (see the port sketch under question 24).
38. How to join 2 tables without using any condition?
A: - Add a dummy column in an Expression (or the Source Qualifier) for both sources and use that column in the join condition, as sketched below.
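A minimal sketch (port names are illustrative): each pipeline gets a constant output port, so the Joiner matches every master row with every detail row:
In each Expression transformation: DUMMY (output port) = 1
Joiner condition: DUMMY = DUMMY1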
Informatica Scenarios-6
39. Without a source, how to insert records into a target?
A: - Without a source you cannot create a mapping.
39. How will you remove duplicate records from a flat file without using a Sorter?
A: - Use an Aggregator transformation, group by all ports, create one port to check the COUNT, and pass the results accordingly to the target tables.
40. How to join two flat files using the Joiner transformation if there is no matching port?
A: - Connect the Source Qualifiers of the two flat files to two different Expression transformations, create a dummy output port in both, and use those ports to connect the Joiner (as in question 38).
41. What is the difference between Oracle performance and Informatica performance? Which performance is better?
A: - Oracle performance deals with the sources and targets; Informatica performance deals with the transformations. For an efficient result, both are important.
42. How to run a batch using the pmcmd command?
A: - Using a Command task in the workflow (see the pmcmd sketch under question 31).
43. Suppose you have 2000 records in one table and 12000 in another; which one will you consider as master and which as detail?
A: - We consider the one with the lesser number of records (2000) as the master: the master side is cached, so with this approach less data has to be cached and the performance improves.
44. What is the target load order?
A: - You specify the target load order based on the source qualifiers in a mapping. If you have multiple source qualifiers connected to multiple targets, you can designate the order in which the Informatica server loads data into the targets.
45. Explain the use of the Update Strategy transformation?
A: - It flags source records as INSERT, DELETE, UPDATE or REJECT for the target database; the default flag is INSERT. This is a must for incremental data loading.
Or
This important transformation is used to maintain history data, or just the most recent changes, in the target table.
We can set or flag the records at two levels:
1) Within a session: when you configure the session, you can instruct the Informatica server to treat all the records in the same way.
2) Within a mapping: we use the Update Strategy transformation to flag records as insert, update, delete or reject, as in the sketch below.
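A minimal Update Strategy expression sketch (the lookup port name is illustrative); DD_INSERT and DD_UPDATE are the built-in row-flag constants:
IIF(ISNULL(lkp_target_key), DD_INSERT, DD_UPDATE)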
46. This is a scenario in which the source has 2 columns with the rows 10 A, 10 A, 20 C, 30 D, 40 E, 20 C,
and there should be 2 targets: one for the duplicate values and another for the distinct rows.
T1 (duplicates)   T2 (distinct)
10 A              10 A
20 C              20 C
                  30 D
                  40 E
Which transformations can be used to load data into the targets?
Informatica Scenarios-7
A: - 1. Source - Source Qualifier - Target, with the Select Distinct option checked (distinct rows only)
2. Source - Source Qualifier - Aggregator - Target, grouping by empno
3. Source - Source Qualifier - Sorter - Target, with the Select Distinct option checked
4. Source - Source Qualifier - Expression - Target, with the Source Qualifier sorted by empno and the Expression flagging repeats (ports in order):
in_empno (input)
v_flag (variable) = IIF(in_empno = v_prev_empno, 'Y', 'N')
v_prev_empno (variable) = in_empno
o_flag (output) = v_flag
A Router on o_flag then separates the duplicate rows from the distinct ones.
5. Source - Source Qualifier - Rank - Expression - Target
47. What is a parameter file?
A: - When you start a workflow, you can optionally enter the directory and name of
a parameter file. The Informatica Server runs the workflow using the parameters in the file
you specify. For UNIX shell users, enclose the parameter file name in single quotes:
-paramfile '$PMRootDir/myfile.txt'
48. Difference between Rank and Dense Rank?
Rank:
1, 2, 2, 4, 5 (the 2nd and 3rd values tie, so both get rank 2 and the next rank jumps to 4)
The same rank is assigned to equal totals/numbers, and the following rank is based on position, skipping the tied slots. Golf usually ranks this way.
Dense Rank:
1, 2, 2, 3, 4 (the 2nd and 3rd values tie, so both get rank 2 and the next rank continues with 3)
The same rank is assigned to equal totals/numbers/names, and the next rank follows in serial order with no gaps.
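For a relational source, both can be seen side by side with the analytic functions (table and column names are illustrative):
SELECT ename, sal,
       RANK()       OVER (ORDER BY sal DESC) AS rnk,
       DENSE_RANK() OVER (ORDER BY sal DESC) AS dense_rnk
FROM   emp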
49. What is the method of loading 5 flat files having the same structure into a single target, and which transformations can be used?
Two methods:
1. Put all the files in one directory, then use the file list (indirect file) concept; don't forget to set the source filetype to Indirect in the session.
2. Use a Union transformation to combine the multiple input files into a single target.
50. Suppose a session is configured with a commit interval of 10,000 rows and the source has 50,000 rows. Explain the commit points for source-based commit and target-based commit. Assume appropriate values wherever required.
Source-based commit commits the data into the target based on the commit interval, so for every 10,000 source rows it commits into the target.
Target-based commit commits the data into the target based on the buffer size of the target, i.e. it commits whenever the target buffer fills. If we assume the buffer holds 6,000 rows, it commits the data after every 6,000 rows.
----------------------------------------------------------------
186656005-Informatica-Interview-Questions-Scenario-Based.pdf
INFORMATICA INTERVIEW
QUESTIONS, 25 Scenarios/Solutions
Informatica Interview Questions [ Version 1.1 ]
Compiled by - mahender, uma
2/1/2013
Informatica Scenarios
Scenario1:
We have a source table containing 3 columns: Col1, Col2 and Col3. There is
only 1 row in the table as follows:
Col1 Col2 Col3
-----------------
a b c
There is target table contain only 1 column Col. Design a mapping so that the
target table contains 3 rows as follows:
Col
-----
a
b
c
Solution:
Without using a Normalizer transformation:
Create 3 Expression transformations exp_1, exp_2 and exp_3 with 1 port each. Connect Col1 from the Source Qualifier to the port in exp_1, Col2 to the port in exp_2, and Col3 to the port in exp_3. Make 3 instances of the target and connect the port from exp_1 to target_1, from exp_2 to target_2, and from exp_3 to target_3.
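If the source is relational, a SQL override in the Source Qualifier is an alternative sketch (table name is illustrative) that unpivots the row without extra transformations:
SELECT Col1 AS Col FROM src_table
UNION ALL
SELECT Col2 FROM src_table
UNION ALL
SELECT Col3 FROM src_table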
Scenario 2:
There is a source table that contains duplicate rows. Design a mapping to load all the unique rows into 1 target while
all the duplicate rows (only 1 occurrence each) go into another target.
Solution:
Bring all the columns from source qualifier to an Aggregator transformation. Check
group by on the key column. Create a new
output port COUNT_COL in aggregator transformation and write an expression COUNT
(KEY_COLUMN). Make a router
transformation with 2 GROUPS: Dup and Non-Dup. Check the router conditions
COUNT_COL>1 in Dup group while
COUNT_COL=1 in Non-dup group. Load these 2 groups in different targets.
Scenario 3:
There is a source table containing 2 columns Col1 and Col2 with data as follows:
Col1 Col2
------ ------
a l
b p
a m
a n
b q
x y
Design a mapping to load a target table with following values from the above
mentioned source:
Col1 Col2
------ ------
a l, m, n
b p, q
x y
Solution:
Use a sorter transformation after the source qualifier to sort the values with col1
as key. Build an expression transformation
with following ports (order of ports should also be the same):
1. COL1_PREV: It will be a variable type port. Expression should contain a variable
example: VAL
2. COL1: It will be Input/output port from Sorter transformation
3. COL2: It will be input port from sorter transformation
4. VAL: It will be a variable type port. Expression should contain Col1
5. CONCATENATED_VALUE: It will be a variable type port. Expression should be DECODE (Col1, Col1_prev, CONCATENATED_VALUE||','||Col2, Col2)
6. CONCATENATED_FINAL: It will be an output port connecting the value of
CONCATENATED_VALUE.
After the expression, build an Aggregator transformation. Bring ports Col1 and CONCATENATED_FINAL into the aggregator and group by Col1. Don't give any expression; this effectively returns the last row from each group.
Connect the ports Col1 and CONCATENATED_FINAL from aggregator to the target table.
<UMA> this can be achieved by using a database stored procedure also, but that might end up parsing the statement every time, so it is better to go with the above-mentioned solution.
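If the source database supports it (Oracle 11g and later, for example), a SQL-override sketch gives the same result (table name is illustrative):
SELECT Col1, LISTAGG(Col2, ', ') WITHIN GROUP (ORDER BY Col2) AS Col2
FROM   src_table
GROUP  BY Col1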
Scenario 4:
Design an Informatica mapping to load first half records to 1 target while other
half records to a separate target.
Solution:
You will have to assign a row number to each record. To achieve this, either use Oracle's pseudo-column ROWNUM in the Source Qualifier query or use the NEXTVAL port of a Sequence Generator. Let's name this column ROWNUMBER.
From the Source Qualifier, create 2 pipelines:
From Source Qualifier, create 2 pipelines:
First Pipeline:
Carry first port Col1 from SQ transformation into an aggregator transformation.
Create a new output port "tot_rec" and give the
expression as COUNT(Col1). Do not group by any port. This will give us the total
number of records in Source Table. Carry
this port tot_rec to an Expression Transformation. Add another port DUMMY in
expression transformation with default value 1.
Second Pipeline:
From the SQ transformation, carry all the ports (including the additional ROWNUMBER port generated by ROWNUM or the Sequence Generator) to an Expression Transformation. Add another port DUMMY in the expression transformation with default value 1.
Join these 2 pipelines with a Joiner Transformation on the common port DUMMY. Carry all the source table ports and the 2 additional ports tot_rec and ROWNUMBER to a Router transformation. Add 2 groups in the Router: FIRST_HALF and SECOND_HALF. Give the condition rownumber <= tot_rec/2 in FIRST_HALF and rownumber > tot_rec/2 in SECOND_HALF. Connect the 2 groups to 2 different targets.
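A sketch of the Source Qualifier override for the ROWNUM variant (table name is illustrative):
SELECT ROWNUM AS ROWNUMBER, s.* FROM src_table s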
Scenario 5:
A source table contains emp_name and salary columns. Develop an Informatica mapping
to load all records with 5th
highest salary into the target table.
Solution:
The mapping will contain following transformations after the Source Qualifier
Transformation:
1. Sorter : It will contain 2 ports - emp_name and salary. The property 'Direction'
will be selected as 'Descending' on key
'Salary'
2. Expression transformation: It will have 6 ports as follows -
a> emp_name: an I/O port connected directly from the previous Sorter transformation
b> salary_prev: a variable type port; give a variable name, e.g. val, in its Expression column
c> salary: an I/O port connected directly from the previous transformation
d> val: a variable port whose Expression column contains 'salary'
e> rank: a variable type port whose Expression column contains decode (salary, salary_prev, rank, rank+1)
f> rank_o: an output port containing the value of 'rank'.
3. Filter Transformation: It will have 2 I/O ports emp_name and salary with the filter condition rank_o = 5.
The ports emp_name and salary from the Filter Transformation will be connected to the target.
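For a relational source, an equivalent SQL-override sketch (table name is illustrative); DENSE_RANK keeps ties, so all employees at the 5th-highest salary are returned:
SELECT emp_name, salary
FROM  (SELECT emp_name, salary,
              DENSE_RANK() OVER (ORDER BY salary DESC) AS rnk
       FROM   src_table)
WHERE  rnk = 5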
Scenario 6:
Let's say I have a long run of records in the source table and 3 destination tables A, B, C. I have to insert records 1 to 10 into A, then 11 to 20 into B, and 21 to 30 into C.
Then again 31 to 40 into A, 41 to 50 into B, and 51 to 60 into C... and so on up to the last record.
Solution:
Generate sequence number using informatica, add filter or router transformations
and define the conditions accordingly…
Define group condition as follows under router groups….
Group1 = mod(seq_number,30) >= 1 and mod(seq_number,30) <= 10
Group2 = mod(seq_number,30) >= 11 and mod(seq_number,30) <= 20
Group3 = (mod(seq_number,30) >=21 and mod(seq_number,30) <= 29 ) or
mod(seq_number,30) = 0
Connect Group1 to A, Group2 to B and Group3 to C
Scenario 7:
Validation rules for connecting transformations in Informatica?
Solution:
Some validation rules:
1. You can only link ports with compatible datatypes.
2. You cannot connect an active transformation and a passive transformation to the same downstream transformation.
3. You cannot connect more than one active transformation to the same downstream transformation or transformation input group; the only way to combine such flows is a Joiner (with sorted input where required).
Scenario 8:
The source is a flat file and we want to load unique and duplicate records separately into two separate targets, right?
Solution:
Here comes the solution -
SRC - SQ_SRC - SRT - EXP - RTR - TGT
The Sorter brings duplicates together, the Expression marks them, and finally the Router routes them to the different targets.
Scenario 9:
Input file
---------
10
10
10
20
20
30
output file
------------
1
2
3
1
2
1
Scenario: count the occurrences of each value. In the case above, the first 10 counts 1, the next 10 counts 2, and when 20 arrives the count starts again at 1.
Solution:
First import the source, then use a Sorter transformation sorted on the column, then an Expression with ports in this order:
1. column_num (input, coming from the sorter)
2. current_num (variable) = IIF(column_num = previous_num, first_value + 1, 1)
3. first_value (variable) = current_num
4. previous_num (variable) = column_num
5. out_num (output) = current_num
Pass out_num to the target.
Scenario 10:
Input file
---------
10
10
10
20
20
30
output file
----------
1
2
3
4
5
6
Solution:
<UMA> Sequence Generator can be used
Scenario 11:
input file
---------
10
10
10
20
20
30
output file
---------->
1
1
1
2
2
3
Solution:
Sorter => Expression => Target. For this particular data, dividing the value by 10 happens to work; the general solution is an Expression that starts at 1 and increments a variable port whenever the sorted value changes (the pattern from Scenario 9, without the reset).
Scenario 12:
There are 2 tables (input tables):
table aa          table bb
id   name         id   name
101  ramesh       106  harish
102  shyam        103  hari
103  ----         104  ram
104  ----
output file:
id   name
101  ramesh
102  shyam
103  hari
104  ram
Solution:
One SQ per table => exclude the NULL-name values => the unmatched 106 is filtered out.
Take bb as the master and aa as the detail table in a Joiner. A master outer join keeps all detail (aa) records plus the matching master records, so the output ids are 101, 102, 103, 104; take the name from aa where present, otherwise from bb.
Use a sorter and direct to target.
SQL: SELECT AA.ID, NVL(AA.NAME, BB.NAME) AS NAME FROM AA LEFT OUTER JOIN BB ON (AA.ID = BB.ID)
Scenario 13:
table aa(input file)
------------------
id name
-- ----
10 aa
10 bb
10 cc
20 aa
20 bb
30 aa
Output
-----
id name1 name2 name3
-- ------ ------ -----
10 aa bb cc
20 aa bb --
30 aa -- --
Solution:
Use a Sorter (by id) => an Expression with variable ports that compare the incoming id with the previous row's id and shift each incoming name into name1, name2 or name3 in turn (resetting when the id changes) => an Aggregator grouping by id, which returns the last, fully populated row per group => Target.
Scenario 14:
table aa(input file)
------------------
id name
-- ----
10 a
10 b
10 c
20 d
20 e
output
-------
id name
-- ----
10 abc
20 de
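No solution is given in the original. This is the same group-wise concatenation problem as Scenario 3; a minimal sketch of the Expression ports (names are illustrative), followed by an Aggregator grouping by id to keep the last row per group:
v_concat (variable) = IIF(id = v_prev_id, v_concat || name, name)
v_prev_id (variable) = id
o_concat (output)   = v_concat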
Scenario 15:
In the below scenario, how can I split each row into multiple rows depending on the date
range?
The source rows are as
ID Value from_date(mm/dd) To_date(mm/dd)
1 $10 1/2 1/3
2 $5 1/5 1/8
3 $20 1/9 1/11
The target should be
ID Value Date
1 $10 1/2
1 $10 1/3
2 $5 1/5
2 $5 1/6
2 $5 1/7
2 $5 1/8
3 $20 1/9
3 $20 1/10
3 $20 1/11
What is the informatica solution?
Solution:
Use a Normalizer transformation with 3 ports ID, Value, Date. Set the 'Occurs' property of the Normalizer to 2 for the Date port and 1 for the ID and Value ports. The Normalizer is then created with 4 input ports (ID, Value, Date1, Date2) and 3 output ports (ID, Value, Date). Connect from_date to Date1 and To_Date to Date2, connect the rest of the matching ports, and connect the Normalizer to your target. Note this emits only the two endpoint dates per row.
To fill in the full range, use the DATE_DIFF function to calculate the number of days between the dates, use that difference as the number of iterations, and use ADD_TO_DATE to increment the date for each iteration before loading to the target.
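If the source is Oracle, a SQL-override sketch can also explode the ranges (table and column names are illustrative; the PRIOR sys_guid() condition is the usual trick that lets each row expand independently):
SELECT id, val, from_date + LEVEL - 1 AS dt
FROM   src_table
CONNECT BY LEVEL <= to_date - from_date + 1
       AND PRIOR id = id
       AND PRIOR sys_guid() IS NOT NULL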
Scenario 16:
How can the following be achieved in 1 single Informatica mapping?
* If the HEADER table record has an error value (NULL), then that record and the corresponding child records in the SUBHEAD and DETAIL tables should also not be loaded into the targets (TARGET1, TARGET2 or TARGET3).
* If the HEADER table record is valid, but the SUBHEAD or DETAIL table record has an error value (NULL), then no data should be loaded into TARGET1, TARGET2 or TARGET3.
* If the HEADER table record is valid and the SUBHEAD and DETAIL table records are also valid, only then should the data be loaded into TARGET1, TARGET2 and TARGET3.
===================================================
HEADER
COL1 COL2 COL3 COL5 COL6
1 ABC NULL NULL CITY1
2 XYZ 456 TUBE CITY2
3 GTD 564 PIN CITY3
SUBHEAD
COL1 COL2 COL3 COL5 COL6
1 1001 VAL3 748 543
1 1002 VAL4 33 22
1 1003 VAL6 23 11
2 2001 AAP1 334 443
2 2002 AAP2 44 22
3 3001 RAD2 NULL 33
3 3002 RAD3 NULL 234
3 3003 RAD4 83 31
DETAIL
COL1 COL2 COL3 COL5 COL6
1 D001 TXX2 748 543
1 D002 TXX3 33 22
1 D003 TXX4 23 11
2 D001 PXX2 56 224
2 D002 PXX3 666 332
========================================================
TARGET1
2 XYZ 456 TUBE CITY2
TARGET2
2 2001 AAP1 334 443
2 2002 AAP2 44 22
TARGET3
2 D001 PXX2 56 224
2 D002 PXX3 666 332
Solution:
I don’t know. Let us know if you know this.
Scenario 17:
If I have a source with unique & duplicate records like 1, 1, 2, 3, 3, 4, I want to load the unique records (2, 4)
into one target and the duplicate records (1, 1, 3, 3) into another.
Solution:
Source => SQ => Aggregator => Joiner => Router => Target1,2
Scenario 18:
I have 100 records in a relational table and I want to load them into 3 targets: the first record goes to target 1,
the second to target 2, the third to target 3, and so on. Which transformations are used for this?
Solution:
1) From source qualifier get the records to the Expression.
2) Use one Sequence generator in which set the max value as 3, enable cycle option.
Connect it to the expression.
3) Then use a Router and create 2 groups, with conditions NEXTVAL = 1 and NEXTVAL = 2, plus the default group for the rest.
4) Connect these three flows to the 3 target tables.
Scenario 19:
There are three columns empid, salmonth, sal containing values like 101, jan, 1000;
101, feb, 1000; ... twelve rows per employee. The required output has 13 columns,
empid, jan, feb, march ... dec, with values like 101, 1000, 1000, 1000, etc.
Solution:
Use an Aggregator grouping by empid with 12 output ports, one per month, each with a condition like MAX(IIF(salmonth = 'jan', sal)).
Scenario 20:
I have a source either file or db table
Eno ename sal dept
101 sri 100 1
102 seeta 200 2
103 lax 300 3
104 ravam 76 1
105 soorp 120 2
Want to run a session 3 times.
First time: it should populate dept 1
Second time: dept 2 only
Third time: dept 3 only
How can we do this?
Solution:
Not sure how to do it.
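One standard approach, offered here as a sketch rather than the compilers' answer: put a mapping parameter in the Source Qualifier filter and change its value per run through the parameter file ($$DEPT_ID is an illustrative name):
Source filter: DEPT = $$DEPT_ID
Parameter file, run 1: $$DEPT_ID=1 (then 2 and 3 for the following runs)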
Scenario 21:
If I have a source as below:
EmployeeId, FamilyId, Contribution
1,A,10000
2,A,20000
3,A,10000
4,B,20
5,B,20
________________________________
And my desired target is as below:
EmployeeId,Contribution(In Percent)
1,25%
2,50%
3,25%
4,50%
5,50%
____________________________________________________________________________
Explanation: The contribution field in the target is the individual employee's share of the family's contribution. If the total family contribution is 40000 and A has contributed 10000, then the target should have a value of 25%.
____________________________________________________________________________
Can you please suggest me an approach to solve the specified problem?
Solution:
Here goes the SQL override:
SELECT B.empid, (B.contribution / A.BB) * 100 AS contribut
FROM (SELECT empid, SUM(contribution) OVER (PARTITION BY familyid) AS BB FROM table1) A,
     (SELECT empid, contribution FROM table1) B
WHERE A.empid = B.empid
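The same result can come from a single analytic query, which may be simpler as an override (table and column names as in the original):
SELECT empid,
       (contribution / SUM(contribution) OVER (PARTITION BY familyid)) * 100 AS contribut
FROM   table1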
Scenario 22:
Is it more advantageous to use the Pre and Post SQL properties in the Workflow Designer's task properties or in the Mapping Designer's? I am copying data from production to a staging table. As part of the process, I need to drop the indexes and triggers before the move and recreate them after the move. What is the benefit (if any) of using the pre and post SQL property in the workflow rather than the mapping?
Solution:
It's good to go with the Pre-SQL, Session, & Post-SQL options with some modifications (never validated):
Scenario 23:
I have a session which has the truncate target table option enabled. When the session fails for some reason, the data in the table is already truncated. How can I avoid truncating the target table in case of session failure?
Solution:
Step - 1:
In the Pre-SQL:
Statement sequence would be --
1. Write savepoint statement like "SAVEPOINT STARTLOAD"
2. A DELETE statement like "DELETE FROM TABLE_NAME" .......
Step - 2:
At session level:
1. Uncheck the "Truncate table option"
2. Increase the commit interval to the max value that Informatica allows, i.e. 2,147,483,647, assuming that during the load the source will deliver fewer rows than that.
3. Enable the option "Rollback Transactions on Errors" This will help to rollback
the operations till the savepoint set in
the Pre-SQL section.
Step - 3:
In the Post-SQL of the target definition, write a commit statement: "COMMIT". This commits the operations done on the target table after the savepoint set in the Pre-SQL.
Scenario 24:
How to pass one mapping variable / parameter to another with in the same workflow?
Solution:
To pass a mapping variable or parameter value from one session to another in a
workflow, do the following:
Create two consecutive sessions (session1 and session2) in a Workflow.
Create Workflow variable(s) in the Workflow.
In Session1, go to Edit >Components > Post-Session On Success Variable Assignment,
assign values from mapping
variables/parameter to workflow variables.
In Session2 Edit > Components > Pre-session Variable Assignment, assign values from
workflow variables to mapping
variables/parameter.
OR
from PowerCenter 8.6 onwards, there is an option to share a mapping variable with multiple sessions in the same workflow using the pre-session variable assignment option together with a workflow variable.
Scenario 25:
In Informatica, what is the benefit, apart from performance, of using more than one INTEGRATION SERVICE in a domain?
Solution:
Load balancing and failover