
Informatica Interview Questions

If you're looking for Informatica interview questions for experienced candidates or freshers, you are at the right place. There are a lot of opportunities at many reputed companies around the world. According to research, Informatica has a market share of about 29.4%, so you still have the opportunity to move ahead in your career in Informatica development. Mindmajix offers advanced Informatica interview questions (2018) that help you crack your interview and acquire your dream career as an Informatica developer.
 
 
Q: What is Version Control?
1. Version control maintains the history of metadata objects.
2. A versioned repository stores multiple versions of an object.
3. Each version is a separate object with a unique number.
4. You can perform the following change-management tasks to create and manage multiple versions of objects in the repository:
i. Check in (read-only mode)
ii. Check out (editable mode)

i. Check in:-

1. You must save an object before you can check it in.
2. When you check in an object, the repository creates a new version of the object & assigns it a version number.
3. The repository increments the version number when you check in an object.

ii. Check Out:-

1. To edit an object, you must check it out.
2. When you check out an object, the repository obtains a write-intent lock on it.
3. No other user can edit the object while you have it checked out.

 
Q: How to use Normalizer Transformation in Informatica?
1. This is an Active T/R that reads data from COBOL files and VSAM (Virtual Storage Access Method) sources.
2. The Normalizer T/R acts like a Source Qualifier T/R while reading data from COBOL files.
3. Use the Normalizer T/R to convert a single input record into multiple output records. This is known as data pivoting.
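As a sketch of the data-pivoting idea (a conceptual model, not PowerCenter itself; the record layout is hypothetical), one denormalized input row with repeating quarterly fields becomes one output row per quarter:

```python
# Hypothetical denormalized record: one row holds four quarterly sales values.
def normalize(record):
    """Pivot one input record into one output record per quarter."""
    return [
        {"store": record["store"], "quarter": q, "sales": sales}
        for q, sales in enumerate(record["quarterly_sales"], start=1)
    ]

src = {"store": "S1", "quarterly_sales": [100, 150, 120, 90]}
rows = normalize(src)
print(rows[0])     # {'store': 'S1', 'quarter': 1, 'sales': 100}
print(len(rows))   # 4
```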
 
Q: What are the Limitations of Pushdown Optimization?
1. The Rank T/R cannot be pushed to the database.
2. The Transaction Control T/R cannot be pushed.
3. Sorted aggregation cannot be pushed.
 
Procedure:-
 
1. Design a mapping with Filter, Rank and Expression T/R.
2. Create a session → double-click the session → select the Properties tab.
 
Attribute                 Value
Pushdown optimization     Full
 
3. Select the Mapping tab → set the reader and writer connections, with target load type Normal.
4. Click Apply → click OK → save the session.
5. Create & start the workflow.
 
Pushdown Optimization Viewer:-
Double-click the session → select the Mapping tab in the left window → select Pushdown Optimization.
 
Q: Differences between Copy and Shortcut?
 
Copy vs Shortcut:
1. A copy places a duplicate of the object in another folder; a shortcut is a dynamic link to the object in its original folder.
2. Changes to the original object are not reflected in a copy; a shortcut dynamically reflects changes to the original object.
3. A copy duplicates the space; a shortcut preserves space.
4. Copies are created from unshared folders; shortcuts are created from shared folders.
 
Q: How to use the pmcmd Utility Command?
1. It is a command-line client program that communicates with the Integration Service to perform some of the tasks that can also be performed using the Workflow Manager client.
2. Using PMCMD we can perform the following tasks:
 
i. Starting workflow.
ii. Scheduling workflow.
 
3. The PMCMD can be operated in two different modes:
 
i. Interactive Mode.
ii. Command line Mode.
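A minimal sketch of driving pmcmd in command-line mode from a script; the service, domain, user, folder and workflow names below are hypothetical, and the assembled command would only run on a machine where pmcmd is installed (a password argument is omitted for brevity):

```python
# Build a pmcmd "startworkflow" invocation (command-line mode).
# All names below are hypothetical examples.
def pmcmd_startworkflow(service, domain, user, folder, workflow):
    return ["pmcmd", "startworkflow",
            "-sv", service,   # Integration Service name
            "-d", domain,     # domain name
            "-u", user,       # repository user
            "-f", folder,     # folder containing the workflow
            workflow]

cmd = pmcmd_startworkflow("IS_DEV", "Domain_Dev", "admin", "F_SALES", "wf_load_sales")
print(" ".join(cmd))
# On a PowerCenter machine you would pass cmd to subprocess.run(cmd).
```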
 
Q: Scheduling a Workflow?
1. A schedule is an automation of running the workflow at a given date and time.
2. There are 2 types of schedulers:
 
(i) Reusable scheduler
(ii) Non Reusable scheduler
 
(i) Reusable scheduler:-
A reusable scheduler can be assigned to multiple workflows.
 
(ii) Non Reusable scheduler:-
- A non reusable scheduler is created specific to the workflow.
- A non reusable scheduler can be converted into a reusable scheduler.
 
The following are the 3rd party schedulers:
 
1. Cron (Unix based scheduling process)
2. Tivoli
3. Control M
4. Autosys
5. Tidal
6. WLM (Workload Manager)
 
- In production, almost all (99%) workflow runs are scheduled.
- Instead of running the workflow manually, we run it through a schedule; this is called auto running.
 
Scheduling Workflow (diagram)
 
Q: What is Dynamic Lookup Cache?
1. The cache updates or changes dynamically when the lookup is performed on the target table.
2. The dynamic lookup T/R keeps the target lookup table image in memory synchronized with the physical table in the database.
3. The dynamic lookup cache operates only in connected mode (connected lookup).
4. A dynamic lookup cache supports only equality conditions (=).
 
NewLookupRow    Description
0               The Integration Service does not update or insert the row in the cache
1               The Integration Service inserts the row into the cache
2               The Integration Service updates the row in the cache
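The NewLookupRow values can be sketched with a plain dictionary standing in for the cache (a simplification of the real cache, with a hypothetical key/value layout):

```python
# The cache is keyed on the lookup condition column; each incoming row
# produces a NewLookupRow value of 0 (no change), 1 (insert) or 2 (update).
def process_row(cache, key, value):
    if key not in cache:
        cache[key] = value
        return 1          # inserted into the cache
    if cache[key] != value:
        cache[key] = value
        return 2          # updated the row in the cache
    return 0              # no insert or update

cache = {}
print(process_row(cache, 10, "A"))   # 1 (new key -> insert)
print(process_row(cache, 10, "A"))   # 0 (unchanged)
print(process_row(cache, 10, "B"))   # 2 (changed -> update)
```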
 
Q: How do you add comments in Informatica expressions?
The transformation language provides two comment specifiers to let you insert comments in expressions:
 
- Two dashes ( -- )
- Two slashes ( // )
 
The PowerCenter Integration Service ignores all text on a line that follows either comment specifier.
 
Q: Differences between variable port and Mapping variable?
 
Variable Port vs Mapping Variable:
1. A variable port is local to the T/R; a mapping variable is local to the mapping.
2. Variable-port values are non-persistent; mapping-variable values are persistent.
3. A variable port can't be used in a SQL override; a mapping variable can.
 
• Mapping variables are used for incremental extraction.
• A mapping variable's value need not be changed manually; it changes automatically.
• A mapping parameter's value (for example, the date and time) has to be changed manually.
 
Q: Which T/R builds only a single cache memory?
Rank builds two types of cache memory (index and data), but Sorter always builds only one cache.
- Cache is also called a buffer.
 
Q: What is XML Source Qualifier Transformation in Informatica?
1. Reads data from XML files.
2. An XML source definition is associated with an XML Source Qualifier.
3. XML is a case-sensitive markup language.
4. Files are saved with the extension .xml.
5. XML files have a hierarchical (parent-child) structure.
6. Files can be normalized or denormalized.
 
Q: Differences between connected and unconnected lookup?
Connected Lookup vs Unconnected Lookup:
1. Part of the mapping data flow; separate from the mapping data flow.
2. Returns multiple values (by linking output ports to another transformation); returns one value, via the Return (R) port option on the output port that provides the return value.
3. Executed for every record passing through the transformation; executed only when the lookup function is called.
4. More visible, shows where the lookup values are used; less visible, as the lookup is called from an expression within another transformation.
5. Default values are used; default values are ignored.

Q: What is Load Order?
Design mapping applications that first load the data into the dimension tables and then load the data into the fact table.
Load Rule:- If all dimension-table loads succeed, then load the data into the fact table.
Load Frequency:- The database gets refreshed by daily, weekly and monthly loads.

Q: What is Snowflake Schema?
A large denormalized dimension table is split into multiple normalized dimension tables.

Advantage:
SELECT query performance increases.

Disadvantage:
Maintenance cost increases due to the larger number of tables.

Q: What is a standalone Email task?
1. It can be used anywhere in the workflow, defined with link conditions to notify the success or failure of prior tasks.
2. Visible in the flow diagram.
3. Email variables can be defined with standalone Email tasks.

Q: What is Mapping Debugger?
- The Debugger is a tool for verifying, from one T/R to the next, whether records are loaded and whether the correct data is loaded.
- If a session succeeds but records are not loaded, use the Debugger.

Q: What is the functionality of F10 in Informatica?
F10 → Next Instance
- Which T/R has a no-cache option?
A. Lookup T/R
Note:- A pre-defined wait is not available in every task; it is available only in the Event Wait task.
- F5 → Start Debugger.
- The Debugger is used to test whether records are loaded and whether the correct data is loaded.
- The Debugger can be used only to test a valid mapping, not an invalid mapping.

Q: What is Worklet and types of worklets?
1. A worklet is defined as a group of related tasks.
2. There are 2 types of the worklet:

(i) Reusable worklet
(ii) Non-Reusable worklet

3. The worklet expands and executes its tasks inside the workflow.
4. A workflow that contains a worklet is known as the parent workflow.

(a) Reusable Worklet:-

Created using worklet designer tool.
Can be assigned to Multiple workflows.

(b) Non-Reusable Worklet:-

Created using workflow designer tool.
Created Specific to workflow.

Q: What is Relative Mode?
In real time we use this.
Relative time: the Timer task can start the timer from the start time of the Timer task, the start time of the workflow or worklet, or the start time of the parent workflow.
 
- The Timer task is mainly used for scheduling workflows.
- The workflow starts at 11 AM and the timer fires at a fixed 11:05 AM → Absolute mode.
- Whenever the workflow starts, the timer fires 5 minutes later → Relative mode.
 
Q: Difference between Filter and Router T/R?
 
Filter T/R vs Router T/R:
1. Single condition; multiple conditions.
2. Single target; multiple targets.
3. Rejected rows cannot be captured; the default group captures rejected rows.

Q: What is a Repository Manager?

It is a GUI-based administrative client which allows you to perform the following administrative tasks:
 
1. Create, edit and delete folders.
2. Assign users to access the folders with read, write and execute permissions.
3. Backup and Restore repository objects.
 
Q: What is Rank Transformation in Informatica?
1. This is a type of Active T/R which allows you to find out either the top performers or the bottom performers.
2. Rank T/R is created with the following types of the port:
 
i. Input Port (I)
ii. Output Port (O)
iii. Rank Port (R)
iv. Variable Port (V)
 
Q: What is meant by Informatica PowerCenter Architecture?
The following components get installed:
 
i. Power Center Clients
ii. Power Center Repository.
iii. Power Center Domain.
iv. Power Center Repository Service  (PCRS)
v. Power Center Integration Service (PCIS)
vi. Informatica administrator.
 
A mapping is nothing but an ETL application.
 
Q: What is Workflow Monitor?
i. It is a GUI-based client application which allows users to monitor ETL objects running on the ETL server.
ii. Collect runtime statistics such as:
 
a. No. of records extracted.
b. No. of records loaded.
c. No. of records rejected.
d. Fetch session log
e. Throughput
 
- Complete information can be accessed from workflow monitor.
- For every session one log file is created.
 
Q: If Informatica has its own scheduler, why use a third-party scheduler?
Clients run a variety of applications (for example, mainframes and Oracle Apps using the Tivoli scheduling tool); integrating those applications and scheduling them together is much easier with a third-party scheduler.
 
Q: What is Workflow Manager?
It is a GUI-based client which allows you to create the following ETL objects:
 
1. Session
2. Workflow
3. Scheduler.
 
Session:-
- A session is a task that executes mapping.
- A session is created for each Mapping.
- A session is created to provide runtime properties.
A session is a set of instructions that tells ETL server to move the data from source to destination.
 
Workflow:-
A workflow is a set of instructions that tells the ETL server how and when to run the session tasks.
 
Q: What is Informatica PowerCenter?
- A data integration tool which combines data from multiple OLTP source systems, transforms it into a homogeneous format, and delivers it throughout the enterprise at any speed.
- It is a GUI-based ETL product from Informatica Corporation, which was founded in 1993 in Redwood City, California.
- There are many products in informatica corporation:
 
1. Informatica Analyzer.
2. Life cycle management.
3. Master data
 
Informatica has many products; Informatica PowerCenter is one of them.
Using Informatica PowerCenter we perform extraction, transformation and loading.
 
Q: What is a Dimensional Model?
1. Data modeling:- the process of designing the database to fulfil the business requirement specifications.
2. A data modeler (or database architect) designs the warehouse database using a GUI-based data modeling tool called ERwin.
3. ERwin is a data modeling tool from Computer Associates (CA).
4. A dimensional model consists of the following types of schemas designed for a data warehouse:
 
a. Star schema.
b. Snowflake schema.
c. Galaxy schema.
5. A schema is a data model which consists of one or more tables.


Q. What are the differences between Connected and Unconnected Lookup?
The differences are illustrated in the below table:
 
Connected Lookup vs Unconnected Lookup:
- A connected lookup participates in the data flow and receives input directly from the pipeline; an unconnected lookup receives input values from the result of a :LKP expression in another transformation.
- A connected lookup can use both dynamic and static caches; an unconnected lookup cache can NOT be dynamic.
- A connected lookup can return more than one column value (output port); an unconnected lookup can return only one column value, i.e. the return port.
- A connected lookup caches all lookup columns; an unconnected lookup caches only the lookup output ports used in the lookup conditions and the return port.
- A connected lookup supports user-defined default values (i.e. the value to return when the lookup condition is not satisfied); an unconnected lookup does not.

Q. What is meant by active and passive transformation?
An active transformation is one that performs any of the following actions:
- Changes the number of rows between transformation input and output. Example: Filter transformation.
- Changes the transaction boundary by defining commit or rollback points. Example: Transaction Control transformation.
- Changes the row type. Example: Update Strategy is active because it flags rows for insert, delete, update or reject.
On the other hand, a passive transformation is one which does not change the number of rows that pass through it. Example: Expression transformation.

Q. What is the difference between Router and Filter?
Following differences can be noted:
 
Router vs Filter:
- A Router transformation divides the incoming records into multiple groups based on conditions; the groups can be mutually inclusive (different groups may contain the same record). A Filter transformation restricts or blocks the incoming record set based on one given condition.
- A Router transformation itself does not block any record; if a record does not match any of the routing conditions, it is routed to the default group. A Filter transformation has no default group; if a record does not match the filter condition, the record is blocked.
- A Router acts like a CASE .. WHEN statement in SQL (or a switch .. case statement in C); a Filter acts like a WHERE condition in SQL.
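The CASE-versus-WHERE analogy can be sketched in Python (a conceptual model, not PowerCenter): the filter keeps rows matching one condition, while the router tries every group condition and sends unmatched rows to a default group.

```python
def filter_rows(rows, cond):
    # Filter: rows failing the single condition are simply blocked.
    return [r for r in rows if cond(r)]

def route_rows(rows, groups):
    # Router: groups is a list of (name, condition) pairs; groups may be
    # mutually inclusive, and unmatched rows go to the DEFAULT group.
    out = {name: [] for name, _ in groups}
    out["DEFAULT"] = []
    for r in rows:
        matched = False
        for name, cond in groups:
            if cond(r):
                out[name].append(r)
                matched = True
        if not matched:
            out["DEFAULT"].append(r)
    return out

rows = [5, 15, 25]
print(filter_rows(rows, lambda r: r > 10))                  # [15, 25]
print(route_rows(rows, [("GT10", lambda r: r > 10),
                        ("GT20", lambda r: r > 20)]))
```

Note how 25 lands in both GT10 and GT20 (mutually inclusive groups), while 5 falls through to DEFAULT rather than being dropped.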


Q. What can we do to improve the performance of Informatica Aggregator Transformation?
Aggregator performance improves dramatically if records are sorted before passing to the aggregator and “sorted input” option under aggregator properties is checked. The record set should be sorted on those columns that are used in Group By operation.
It is often a good idea to sort the record set in database level e.g. inside a source qualifier transformation, unless there is a chance that already sorted records from source qualifier can again become unsorted before reaching aggregator
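Why sorted input helps can be sketched as follows (a conceptual model with made-up rows): with the input sorted on the group-by key, each group can be aggregated and released as soon as the key changes, instead of holding every group in cache until the end of the data.

```python
from itertools import groupby

def aggregate_sorted(rows):
    # rows must already be sorted on the group key (dept); each group is
    # summed and emitted as soon as the key changes.
    for dept, grp in groupby(rows, key=lambda r: r[0]):
        yield dept, sum(sal for _, sal in grp)

rows = sorted([("HR", 100), ("IT", 200), ("HR", 50), ("IT", 25)])
print(list(aggregate_sorted(rows)))   # [('HR', 150), ('IT', 225)]
```

If the rows were not sorted, the same loop would emit partial groups, which mirrors why unsorted data with Sorted Input checked produces wrong results or session failure.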
 
Q. What are the different lookup cache(s)?
Informatica Lookups can be cached or un-cached (No cache). And Cached lookup can be either static or dynamic. A static cache is one which does not modify the cache once it is built and it remains same during the session run. On the other hand, A dynamic cache is refreshed during the session run by inserting or updating the records in cache based on the incoming source data. By default, Informatica cache is static cache.
A lookup cache can also be divided as persistent or non–persistent based on whether Informatica retains the cache even after the completion of session run or deletes it.
 
Q. How can we update a record in target table without using Update strategy?
A target table can be updated without using an 'Update Strategy'. For this, we need to define the key of the target table at the Informatica level and then connect the key and the field we want to update in the mapping target. At the session level, we should set the target property to "Update as Update" and check the "Update" check-box.
Let's assume we have a target table "Customer" with fields "Customer ID", "Customer Name" and "Customer Address". Suppose we want to update "Customer Address" without an Update Strategy. Then we have to define "Customer ID" as the primary key at the Informatica level and connect the Customer ID and Customer Address fields in the mapping. If the session properties are set correctly as described above, the mapping will update only the customer address field for all matching customer IDs.
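The "Update as Update" behaviour described above can be sketched with a dictionary standing in for the Customer target table (field names from the example; the data itself is made up): rows are matched on the key and only the connected non-key field is overwritten, with no inserts.

```python
# Target "Customer" table keyed on Customer ID (made-up data).
target = {101: {"name": "Asha", "address": "Old Rd"},
          102: {"name": "Ben",  "address": "Hill St"}}

def update_as_update(target, cust_id, new_address):
    # Only rows with a matching key are updated; unmatched rows are ignored.
    if cust_id in target:
        target[cust_id]["address"] = new_address

update_as_update(target, 101, "New Rd")
update_as_update(target, 999, "Nowhere")   # no matching key: nothing inserted
print(target[101]["address"])              # New Rd
```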
 
Q. Under what condition selecting Sorted Input in aggregator may fail the session?
If the input data is not sorted correctly, the session will fail.
Even if the input data is properly sorted, the session may fail if the sort-order ports and the group-by ports of the aggregator are not in the same order.
 
Q. Why is Sorter an Active Transformation?
This is because we can select the “distinct” option in the sorter property.
When the Sorter transformation is configured to treat output rows as distinct, it assigns all ports as part of the sort key. The Integration Service discards duplicate rows compared during the sort operation. The number of Input Rows will vary as compared with the Output rows and hence it is an Active transformation.
 
Q. Is lookup an active or passive transformation?
From Informatica 9.x, the Lookup transformation can be configured as an "Active" transformation.
However, in older versions of Informatica, Lookup used to be a passive transformation.
 
Q. What is the difference between Static and Dynamic Lookup Cache?
We can configure a Lookup transformation to cache the underlying lookup table. In case of static or read-only lookup cache the Integration Service caches the lookup table at the beginning of the session and does not update the lookup cache while it processes the Lookup transformation.
In case of dynamic lookup cache the Integration Service dynamically inserts or updates data in the lookup cache and passes the data to the target. The dynamic cache is synchronized with the target.
 
Q. What is the difference between STOP and ABORT options in Workflow Monitor?
When we issue the STOP command on the executing session task, the Integration Service stops reading data from source. It continues processing, writing and committing the data to targets. If the Integration Service cannot finish processing and committing data, we can issue the abort command.
In contrast ABORT command has a timeout period of 60 seconds. If the Integration Service cannot finish processing and committing data within the timeout period, it kills the DTM process and terminates the session.
 
Q. What are the new features of Informatica 9.x in developer level?
From a developer’s perspective, some of the new features in Informatica 9.x are as follows:
- A Lookup can now be configured as an active transformation: it can return multiple rows on a successful match.
- You can now write a SQL override on an un-cached lookup as well. Previously you could do it only on a cached lookup.
- You can control the size of your session log. In a real-time environment you can cap the session log file by size or time.
- Database deadlock resilience: a session does not immediately fail if it encounters a database deadlock; it retries the operation, and you can configure the number of retry attempts.
 
Q. What is an Aggregator Transformation?
An aggregator is an Active, Connected transformation which performs aggregate calculations like AVG, COUNT, FIRST, LAST, MAX, MEDIAN, MIN, PERCENTILE, STDDEV, SUM and VARIANCE.
 
Q. How an Expression Transformation differs from Aggregator Transformation?
An Expression Transformation performs calculation on a row-by-row basis. An Aggregator Transformation performs calculations on groups.
 
Q. Does an Informatica Transformation support only Aggregate expressions?
Apart from aggregate expressions Informatica Aggregator also supports non-aggregate expressions and conditional clauses.
 
Q. How does Aggregator Transformation handle NULL values?
By default, the aggregator transformation treats null values as NULL in aggregate functions. But we can specify to treat null values in aggregate functions as NULL or zero.
 
Q. What is Incremental Aggregation?
We can enable the session option, Incremental Aggregation for a session that includes an Aggregator Transformation. When the Integration Service performs incremental aggregation, it actually passes changed source data through the mapping and uses the historical cache data to perform aggregate calculations incrementally.
 
Q. What are the performance considerations when working with Aggregator Transformation?
1. Filter the unnecessary data before aggregating it. Place a Filter transformation in the mapping before the Aggregator transformation to reduce unnecessary aggregation.
2. Improve performance by connecting only the necessary input/output ports to subsequent transformations, thereby reducing the size of the data cache.
3. Use Sorted input which reduces the amount of data cached and improves session performance.
 
Q. What differs when we choose Sorted Input for Aggregator Transformation?
The Integration Service creates index and data cache files in memory to process the Aggregator transformation. If the Integration Service requires more space than allocated for the index and data cache sizes in the transformation properties, it stores overflow values in cache files, i.e. pages to disk. One way to increase session performance is to increase the index and data cache sizes in the transformation properties. But when we check Sorted Input, the Integration Service processes the Aggregator transformation in memory; it does not use cache files.
 
Q. Under what conditions selecting Sorted Input in aggregator will still not boost session performance?
1. The Incremental Aggregation session option is enabled.
2. The aggregate expression contains nested aggregate functions.
3. The session is data driven (Treat Source Rows As is set to Data Driven).
 
Q. Suppose we do not group by on any ports of the aggregator; what will be the output?
If we do not group values, the Integration Service returns only the last row of the input rows.
 
Q. What is the expected value if the column in an aggregator transform is neither a group by nor an aggregate expression?
Integration Service produces one row for each group based on the group by ports. The columns which are neither part of the key nor aggregate expression will return the corresponding value of last record of the group received. However, if we specify particularly the FIRST function, the Integration Service then returns the value of the specified first row of the group. So default is the LAST function.
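The default LAST behaviour can be sketched like this (made-up rows): for a port that is neither group-by nor aggregate, each incoming row simply overwrites the held value, so the last row of the group wins while the aggregate accumulates.

```python
# dept is the group-by port, sal is aggregated, name is neither.
rows = [("HR", "Asha", 100), ("HR", "Ben", 50), ("IT", "Cara", 200)]
result = {}
for dept, name, sal in rows:
    total = result.get(dept, ("", 0))[1] + sal
    result[dept] = (name, total)   # name keeps being overwritten: last row wins
print(result)   # {'HR': ('Ben', 150), 'IT': ('Cara', 200)}
```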
 
Q. Give one example for each of Conditional Aggregation, Non-Aggregate expression and Nested Aggregation.
Use conditional clauses in the aggregate expression to reduce the number of rows used in the aggregation. The conditional clause can be any clause that evaluates to TRUE or FALSE.
SUM( SALARY, JOB = 'CLERK' )
Use non-aggregate expressions in group by ports to modify or replace groups.
IIF( PRODUCT = 'Brown Bread', 'Bread', PRODUCT )
The expression can also include one aggregate function within another aggregate function, such as:
MAX( COUNT( PRODUCT ))
 
Q. What is a Rank Transform?
Rank is an Active Connected Informatica transformation used to select a set of top or bottom values of data.
 
Q. How does a Rank Transform differ from Aggregator Transform functions MAX and MIN?
Like the Aggregator transformation, the Rank transformation lets us group information. The Rank Transform allows us to select a group of top or bottom values, not just one value as in case of Aggregator MAX, MIN functions.
 
Q. What is a RANK port and RANKINDEX?
The Rank port is an input/output port used to specify the column on which we want to rank the source values. By default, Informatica creates an output port RANKINDEX for each Rank transformation; it stores the ranking position of each row in its group.
 
Q. How can you get ranks based on different groups?
Rank transformation lets us group information. We can configure one of its input/output ports as a group by port. For each unique value in the group port, the transformation creates a group of rows falling within the rank definition (top or bottom, and a particular number in each rank).
 
Q. What happens if two rank values match?
If two rank values match, they receive the same value in the rank index and the transformation skips the next value.
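This tie behaviour matches SQL RANK() rather than DENSE_RANK(); a small sketch with made-up values:

```python
def rank_index(values):
    # Top ranking: a value's rank is 1 + the count of strictly greater values,
    # so ties share a rank and the next rank is skipped.
    ordered = sorted(values, reverse=True)
    return [(v, 1 + sum(w > v for w in ordered)) for v in ordered]

print(rank_index([300, 200, 200, 100]))
# [(300, 1), (200, 2), (200, 2), (100, 4)]  -- rank 3 is skipped
```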
 
Q. What are the restrictions of Rank Transformation?
1. We can connect ports from only one transformation to the Rank transformation.
2. We can select either the top or the bottom rank.
3. We need to specify the number of records in each rank.
4. We can designate only one Rank port in a Rank transformation.
 
Q. How does a Rank Cache work?
During a session, the Integration Service compares an input row with rows in the data cache. If the input row out-ranks a cached row, the Integration Service replaces the cached row with the input row. If we configure the Rank transformation to rank based on different groups, the Integration Service ranks incrementally for each group it finds. The Integration Service creates an index cache to store the group information and a data cache for the row data.
 
Q. How does Rank transformation handle string values?
Rank transformation can return the strings at the top or the bottom of a session sort order. When the Integration Service runs in Unicode mode, it sorts character data in the session using the selected sort order associated with the Code Page of IS which may be French, German, etc. When the Integration Service runs in ASCII mode, it ignores this setting and uses a binary sort order to sort character data.
 
Q. What is a Sorter Transformation?
Sorter Transformation is an Active, Connected Informatica transformation used to sort data in ascending or descending order according to specified sort keys. The Sorter transformation contains only input/output ports.
 
Q. How does Sorter handle Case Sensitive sorting?
The Case Sensitive property determines whether the Integration Service considers case when sorting data. When we enable the Case Sensitive property, the Integration Service sorts uppercase characters higher than lowercase characters.
 
Q. How does Sorter handle NULL values?
We can configure the way the Sorter transformation treats null values. Enable the property Null Treated Low if we want to treat null values as lower than any other value when it performs the sort operation. Disable this option if we want the Integration Service to treat null values as higher than any other value.
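The two settings can be sketched with Python sort keys (a conceptual model; None stands in for NULL):

```python
data = [5, None, 2]

# Null Treated Low enabled: NULLs sort lower than any other value.
nulls_low = sorted(data, key=lambda v: (v is not None, v if v is not None else 0))
# Null Treated Low disabled: NULLs sort higher than any other value.
nulls_high = sorted(data, key=lambda v: (v is None, v if v is not None else 0))

print(nulls_low)    # [None, 2, 5]
print(nulls_high)   # [2, 5, None]
```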
 
Q. How does a Sorter Cache work?
The Integration Service passes all incoming data into the Sorter Cache before Sorter transformation performs the sort operation.
The Integration Service uses the Sorter Cache Size property to determine the maximum amount of memory it can allocate to perform the sort operation. If it cannot allocate enough memory, the Integration Service fails the session. For best performance, configure Sorter cache size with a value less than or equal to the amount of available physical RAM on the Integration Service machine.
If the amount of incoming data is greater than the amount of Sorter cache size, the Integration Service temporarily stores data in the Sorter transformation work directory. The Integration Service requires disk space of at least twice the amount of incoming data when storing data in the work directory.
 
Q. What is a Union Transformation?
The Union transformation is an Active, Connected, non-blocking multiple-input-group transformation used to merge data from multiple pipelines or sources into one pipeline branch. Similar to the UNION ALL SQL statement, the Union transformation does not remove duplicate rows.
 
Q. What are the restrictions of Union Transformation?
1. All input groups and the output group must have matching ports. The precision, datatype, and scale must be identical across all groups.
2. We can create multiple input groups, but only one default output group.
3. The Union transformation does not remove duplicate rows.
4. We cannot use a Sequence Generator or Update Strategy transformation upstream from a Union transformation.
5. The Union transformation does not generate transactions.
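The UNION ALL behaviour can be sketched with itertools.chain (made-up pipelines): the input groups must share matching "ports", and duplicates survive the merge.

```python
from itertools import chain

# Two pipelines with matching (name, value) ports; ("B", 2) appears in both.
pipeline_a = [("A", 1), ("B", 2)]
pipeline_b = [("B", 2), ("C", 3)]

merged = list(chain(pipeline_a, pipeline_b))
print(merged)       # [('A', 1), ('B', 2), ('B', 2), ('C', 3)]
print(len(merged))  # 4 -- the duplicate ('B', 2) is kept, as with UNION ALL
```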
 
Q. What is Persistent Lookup Cache?
Lookups are cached by default in Informatica. Lookup cache can be either non-persistent or persistent. The Integration Service saves or deletes lookup cache files after a successful session run based on whether the Lookup cache is checked as persistent or not.
 
Q. What is the difference between Reusable transformation and Mapplet?
Any Informatica transformation created in the Transformation Developer, or a non-reusable transformation promoted to reusable from the Mapping Designer, which can be used in multiple mappings, is known as a reusable transformation. When we add a reusable transformation to a mapping, we actually add an instance of the transformation. Since the instance of a reusable transformation is a pointer to that transformation, when we change the transformation in the Transformation Developer, its instances reflect these changes.
A Mapplet is a reusable object created in the Mapplet Designer which contains a set of transformations and lets us reuse the transformation logic in multiple mappings. A Mapplet can contain as many transformations as we need. Like a reusable transformation when we use a mapplet in a mapping, we use an instance of the mapplet and any change made to the mapplet is inherited by all instances of the mapplet.
 
Q. What are the transformations that are not supported in Mapplet?
Normalizer, Cobol sources, XML sources, XML Source Qualifier transformations, Target definitions, Pre- and post- session Stored Procedures, Other Mapplets.
 
Q. What are the ERROR tables present in Informatica?
1. PMERR_DATA: stores data and metadata about a transformation row error and its corresponding source row.
2. PMERR_MSG: stores metadata about an error and the error message.
3. PMERR_SESS: stores metadata about the session.
4. PMERR_TRANS: stores metadata about the source and transformation ports, such as name and datatype, when a transformation error occurs.
 
Q. Can we copy a session to new folder or new repository?
Yes, we can copy a session to a new folder or repository provided the corresponding mapping is already there.
 
Q. What type of join does Lookup support?
A Lookup behaves just like a SQL LEFT OUTER JOIN.
 

 
