If you're looking for Informatica interview questions for experienced candidates and freshers, you are in the right place. There are many opportunities at reputed companies around the world. According to research, Informatica holds a growing market share, so you still have the opportunity to move ahead in your career in Informatica development.
Mindmajix offers advanced Informatica interview questions for 2024 that help you crack your interview and acquire a dream career as an Informatica Developer.
For your better understanding, we have segregated these Informatica Interview Questions into 3 different categories.
Want to enhance your career and become a professional in Informatica? Then enroll in "Informatica Training". This course will help you achieve excellence in this domain.
This Informatica interview questions blog covers all the core concepts from basic to advanced level. So utilize these questions to improve your chances of being hired in your next interview!
The following are the differences between Informatica and Talend:
Informatica | Talend |
Provides only commercial data integration | Available open-source and commercial editions |
Founded way back in 1993 | Founded in the year 2006 |
Charges applicable per customer | Open source is for free |
Generated metadata is stored in an RDBMS repository | Runs on any Java-supported platform |
Integrating code is not very effective | Code customization is effective |
No prior programming knowledge is required | Knowledge of Java is preferred |
Automated deployment is not up to the mark | Deployment made easy |
Transformations are re-usable | Components are re-usable |
Enterprise Data Warehousing means creating the organization's data at a single point of access. The data can be accessed and viewed globally through a single source, since the server is linked to that source. It also includes periodic analysis of the source.
The Lookup transformation is used to look up relevant data in a source qualifier, a target, or other sources. Many types of lookup sources can be used, such as flat files, relational tables, synonyms, or views. The Lookup transformation can be active or passive, and it can be either connected or unconnected. Multiple Lookup transformations can be used in a mapping, and the lookup source is queried based on the lookup input port values in the mapping.
The following are the different types of ports with which the lookup transformation is created:
Connected lookup is the one that takes up the input directly from the other transformations and also participates in the data flow. On the other hand, an unconnected lookup is just the opposite. Instead of taking the input from the other transformations, it simply receives the values from the result or the function of the LKP expression.
A connected Lookup cache can be either dynamic or static, but an unconnected Lookup cache cannot be dynamic. The first can return values through multiple output ports, while the latter returns a value through only one output port. User-defined default values are supported in a connected lookup but not in an unconnected lookup.
The number of parameters that can be included in an unconnected lookup is numerous. However, no matter how many parameters are passed in, the return value is always one. For example, parameters like column 1, column 2, column 3, and column 4 can be passed to an unconnected lookup, but there is only one return value.
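The call-like behaviour described above can be sketched in Python. This is an illustrative model, not a real Informatica API; the lookup table, function name, and sample values are all assumptions.

```python
# Hypothetical sketch: an unconnected lookup behaves like a function call
# invoked from an expression (e.g. :LKP.lkp_discount(country, tier)).
# It can accept many input parameters but always returns a single value.

CUSTOMER_LOOKUP = {
    # (input param 1, input param 2) -> single return value
    ("US", "GOLD"): 0.15,
    ("US", "SILVER"): 0.10,
    ("EU", "GOLD"): 0.12,
}

def lkp_discount(country, tier, default=0.0):
    """Many inputs, exactly one return value per call."""
    return CUSTOMER_LOOKUP.get((country, tier), default)

print(lkp_discount("US", "GOLD"))  # one value comes back, however many params go in
```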
Informatica lookup caches can be of different nature like static or dynamic. It can also be persistent or non-persistent. Here are the names of the caches:
A data warehouse consists of different kinds of data. A database also consists of data, but the information in a database is smaller in size than in a data warehouse. A data mart also includes different sorts of data, each needed by a different domain. Example: different data marts for different sections of an organization, such as sales, marketing, and finance.
A domain is the main organizational point that takes in all the interlinked and interconnected nodes and relationships. These links are governed by one single point of the organization.
The powerhouse server is the main governing server that helps integrate various processes among the different factors of the server's database repository. The repository server, on the other hand, ensures repository integrity, uniformity, and consistency.
The number of repositories that can be created in Informatica depends mainly on the number of ports available in Informatica.
A session is partitioned in order to improve the efficiency and operation of the server. Partitioning includes the solo implementation sequences in the session.
Parallel processing further improves performance by making fuller use of the available hardware. In Informatica, parallel processing is done using partitioned sessions: the partitioning option of PowerCenter increases performance through parallel data processing. A large data set is divided into smaller subsets, which are processed in parallel to achieve better session performance.
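The partitioning idea above can be sketched in Python. This is a simplified, illustrative model only; the round-robin split, worker count, and function names are assumptions, not PowerCenter internals.

```python
# Sketch of session partitioning: split a large dataset into smaller
# subsets and process each partition concurrently, then merge results.
from concurrent.futures import ThreadPoolExecutor

def transform(row):
    # stand-in for the transformation logic applied to each row
    return row * 2

def process_partition(partition):
    return [transform(r) for r in partition]

data = list(range(100))
# round-robin split into 4 partitions
partitions = [data[i::4] for i in range(4)]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(process_partition, partitions))
# merge the per-partition outputs back into one result set
merged = sorted(row for part in results for row in part)
```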
There are different types of algorithms that can be used to implement parallel processing. These are as follows:
Best mapping development practices are as follows -
The different mapping design tips are as follows:
Converting data from a source to a target is implemented by a session, and the session manager executes the session. Sessions can be combined into batches and executed in two ways: serially or in parallel.
Any number of sessions can be grouped in one batch. However, for an easier migration process, it is better to keep the number of sessions per batch small.
A mapping variable refers to a value that changes during the session's execution. When the value doesn't change during the session, it is called a mapping parameter. The mapping procedure explains the use of these mapping parameters; values are best assigned to mapping parameters before the session begins.
These are the three most important features of complex mapping.
The debugging option helps in judging whether the mapping is correct or not without really connecting to the session.
OLAP, or On-Line Analytical Processing, is the method with which multi-dimensional analysis is performed.
The different types of OLAP are
A surrogate key is a replacement for the natural primary key. It is a system-generated identity that is unique for each row of data.
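The surrogate-versus-natural-key idea can be sketched in Python. This is an illustrative model; the sequence, dimension table, and names are assumptions (in PowerCenter, a Sequence Generator transformation typically plays this role).

```python
# Sketch: a surrogate key is system-generated and stands in for the
# natural (business) key, which remains stored alongside it.
from itertools import count

surrogate_seq = count(start=1)   # stand-in for a sequence generator
dimension = {}                   # natural key -> dimension row

def load_customer(natural_key, name):
    # assign a new surrogate key only for unseen natural keys
    if natural_key not in dimension:
        dimension[natural_key] = {"sk": next(surrogate_seq), "name": name}
    return dimension[natural_key]["sk"]

sk1 = load_customer("CUST-001", "Alice")
sk2 = load_customer("CUST-002", "Bob")
sk3 = load_customer("CUST-001", "Alice")  # same natural key -> same surrogate
```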
When the PowerCenter Server transfers data from the source to the target, it is guided by a set of instructions known as the session task.
A command task allows one or more shell commands (or DOS commands on Windows) to run while the workflow is running.
The type of command task that allows the shell commands to run anywhere during the workflow is known as the standalone task.
The workflow includes a set of instructions that allows the server to communicate for the implementation of tasks.
There are four types of tools:
The target load order depends on the source qualifiers in a mapping. Generally, multiple source qualifiers are grouped into a target load order group.
Informatica PowerCenter consists of the following metadata:
Two repositories are as follows
Mainly, Extraction, Transformation, and Loading (ETL) of the above-mentioned metadata are performed through the PowerCenter Repository.
When the server faces rejection from the update strategy transformation, it rejects the files. The database containing the information and data can also get disrupted. This is a rare scenario.
Procedure:
1. Design a mapping with filter, rank, and expression T/R.
2. Create a session --> double-click the session and select the Properties tab.
Attribute | Value |
Pushdown optimization | Full |
3. Select the mapping tab --> set reader, writer connection with target load type normal.
4. Click apply --> click ok --> save the session.
5. Create & start the workflow.
Pushdown Optimization Viewer:-
Double click the session --> Select the mapping tab from the left window --> select pushdown optimization.
The following are the differences between a copy and a shortcut:
Copy | Shortcut |
Copy an object to another folder | Dynamic-link to an object in the folder |
Changes to the original object don’t reflect | Dynamically reflects the changes to an original object |
Duplicates the space | Preserves the space |
Created from unshared folders | Created from shared folders |
PMCMD is a command-line client program that communicates with the Integration Service to perform some of the tasks that can also be performed using the Workflow Manager client.
Using PMCMD we can perform the following tasks:
Starting workflow.
Scheduling workflow.
The PMCMD can be operated in two different modes:
Interactive Mode.
Command-line Mode.
A schedule automates running the workflow at a given date and time.
There are 2 types of schedulers:
Reusable scheduler:
A reusable scheduler can be assigned to multiple workflows.
Non-reusable scheduler:
The following are the 3rd party schedulers:
In production environments, scheduling is almost always handled by such schedulers.
New Lookup Row | Description |
0 | The integration service does not update or insert the row in the cache |
1 | The integration service inserts the row into the cache |
2 | The integration service updates the row in the cache |
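The NewLookupRow values in the table above can be sketched in Python. This is a simplified model of a dynamic lookup cache, not the actual PowerCenter implementation; the cache contents are illustrative.

```python
# Sketch of how a dynamic lookup cache assigns the NewLookupRow flag:
# 0 = no insert/update, 1 = row inserted, 2 = row updated.

cache = {"K1": "old-value"}  # cache state from earlier rows

def dynamic_lookup(key, value):
    if key not in cache:
        cache[key] = value
        return 1              # integration service inserts the row
    if cache[key] != value:
        cache[key] = value
        return 2              # integration service updates the row
    return 0                  # no insert or update

flags = [dynamic_lookup("K1", "old-value"),  # unchanged -> 0
         dynamic_lookup("K2", "new"),        # insert    -> 1
         dynamic_lookup("K1", "changed")]    # update    -> 2
```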
The transformation language provides two comment specifiers, -- and //, to let you insert comments in an expression.
The PowerCenter Integration Service ignores all text on a line that follows either of these comment specifiers.
The following are the differences between variable port and Mapping variable:
Variable Port | Mapping Variable |
Local to the T/R | Local to the Mapping |
Values are non-persistent | Values are persistent |
Can’t be used with SQL override | Can be used with SQL override |
Rank can build two types of cache memory, but Sorter always builds only one. The cache is also called a buffer.
Design mapping applications that first load the data into the dimension tables. And then load the data into the fact table.
In a snowflake schema, a large denormalized dimension table is split into multiple normalized dimension tables.
Advantage:
Select Query performance increases.
Disadvantage:
Maintenance cost increases due to a greater number of tables.
F10 --> Next Instance
Lookup T/R
Note: this wait is not available in every task; it is available only in the Event Wait task.
(a) Reusable Worklet:
(b) Non-Reusable Worklet:
This is the type commonly used in real-time projects.
Relative time: the Timer task can start the timer from the start time of the Timer task, the start time of the workflow or worklet, or the start time of the parent workflow.
The following are the differences between Filter T/R and Router T/R:
Filter T/R | Router T/R |
Single condition | Multiple conditions |
Single Target | Multiple Targets |
Rejected rows cannot be captured | Default group captures rejected rows. |
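The Filter-versus-Router contrast in the table above can be sketched in Python. The group names, thresholds, and sample rows are illustrative assumptions, not Informatica defaults.

```python
# Sketch: a Filter applies one condition and drops non-matching rows;
# a Router applies several conditions and keeps non-matching rows in
# a default group instead of losing them.

rows = [{"amount": 50}, {"amount": 500}, {"amount": 5000}]

# Filter T/R: single condition, rejected rows are lost
filtered = [r for r in rows if r["amount"] > 100]

# Router T/R: multiple conditions plus a DEFAULT group for the rest
groups = {"HIGH": [], "MEDIUM": [], "DEFAULT": []}
for r in rows:
    if r["amount"] >= 1000:
        groups["HIGH"].append(r)
    elif r["amount"] >= 100:
        groups["MEDIUM"].append(r)
    else:
        groups["DEFAULT"].append(r)  # rejected rows are still captured
```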
This is a type of active T/R which allows you to find either the top performers or the bottom performers.
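The top-or-bottom behaviour can be sketched in Python. This is an illustrative simplification (a real Rank T/R also supports group-by ports); the sample data is assumed.

```python
# Sketch of a Rank T/R: return the top or bottom N rows by a rank port.
import heapq

sales = [("A", 120), ("B", 340), ("C", 90), ("D", 500), ("E", 210)]

top2 = heapq.nlargest(2, sales, key=lambda r: r[1])      # top performers
bottom2 = heapq.nsmallest(2, sales, key=lambda r: r[1])  # bottom performers
```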
Rank T/R is created with the following types of the port:
Mapping is nothing but an ETL Application.
It is a GUI-based client application that allows users to monitor ETL objects running on an ETL server.
Collect runtime statistics such as:
3. Complete information can be accessed from the workflow monitor.
4. For every session, one log file is created.
Clients use various applications (mainframes and Oracle Apps environments often use the Tivoli scheduling tool); integrating different applications and scheduling them is made very easy by using third-party schedulers.
It is a GUI-based client that allows you to create the following ETL objects.
Session:
Workflow:
Workflow is a set of instructions that tells how to run the session tasks and when to run the session tasks.
A data integration tool that combines data from multiple OLTP source systems, transforms it into a homogeneous format, and delivers it throughout the enterprise at any speed.
It is a GUI-based ETL product from Informatica corporation which was founded in 1993 in Redwood City, California.
Informatica Corporation offers many products, and Informatica PowerCenter is one of them.
Using Informatica PowerCenter, we perform extraction, transformation, and loading.
Data Modeling:
Dimensional modeling consists of the following types of schemas designed for Datawarehouse:
Star Schema.
Snowflake Schema.
Gallery Schema.
A schema is a data model that consists of one or more tables.
Rank transformation can return the strings at the top or the bottom of a session sort order. When the Integration Service runs in Unicode mode, it sorts character data in the session using the selected sort order associated with the Code Page of IS which may be French, German, etc. When the Integration Service runs in ASCII mode, it ignores this setting and uses a binary sort order to sort character data.
The Sorter is an active transformation because, when configured to output distinct rows, it discards duplicates of the sort key and consequently changes the number of rows.
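The distinct-output behaviour can be sketched in Python; the sample data is illustrative.

```python
# Sketch: sorting alone preserves the row count (passive behaviour),
# but the "distinct output rows" option removes duplicates and changes
# the number of rows, which is why Sorter is classed as active.
rows = [3, 1, 2, 3, 1]

sorted_rows = sorted(rows)         # 5 rows in, 5 rows out
distinct_rows = sorted(set(rows))  # 5 rows in, 3 rows out
```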
The following are the types of transformations available in Informatica:
Transformations are classified by whether they change the number of rows: active transformations can change the number of rows passed through them, while passive transformations keep the number of input and output rows the same.
The output files created by the Informatica server at runtime are listed below:
The following are the differences between static cache and dynamic cache:
Static Cache | Dynamic Cache |
A static cache is a default cache, and we cannot insert or update the caches | In the dynamic cache, we can insert or update data into the lookup and pass the data to the target. |
Handles multiple matches | Doesn't handle multiple matches |
Suitable for relational and flat file lookup types | Suitable for relational lookup types |
Relational operators such as =, !=, >=, and <= can be used | Only the = operator is used |
Used for both connected and unconnected lookup transformation | Used for only connected lookup transformation |
Router transformation contains the following types of transformations:
Input group
Output group:
Further, the output group contains two types:
The below table will detail the differences between the stop and abort options in a workflow monitor:
Stop | Abort |
The stop option is used for executing the session task and allows another task to run. | The abort option turns off the task completely that is running. |
While using this option, the integration service stops reading data from the source file | Abort waits for the services to complete, and only then do actions take place |
Processes data to the targets or sources | Has a time-out period of 60 seconds |
Stops sharing resources from the processes | Stops the process, and the session gets terminated |
A reusable data object created in the Mapplet Designer is called a Mapplet. It includes a collection of transformations that allows you to reuse transformation logic in different mappings.
The following are the difference between Mapping and Mapplet:
Mapping | Mapplet |
A set of source, target, and transformation | Collection of transformations |
Not reusable; developed with different transformations | Reusable |
The below-listed transformations are used for SQL override:
The differences between SQL override and Lookup override are listed below:
SQL Override | Lookup Override |
Limits the no of rows that enter the mapping pipeline | Limits the no of lookup rows for avoiding table scan and saves lookup time |
Manually enters the query | By default, the "Order By" clause |
Supports any kind of join by writing the query | Supports only Non-Equi joins |
A shared cache is a static lookup cache shared by various lookup transformations in the mapping. Using a shared cache reduces the amount of time needed to build the cache.
Code page compatibility is used to get accurate data movement when the Informatica Server runs in Unicode data movement mode. There won't be any data loss if the code pages are identical. One code page can be a superset or subset of another.
In Informatica, expression transformation is a passive transformation that allows performing non-aggregate calculations on the source data. It means you can perform calculations on a single row. Using this transformation, you can test data before passing it to the target table or another transformation through conditional statements.
Aggregator Transformation in Informatica is an active transformation that allows you to perform calculations like sum, average, etc. We can perform aggregate operations over a group of rows, and it stores all the estimates and records in a temporary placeholder called aggregator cache memory.
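The aggregator-cache idea can be sketched in Python. This is a simplified, illustrative model; the sample rows and the dictionary-as-cache are assumptions, not PowerCenter internals.

```python
# Sketch of an Aggregator T/R: accumulate per-group results in an
# in-memory "aggregator cache", then emit one row per group.
from collections import defaultdict

rows = [("books", 10), ("books", 15), ("toys", 7)]

agg_cache = defaultdict(float)   # stand-in for the aggregator cache
for dept, amount in rows:
    agg_cache[dept] += amount    # running SUM(amount) grouped by dept

result = dict(agg_cache)         # one output row per group
```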
Filter transformation in Informatica is an active transformation that changes the number of rows passed through it. It allows the rows to pass through it based on specified filter conditions and drops rows that don't meet the requirement. The data can be filtered based on one or more terms.
In Informatica, the Union transformation is an active transformation because it combines two or more data streams into one. The total number of rows passing into the Union equals the number of rows passing out of it, and the sequence of rows from each input stream is preserved in the output stream, but the position of the rows cannot be preserved.
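The merge behaviour can be sketched in Python; the sample streams are illustrative.

```python
# Sketch of a Union T/R: merge multiple input streams into one output
# stream. Row counts are preserved and no duplicates are removed
# (unlike SQL's UNION, this behaves like UNION ALL).
from itertools import chain

stream_a = [{"id": 1}, {"id": 2}]
stream_b = [{"id": 3}]

union_out = list(chain(stream_a, stream_b))
```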
Incremental aggregation usually gets created when a session gets created through the execution of an application. This aggregation allows you to capture changes in the source data for aggregating calculations in a session. If the source changes incrementally, you can capture those changes and configure the session to process them. It will allow you to update the target incrementally, rather than deleting the previous load data and recalculating similar data each time you run the session.
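The incremental idea above can be sketched in Python. The stored aggregates and new rows are illustrative; the point is that only the delta is processed, not the full history.

```python
# Sketch of incremental aggregation: apply only the new source rows to
# the aggregates saved from the previous run, instead of recomputing
# everything from scratch.

stored_aggregates = {"books": 100.0}   # result persisted from the last run

def incremental_run(new_rows, aggregates):
    for key, amount in new_rows:
        aggregates[key] = aggregates.get(key, 0.0) + amount
    return aggregates

# only the rows that changed since the last run are processed
incremental_run([("books", 25.0), ("toys", 5.0)], stored_aggregates)
```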
Reusable transformations are used numerous times in mappings. A reusable transformation is stored as metadata, separate from the mappings that use it. If any changes are made to the reusable transformation, all the mappings where it is used are affected.
The update strategy is an active and connected transformation that allows you to insert, delete, or update records in the target table. It can also reject rows so that they do not reach the target table.
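The flagging behaviour can be sketched in Python using the standard update-strategy constant values (DD_INSERT=0, DD_UPDATE=1, DD_DELETE=2, DD_REJECT=3). The row fields and decision rules below are illustrative assumptions.

```python
# Sketch of an Update Strategy T/R: each row is flagged for insert,
# update, delete, or reject before reaching the target.
DD_INSERT, DD_UPDATE, DD_DELETE, DD_REJECT = 0, 1, 2, 3

def flag_row(row):
    # hypothetical flagging rules for illustration only
    if row.get("deleted"):
        return DD_DELETE
    if row.get("exists_in_target"):
        return DD_UPDATE
    if row.get("valid", True):
        return DD_INSERT
    return DD_REJECT          # rejected rows never reach the target
```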
Both Informatica and Datastage are powerful ETL tools. Still, the significant difference between both is Informatica forces you to organize in a step-by-step process. In contrast, Datastage provides flexibility in dragging and dropping objects based on logic flow.
Informatica | Datastage |
Dynamic partitioning | Static partitioning |
Supports flat-file lookups | Supports hash files, lookup file sets, etc. |
It has a service-oriented architecture | It has a client-server architecture |
Step by step data integration solution | Project-based integration solutions |
Transaction Control in Informatica is an active and connected transformation that allows committing or rolling back transactions during mapping execution. A transaction is a collection of rows bound by commit or rollback rows, and it is defined based on variations in the input rows. Commit and rollback operations ensure data availability.
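The commit-on-condition idea can be sketched in Python. This is a simplified model: the every-3-rows condition and the list-based "commit" stand in for a real condition evaluating to TC_COMMIT_AFTER.

```python
# Sketch of a Transaction Control T/R: rows are buffered and a commit
# is issued whenever a user-defined condition is met (here: 3 rows).

committed_batches = []
buffer = []

def process_row(row, commit_every=3):
    buffer.append(row)
    if len(buffer) == commit_every:             # commit condition met
        committed_batches.append(list(buffer))  # like TC_COMMIT_AFTER
        buffer.clear()

for r in range(7):
    process_row(r)
# the last row stays uncommitted in the buffer until end of data
```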
The built-in variables available in this transformation are:
Informatica PowerCenter | Informatica PowerMart |
Informatica PowerCenter processes high volumes of data | Informatica PowerMart processes small volumes of data |
Supports data retrieval from ERP sources like PeopleSoft, SAP, etc. | Doesn't support ERP source data retrieval |
Converts local repositories to global | Doesn't convert local repositories to global |
Supports both global and local repositories | Supports only local repositories |
Mindmajix offers training for many other Informatica courses depending on your requirement:
Informatica Analyst | Informatica PIM |
Informatica SRM | Informatica MDM |
Informatica Data Quality | Informatica ILM |
Informatica Big Data Edition | Informatica Multi-Domain MDM |
Madhuri is a Senior Content Creator at MindMajix. She has written about a range of topics across various technologies, including Splunk, TensorFlow, Selenium, and CEH. She spends most of her time researching technology and startups. Connect with her via LinkedIn and Twitter.