Informatica Vs Talend
|Informatica|Talend|
|---|---|
|Provides only a commercial data integration edition|Available in both open-source and commercial editions|
|Founded in 1993|Founded in 2006|
|Licensing charges apply per customer|The open-source edition is free|
|Stores generated metadata in an RDBMS repository|Runs on any Java-supported platform|
|Code integration is less effective|Code customization is effective|
|No prior programming knowledge is required|Knowledge of Java is preferred|
|Automated deployment is not up to the mark|Deployment is easy|
|Transformations are reusable|Components are reusable|
If you want to enhance your career and become a professional in Informatica, then visit Mindmajix, a global online training platform: "Informatica Training". This course will help you achieve excellence in this domain.
For your better understanding, we have segregated this Informatica interview questions into two different categories.
Informatica Interview Questions
If you're looking for Informatica interview questions for experienced candidates or freshers, you are in the right place. There are a lot of opportunities from many reputed companies in the world. According to research, Informatica has a market share of about 29.4%. So, you still have the opportunity to move ahead in your career in Informatica development. Mindmajix offers advanced Informatica interview questions (2020) that help you crack your interview and acquire your dream career as an Informatica developer.
Q1) What is the meaning of Enterprise Data Warehousing?
Ans: Enterprise Data Warehousing means creating a single point of access for all of an organization's data. Because the server is linked to this single source, the data can be accessed and viewed globally through it. It also enables periodic analysis of that source.
Q2) What is the meaning of Lookup transformation?
Ans: A Lookup transformation is used to look up relevant data in a source qualifier, a target, or another source. Many kinds of objects can be searched with a Lookup transformation, for example flat files, relational tables, synonyms, and views. A Lookup transformation can be active or passive, and either connected or unconnected. Multiple Lookup transformations can be used in a single mapping, where the lookup condition is compared with the lookup input port values. A Lookup transformation is created with the following types of ports:
1. Input port
2. Output port
3. Lookup ports
4. Return port
Q3) What are the points of differences between connected lookup and unconnected lookup?
Ans: A connected lookup takes its input directly from other transformations and participates in the data flow. An unconnected lookup is just the opposite: instead of taking input from other transformations, it simply receives values from the result of a :LKP expression. A connected lookup cache can be either dynamic or static, but an unconnected lookup cache cannot be dynamic. The former can return values from multiple output ports, while the latter returns from only one output port. User-defined default values are supported in a connected lookup but not in an unconnected lookup.
Q4) How many input parameters can be present in an unconnected lookup?
Ans: Any number of input parameters can be included in an unconnected lookup. However, no matter how many parameters are passed in, the return value is always exactly one. For example, parameters such as column 1, column 2, column 3, and column 4 can be passed to an unconnected lookup, but there is still only one return value.
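An unconnected lookup is invoked from an expression using the `:LKP` reference qualifier; however many lookup ports it takes, the calling expression receives only the single value of the return port. A sketch (the lookup name and port names here are illustrative, not from the source):

```
IIF(ISNULL(CUST_NAME), :LKP.lkp_customer(CUST_ID, REGION), CUST_NAME)
```

Here two input values are passed to the lookup, but only one value comes back.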
Q5) How many lookup caches are available?
Ans: Informatica lookup caches can be static or dynamic, and persistent or non-persistent. The caches are:
1. Static Cache
2. Dynamic Cache
3. Persistent Cache
4. Shared Cache
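The difference between a static and a dynamic cache can be sketched in Python (a hypothetical illustration, not Informatica internals): a static cache is built once before the session and only read, while a dynamic cache inserts and updates rows as the session runs, reporting each action with a NewLookupRow-style flag.

```python
class StaticLookupCache:
    def __init__(self, rows):
        self.cache = dict(rows)  # built once, before the session starts

    def lookup(self, key):
        return self.cache.get(key)  # never modified during the run


class DynamicLookupCache(StaticLookupCache):
    def lookup_and_sync(self, key, value):
        """Return a NewLookupRow-style flag: 1 = inserted, 2 = updated, 0 = unchanged."""
        if key not in self.cache:
            self.cache[key] = value
            return 1
        if self.cache[key] != value:
            self.cache[key] = value
            return 2
        return 0


dyn = DynamicLookupCache([(101, "Alice")])
print(dyn.lookup_and_sync(102, "Bob"))     # 1: row inserted into the cache
print(dyn.lookup_and_sync(101, "Alicia"))  # 2: row updated in the cache
print(dyn.lookup_and_sync(102, "Bob"))     # 0: cache left unchanged
```

The flag values mirror the NewLookupRow port described later for the dynamic lookup cache.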
Q6) What is the difference between a data warehouse, a data mart and a database?
Ans: A data warehouse consists of many different kinds of data. A database also holds data, but the data in a database is smaller in scope than in a data warehouse. A data mart holds the subset of data needed by a particular domain, for example separate data for different sections of an organization such as sales, marketing, and finance.
Q7) What is a domain?
Ans: A domain is the primary organizational unit that brings together all interlinked and interconnected nodes and relationships, so that they are administered from a single point.
Q8) Cite the differences between a powerhouse and repository server?
Ans: The powerhouse server is the main governing server that integrates various processes among the different factors of the server's database repository. The repository server, on the other hand, ensures repository integrity, uniformity, and consistency.
Q9) In Informatica, how many numbers of repositories are possible to be made?
Ans: The total number of repositories that can be created in Informatica mainly depends on the total number of ports in Informatica.
Q10) What are the benefits of a partitioned session?
Ans: A session is partitioned in order to improve the efficiency and throughput of the server. Partitioning lets the session run its implementation sequences in parallel.
Informatica Scenario Based Interview Questions
Q11) Define parallel processing?
Ans: Parallel processing further improves performance on capable hardware. In Informatica, parallel processing is achieved by partitioning sessions. The partitioning option of PowerCenter increases performance through parallel data processing: a large data set is divided into smaller subsets that are processed in parallel to achieve better session performance.
Q12) What are the different types of methods for the implementation of parallel processing in Informatica?
Ans: Different algorithms can be used to implement parallel processing. These are as follows:
Database Partitioning - The Integration Service queries the database system for table partition information and reads the partitioned data from the corresponding nodes in the database.
Round-Robin Partitioning - The Integration Service distributes data evenly across all partitions, so each partition processes roughly the same number of rows.
Hash Auto-Keys Partitioning - The PowerCenter Server uses a hash function over the grouped or sorted ports to distribute rows across partitions. The Integration Service uses these grouped ports as a compound partition key.
Hash User-Keys Partitioning - The same as hash auto-keys partitioning, except that rows are grouped based on a user-defined partition key. The ports that define the key are chosen individually.
Key Range Partitioning - One or more ports can be combined to form a compound partition key for a specific source. Each partition is assigned a range, and the Integration Service passes data to a partition based on the specified range of key values.
Pass-Through Partitioning - Data is passed from one partition point to the next without being redistributed across partitions.
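Two of the methods above can be sketched in Python (a hypothetical illustration, not the Integration Service's actual implementation): round-robin spreads rows evenly, while hash-key partitioning guarantees that rows sharing the same key always land in the same partition.

```python
from itertools import cycle

def round_robin(rows, n):
    # deal rows out one at a time, like cards, so partitions stay balanced
    parts = [[] for _ in range(n)]
    for part, row in zip(cycle(parts), rows):
        part.append(row)
    return parts

def hash_key(rows, n, key):
    # same key value -> same hash -> same partition, which is what
    # hash auto-keys / user-keys partitioning guarantees
    parts = [[] for _ in range(n)]
    for row in rows:
        parts[hash(row[key]) % n].append(row)
    return parts

rows = [{"dept": d, "id": i} for i, d in enumerate(["HR", "IT", "HR", "IT"])]
print([len(p) for p in round_robin(rows, 2)])  # -> [2, 2]
```

With `hash_key(rows, 2, "dept")`, every "HR" row ends up in one partition and every "IT" row in the same partition as its peers, though which partition gets which group varies.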
Q13) What are the best mapping development practices?
Ans: Best mapping development practices are as follows -
Source Qualifier - Extract only the necessary data, leaving aside the unnecessary, by limiting columns and rows. Shortcuts are commonly used in the source qualifier. The default query options (User Defined Join, Filter, etc.) are preferable to a source qualifier query override, since an override does not always allow partitioning.
Expressions - Use local variables to limit the number of repeated, expensive calculations. Avoid data-type conversions and reduce calls to external code. Operators are preferable to functions, and numeric operations are faster than string operations.
Aggregator - Filter the data before the aggregation step. It is also important to use sorted input.
Filter - Place the filter transformation as close to the source as possible. Sometimes multiple filters are needed; these can often be replaced by a single router.
Joiner - Join the data in the Source Qualifier wherever possible, and avoid outer joins. The source with fewer rows should be used as the master source.
Lookup - Replace large lookup tables with joins where possible, and review the database; add database indexes to the looked-up columns. Lookups should return only the ports that are actually needed.
Q14) What are the different mapping design tips for Informatica?
Ans: The different mapping design tips are as follows -
Standards - The design should follow a good standard. Following a standard consistently pays off in long-running projects. Standards cover naming conventions, descriptions, environmental settings, documentation, parameter files, etc.
Reusability - Using reusable transformations is the best way to react to potential changes quickly. Informatica components such as mapplets and worklets are best suited for this.
Scalability - It is important to design for scale: mappings should be developed with the correct production data volumes in mind.
Simplicity - It is always better to create several simple mappings instead of one complex mapping. The goal is a simple, logical design.
Modularity - This includes using modular design techniques and supporting reprocessing.
----- Related Article: Mapplet In Informatica -----
Q15) What is the meaning of the word ‘session’? Explain how to combine executions with the assistance of batches?
Ans: A session is a set of instructions that tells the Integration Service how to convert data from a source to a target. Usually the session manager executes the session. To combine session executions, batches are used in two ways: serially or in parallel.
Q16) How many sessions can be grouped in one batch?
Ans: Any number of sessions can be grouped in one batch; however, for an easier migration process, it is better to keep the number in one batch small.
Q17) Differentiate between mapping parameter and mapping variable?
Ans: A mapping variable refers to a value that changes during the session's execution, whereas a value that does not change during the session is a mapping parameter. The mapping procedure defines the mapping parameters and their usage. Values are best assigned to mapping parameters before the session begins.
Q18) What are the features of complex mapping?
1. Difficult requirements
2. Numerous transformations
3. Complex logic regarding business
These are the three most important features of complex mapping.
Q19) Which option helps in finding whether the mapping is correct or not?
Ans: The debugging option helps in judging whether the mapping is correct or not, without actually running a full session.
Q20) What do you mean by OLAP?
Ans: OLAP, or On-Line Analytical Processing, is the method by which multi-dimensional analysis is performed.
Q21) Mention the different types of OLAP?
Ans: The different types of OLAP are ROLAP, MOLAP, and HOLAP.
Q22) What is the meaning of surrogate key?
Ans: A surrogate key is a system-generated substitute for the primary key, which is natural in origin. It acts as a distinct identifier for each row, independent of the data the row contains.
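The idea can be sketched in Python (a hypothetical illustration; the `Dimension` class and key names are invented for this example): the surrogate key is generated by the system in sequence, so it stays stable no matter what the natural business key looks like.

```python
from itertools import count

class Dimension:
    def __init__(self):
        self._next_key = count(1)      # surrogate keys: 1, 2, 3, ...
        self.by_natural_key = {}       # natural (business) key -> row

    def upsert(self, natural_key, attrs):
        # assign a new surrogate key only the first time a natural key appears
        row = self.by_natural_key.get(natural_key)
        if row is None:
            row = {"sk": next(self._next_key), "nk": natural_key, **attrs}
            self.by_natural_key[natural_key] = row
        return row["sk"]

dim = Dimension()
print(dim.upsert("CUST-001", {"name": "Alice"}))  # -> 1
print(dim.upsert("CUST-002", {"name": "Bob"}))    # -> 2
print(dim.upsert("CUST-001", {"name": "Alice"}))  # existing key -> 1 again
```

Fact tables then reference the compact integer `sk` rather than the natural key.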
Q23) What is a session task?
Ans: When the PowerCenter Server moves data from the source to the target, it is guided by a set of instructions known as the session task.
Q24) What is the meaning of command task?
Ans: A command task allows one or more shell commands (or DOS commands in Windows) to run while the workflow is running.
Q25) What is the meaning of standalone command task?
Ans: A standalone command task is a command task that can run shell commands anywhere in the workflow.
Q26) Define workflow?
Ans: A workflow is a set of instructions that tells the server how and when to execute tasks.
Q27) How many tools are there in workflow manager?
Ans: There are four types of tools -
1. Task Designer
2. Task Developer
3. Workflow Designer
4. Worklet Designer
Q28) Define target load order?
Ans: The target load order depends on the source qualifiers in a mapping. Generally, multiple source qualifiers are linked together in a target load order group.
Q29) Define Power Centre repository of Informatica?
Ans: The Informatica PowerCenter repository stores metadata such as:
1. Source definitions
2. Target definitions
3. Sessions and session logs
4. ODBC connections
The two types of repositories are as follows:
1. Global Repositories
2. Local Repositories
The Extraction, Transformation, and Loading (ETL) of the above-mentioned metadata are mainly performed through the PowerCenter repository.
----- For More Info: Informatica PowerCenter - ETL Tools -----
Q30) Name a scenario in which the Informatica server rejects files?
Ans: The server rejects files when it encounters a rejection in an update strategy transformation. The database holding the information can also become disrupted in such cases. This is a rare scenario.
Informatica Advanced Interview Questions
Q31) How to use Normalizer Transformation in Informatica?
- This is an active T/R that reads data from COBOL files and VSAM (Virtual Storage Access Method) sources.
- The Normalizer T/R acts like a Source Qualifier T/R when reading data from COBOL files.
- A Normalizer T/R converts each input record into multiple output records. This is known as data pivoting.
Q32) What are the Limitations of Pushdown Optimization?
- A Rank T/R cannot be pushed down
- A Transaction Control T/R cannot be pushed down
- Sorted aggregation cannot be pushed down
To configure and test a session:
1. Design a mapping with Filter, Rank, and Expression T/Rs.
2. Create a session --> double-click the session and select the Properties tab.
3. Select the Mapping tab --> set the reader and writer connections, with target load type Normal.
4. Click Apply --> click OK --> save the session.
5. Create and start the workflow.
Pushdown Optimization Viewer:-
Double-click the session --> select the Mapping tab from the left window --> select Pushdown Optimization.
Q33) Differences between Copy and Shortcut?
Copy Vs Shortcut
|Copy|Shortcut|
|---|---|
|Copies an object to another folder|Creates a dynamic link to an object in a folder|
|Changes to the original object are not reflected|Dynamically reflects changes to the original object|
|Duplicates the space|Preserves the space|
|Created from unshared folders|Created from shared folders|
Q34) How to use the pmcmd utility command?
Ans: pmcmd is a command-line client program that communicates with the Integration Service to perform some of the tasks that can also be performed using the Workflow Manager client.
Using pmcmd we can perform tasks such as starting, stopping, and aborting workflows, scheduling and unscheduling workflows, and fetching service details.
pmcmd can be operated in two different modes:
1. Command-line mode
2. Interactive mode
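In command-line mode, each pmcmd call logs in, runs one command, and exits with a status code. A typical invocation looks like the following sketch (the service, domain, user, password, folder, and workflow names are placeholders, not values from this article):

```
pmcmd startworkflow -sv IntSvc -d Domain_Dev -u Administrator -p secret -f SalesFolder wf_load_sales
```

In interactive mode, you start pmcmd once, connect to the service, and then issue multiple commands at the pmcmd prompt.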
Q35) Scheduling a Workflow?
1. A schedule automates running the workflow at a given date and time.
2. There are two types of schedulers:
(i) Reusable scheduler
(ii) Non Reusable scheduler
(i) Reusable scheduler:-
A reusable scheduler can be assigned to multiple workflows.
(ii) Non Reusable scheduler:-
- A non-reusable scheduler is created specific to a workflow.
- A non-reusable scheduler can be converted into a reusable scheduler.
The following are common third-party schedulers:
1. Cron (UNIX-based scheduling)
2. Control-M
3. WLM (Workload Manager)
- In production environments, almost all workflows are run through a scheduler.
- Without a schedule, we run the workflow manually; running a workflow through a schedule is called auto running.

Q36) What is Dynamic Lookup Cache?
- The cache is updated dynamically when performing a lookup on the target table.
- The dynamic lookup T/R keeps the in-memory image of the target lookup table synchronized with the physical table in the database.
- A dynamic lookup cache is operated only in connected mode (connected lookup).
- A dynamic lookup cache supports only equality conditions (= conditions).
|NewLookupRow|Description|
|---|---|
|0|The Integration Service does not update or insert the row in the cache|
|1|The Integration Service inserts the row into the cache|
|2|The Integration Service updates the row in the cache|
Q37) What are the comment specifiers in the Informatica transformation language?
Ans: The transformation language provides two comment specifiers to let you insert comments in expressions:
- Two dashes ( -- )
- Two slashes ( // )
The PowerCenter Integration Service ignores all text on a line after either of these comment specifiers.
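For example, in an expression (the port names here are hypothetical):

```
-- gross amount before discounts
PRICE * QTY // tax is applied downstream
```

Everything after `--` or `//` on a line is treated as a comment and ignored when the expression is evaluated.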
Q38) Differences between variable port and Mapping variable?
Variable Port Vs Mapping Variable
|Variable Port|Mapping Variable|
|---|---|
|Local to the T/R|Local to the mapping|
|Values are non-persistent|Values are persistent|
|Can't be used with SQL override|Can be used with SQL override|
Mapping variables are used for incremental extraction.
A mapping variable's value changes automatically from run to run; there is no need to change it manually.
A mapping parameter's value (for example, the date and time) must be changed manually.
Q39) Which T/R builds only a single cache memory?
Ans: Rank builds two types of cache memory (index and data cache), but Sorter always builds only one cache.
A cache is also called a buffer.
Q40) What is XML Source Qualifier Transformation in Informatica?
- Reads data from XML files.
- An XML source definition is associated with an XML Source Qualifier.
- XML is a case-sensitive markup language.
- Files are saved with the extension .xml.
- XML files have a hierarchical (parent-child) file format.
- Files can be normalized or denormalized.
Q41) What is Load Order?
Ans: Design mapping applications so that the data is first loaded into the dimension tables, and then into the fact table.
- Load rule:- If all dimension table loads succeed, then load the data into the fact table.
- Load frequency:- The database is refreshed with daily, weekly, or monthly loads.
Q42) What is Snowflake Schema?
Ans: In a snowflake schema, a large denormalized dimension table is split into multiple normalized dimension tables.
SELECT query performance increases.
Maintenance cost increases due to the larger number of tables.
Q43) What is a standalone Email task?
- It can be used anywhere in the workflow, together with link conditions, to notify of the success or failure of prior tasks.
- It is visible in the flow diagram.
- Email variables can be defined with standalone Email tasks.
Q44) What is Mapping Debugger?
- The Debugger is a tool for identifying whether records are loaded from one T/R to another, and whether the correct data is loaded.
- When a session succeeds but records are not loaded, the Debugger tool should be used.
Q45) What is the functionality of F10 in informatica?
Ans: F10 --> Next Instance
Q46) Which T/R has a 'no cache' option?
Ans: The Lookup T/R.
Note:- 'Prevent wait' is not available in every task; it is available only in the Event Wait task.
- F5 --> Start Debugger.
- The Debugger is used to test whether the records are loaded, and whether the correct data is loaded.
- The Debugger can be used only to test a valid mapping, not an invalid mapping.
Q47) What is Worklet and types of worklets?
- A worklet is defined as a group of related tasks.
- There are 2 types of worklets:
- Reusable worklet
- Non-reusable worklet
- A worklet expands and executes its tasks inside the workflow.
- A workflow that contains a worklet is known as the parent workflow.
(a) Reusable Worklet:-
Created using worklet designer tool.
Can be assigned to Multiple workflows.
(b) Non-Reusable Worklet:-
Created using workflow designer tool.
Created Specific to workflow.
Q48) What is Relative Mode?
Ans: Relative mode is what we generally use in real time.
Relative time: the Timer task can start counting from the start time of the Timer task itself, from the start time of the workflow or worklet, or from the start time of the parent workflow.
- The Timer task is mainly used for scheduling within a workflow.
- Example: the workflow starts at 11 AM and the timer fires at a fixed 11:05 AM --> absolute mode.
- Whenever the workflow starts, the timer fires 5 minutes after it --> relative mode.
Q49) Difference between Filter and Router T/R?
Filter T/R Vs Router T/R
|Filter T/R|Router T/R|
|---|---|
|Single condition|Multiple conditions|
|Single target|Multiple targets|
|Rejected rows cannot be captured|The default group captures rejected rows|
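The behavioural difference can be sketched in Python (a hypothetical illustration, not Informatica code): a filter simply drops non-matching rows, while a router evaluates every group condition for each row and collects rows that match nothing in a default group.

```python
def filter_tr(rows, cond):
    # rows failing the condition are simply discarded
    return [r for r in rows if cond(r)]

def router_tr(rows, groups):
    out = {name: [] for name in groups}
    out["DEFAULT"] = []  # default group captures otherwise-rejected rows
    for r in rows:
        matched = False
        for name, cond in groups.items():
            if cond(r):          # a row can satisfy more than one group
                out[name].append(r)
                matched = True
        if not matched:
            out["DEFAULT"].append(r)
    return out

rows = [{"amt": 50}, {"amt": 500}, {"amt": 5000}]
routed = router_tr(rows, {"SMALL": lambda r: r["amt"] < 100,
                          "LARGE": lambda r: r["amt"] > 1000})
print({k: len(v) for k, v in routed.items()})  # {'SMALL': 1, 'LARGE': 1, 'DEFAULT': 1}
```

With `filter_tr(rows, lambda r: r["amt"] < 100)` only the first row survives; the other two are lost, which is exactly why a router is preferred when rejected rows matter.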
Q50) What is a Repository Manager?
Ans: The Repository Manager is a client tool used to:
- Create, edit, and delete folders.
- Assign users read, write, and execute permissions on folders.
- Back up and restore repository objects.
Q51) What is Rank Transformation in Informatica?
- This is a type of active T/R that allows you to find either the top performers or the bottom performers.
- Rank T/R is created with the following types of the port:
- i. Input Port (I)
- ii. Output Port (O)
- iii. Rank Port (R)
- iv. Variable Port (V)
Q52) What is meant by Informatica PowerCenter Architecture?
Ans: PowerCenter architecture consists of the following components:
- PowerCenter clients
- PowerCenter repository
- PowerCenter domain
- PowerCenter Repository Service (PCRS)
- PowerCenter Integration Service (PCIS)
- Informatica Administrator
A mapping is nothing but an ETL application.
Q53) What is Workflow Monitor?
Ans: It is a GUI-based client application that allows you to monitor ETL objects running on the ETL server.
It collects runtime statistics such as:
a. Number of records extracted
b. Number of records loaded
c. Number of records rejected
d. The session log
- Complete run information can be accessed from the Workflow Monitor.
- One log file is created for every session.
Q54) If Informatica has its own scheduler, why use a third-party scheduler?
Ans: A client typically runs various applications (mainframes, Oracle Apps, etc., often scheduled with tools such as Tivoli). Integrating those different applications and scheduling them together is much easier with a third-party scheduler.
Q55) What is Workflow Manager?
Ans: It is a GUI-based client that allows you to create the following ETL objects: sessions and workflows.
- A session is a task that executes a mapping.
- A session is created for each mapping.
- A session is created to provide runtime properties.
- A session is a set of instructions that tells the ETL server to move data from source to destination.
A workflow is a set of instructions that tells the server how and when to run the session tasks.
Q56) What is Informatica PowerCenter?
Ans: A data integration tool that combines data from multiple OLTP source systems, transforms the data into a homogeneous format, and delivers the data throughout the enterprise at any speed.
It is a GUI-based ETL product from Informatica Corporation, which was founded in 1993 in Redwood City, California.
Informatica Corporation has many products, including:
1. Informatica Analyzer
2. Lifecycle management
3. Master data management
Informatica PowerCenter is one of these products; using it we perform extraction, transformation, and loading.
Q57) What is a Dimensional Model?
Data Modeling:- The process of designing the database to fulfill the business requirements specification.
A data modeler (or database architect) designs the warehouse database using a GUI-based data modeling tool called ERwin.
ERwin is a data modeling tool from Computer Associates (CA).
A dimensional model consists of schema types designed for the data warehouse; a schema is a data model consisting of one or more tables.
Q58) How does Rank transformation handle string values?
Ans: The Rank transformation can return strings at the top or the bottom of the session sort order. When the Integration Service runs in Unicode mode, it sorts character data using the sort order associated with the Integration Service's code page (French, German, etc.). When the Integration Service runs in ASCII mode, it ignores this setting and uses a binary sort order to sort character data.
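The contrast can be sketched in Python (an analogy, not the Integration Service's actual code): a binary sort compares raw byte values, so all uppercase letters sort before any lowercase letter, while a locale-style sort compares letters linguistically.

```python
names = ["apple", "Banana", "cherry"]

# binary (ASCII-mode analogue): 'B' (66) sorts before 'a' (97)
binary = sorted(names)
print(binary)    # ['Banana', 'apple', 'cherry']

# case-insensitive comparison, closer to a linguistic (Unicode-mode) order
caseless = sorted(names, key=str.casefold)
print(caseless)  # ['apple', 'Banana', 'cherry']
```

This is why the same Rank over string ports can return different "top" rows depending on whether the service runs in ASCII or Unicode mode.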
Find Training Programs In your Nearest Locations:
Are you looking to get trained on Informatica? We have the right course designed according to your needs. Our expert trainers help you gain the essential knowledge required for the latest industry needs. Join our Informatica Certification Training program in your nearest city.
Informatica Training Chennai, Informatica Training Bangalore, Informatica Training Hyderabad, Informatica Training Mumbai, Informatica Training Kolkata, Informatica Training Noida, Informatica Training Gurgaon, Informatica Training Pune, Informatica Training Delhi, Informatica Training Dallas, Informatica Training New Jersey, Informatica Training Chicago, Informatica Training Houston.
These courses are equipped with Live Instructor-Led Training, Industry Use cases, and hands-on live projects. This training program will make you an expert in Informatica and help you to achieve your dream job. Additionally, you get access to Free Mock Interviews, Job and Certification Assistance by Certified Informatica Trainers.
List of Informatica Courses:
Mindmajix offers training for many other Informatica courses, depending on your requirements:
Other Informatica Courses:
- Informatica Analyst
- Informatica PIM
- Informatica SRM
- Informatica MDM
- Informatica Data Quality
- Informatica ILM
- Informatica Big Data Edition
- Informatica Multi Domain MDM