Oracle GoldenGate is widely used in enterprise environments, especially in industries with high data volumes and complex integration requirements. If you are applying for a job that involves Oracle GoldenGate, familiarizing yourself with frequently asked Oracle GoldenGate interview questions will increase your chances of success and demonstrate your ability to work with Oracle GoldenGate effectively.
If you're looking for Oracle GoldenGate Interview Questions & Answers for Experienced or Freshers, you are at the right place.
There are a lot of opportunities with many reputed companies in the world. According to research, Oracle GoldenGate has a market share of about 9.3%. So, you still have the opportunity to move ahead in your career in Oracle GoldenGate development.
Mindmajix offers advanced Oracle GoldenGate interview questions (2023) that help you crack your interview and acquire your dream career as an Oracle GoldenGate Developer.
We have categorized the Oracle GoldenGate interview questions into two levels:
Ans: GoldenGate supports the following topologies.
Ans: The replication configuration consists of the following processes.
Ans: GoldenGate supports both DML and DDL replication from the source to the target.
Ans: The following supplemental logging is required.
Ans: Integrated Capture (IC):
Ans: The following are the minimum required parameters that must be defined in the extract parameter file.
Ans: Only one Extract process can write to a given exttrail at a time, so you can’t configure multiple Extracts to write to the same exttrail.
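The Extract-to-trail association above is created in GGSCI. A minimal sketch (the group name `ext1` and trail path `./dirdat/aa` are example values, not from the source):

```
-- register an Extract against the transaction log, then bind a local trail to it
GGSCI> ADD EXTRACT ext1, TRANLOG, BEGIN NOW
GGSCI> ADD EXTTRAIL ./dirdat/aa, EXTRACT ext1
```

Note that while one trail accepts only one Extract, a single Extract can write to more than one trail.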
Ans: Oracle GoldenGate provides 3 types of encryption.
Ans: You can encrypt a password in OGG using
Ans: You can encrypt the password/data using AES with three different key sizes: AES128, AES192, and AES256.
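As a sketch of how a key size is chosen, the GGSCI ENCRYPT PASSWORD command accepts the AES cipher as an option (the password `oracle` and key name `mykey` are example values; `mykey` is assumed to be defined in the ENCKEYS file):

```
-- generate an encrypted password string using a 192-bit AES key
GGSCI> ENCRYPT PASSWORD oracle AES192 ENCRYPTKEY mykey
```

The command prints an encrypted string that can then be used in place of the clear-text password in parameter files.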
Ans: The following are some of the more interesting features of Oracle GoldenGate 12c:
Ans: You can install Oracle GoldenGate 12c in 2 ways:
Ans: The OGG credential store manages encrypted passwords and user IDs that are used to interact with the local database and associates them with an alias.
Instead of specifying the actual USERID and Password in a command or a parameter file, you can use an alias. The Credential Store is implemented as an auto-login wallet within the Oracle Credential Store Framework (CSF).
Ans: Steps to configure Oracle Credential Store are as follows:
By default, the credential store is located under the “dircrd” directory.
If you want to specify a different location, you can use the CREDENTIALSTORELOCATION parameter in the GLOBALS file.
Example: CREDENTIALSTORELOCATION /u01/app/oracle/OGG_PASSWD
Ans: ADD CREDENTIALSTORE
Example: GGSCI> ALTER CREDENTIALSTORE ADD USER GGS@orcl, PASSWORD oracle ALIAS extorcl DOMAIN OracleGoldenGate
GGSCI> INFO CREDENTIALSTORE
GGSCI> INFO CREDENTIALSTORE DOMAIN OracleGoldenGate
Ans: In OGG 12c you can encrypt data with the following 2 methods:
Ans: The database services required to support Oracle GoldenGate capture and apply must be enabled explicitly for an Oracle 11.2.0.4 database.
This is required for all modes of Extract and Replicat.
To enable Oracle GoldenGate, set the following database initialization parameter. All instances in Oracle RAC must have the same setting.
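The initialization parameter referred to above is ENABLE_GOLDENGATE_REPLICATION. A minimal sketch of setting it:

```
-- run as SYSDBA; all Oracle RAC instances must use the same value
SQL> ALTER SYSTEM SET ENABLE_GOLDENGATE_REPLICATION = TRUE SCOPE=BOTH;
```

Without this setting, Extract and Replicat raise errors when they try to use the GoldenGate-specific database services.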
Ans: In a Coordinated Mode Replicat operates as follows:
|Read these latest Oracle PL SQL Interview Questions and Answers that help you grab high-paying jobs|
Ans: The difference between classic mode and coordinated mode is that Replicat is multi-threaded in coordinated mode.
Within a single Replicat instance, multiple threads read the trail independently and apply transactions in parallel. Each thread handles all of the filtering, mapping, conversion, SQL construction, and error handling for its assigned workload.
A coordinator thread coordinates the transactions across threads to account for dependencies among the threads.
Ans: You can create the COORDINATED REPLICATE with the following OGG Command:
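A sketch of the command (the group name `rep1`, trail path, and thread count are example values):

```
-- create a coordinated Replicat with up to 5 apply threads
GGSCI> ADD REPLICAT rep1, COORDINATED, EXTTRAIL ./dirdat/aa, MAXTHREADS 5
```

The MAXTHREADS option caps how many apply threads the coordinator can spawn for this Replicat group.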
Ans: Starting with OGG 12c, if you don’t specify a DISCARDFILE, the OGG process generates a discard file with default values whenever the process is started with the START command through GGSCI.
Ans: Yes. Starting with OGG 12c, you can start Extract at a specific CSN in the transaction log or trail.
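A sketch of positioning a process at a CSN (the group names and CSN value are examples, not from the source):

```
-- begin capture at a given CSN, or apply only transactions after it
GGSCI> START EXTRACT ext1 ATCSN 6488359
GGSCI> START REPLICAT rep1 AFTERCSN 6488359
```

ATCSN includes the transaction with that CSN; AFTERCSN starts with the first transaction after it.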
Ans: The parameters below can be used to improve Replicat performance:
Ans: The lag and checkpoint latency of the Extract, data pump, and Replicat processes are normally monitored.
Ans: In pass-through mode, the Extract process does not look up the table definitions, either from the database or from a data definitions file.
This increases the throughput of the data pump, as the object definition lookup is bypassed.
Ans: Some of the possible reasons are:
Ans: Some of the possible reasons are:
Ans: The OGG checkpoint provides fault tolerance and makes sure that a transaction marked as committed is captured, and captured only once.
Even if Extract goes down abnormally, when you start the process again it reads the checkpoint file to provide read consistency and transaction recovery.
Ans: When operating in integrated capture mode, you must make sure that you have assigned sufficient memory to STREAMS_POOL_SIZE. An undersized streams pool, or limiting the streams pool to a specific amount of memory, can cause problems.
The best practice is to allocate STREAMS_POOL_SIZE at the instance level and to cap the maximum SGA at the GoldenGate process level, as below:
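A sketch of the process-level setting in the Extract parameter file (the sizes shown are example values that should be tuned per environment):

```
-- cap the log-mining server for this Extract at 1 GB of the streams pool,
-- using 2 parallel preparer processes
TRANLOGOPTIONS INTEGRATEDPARAMS (MAX_SGA_SIZE 1024, PARALLELISM 2)
```

The sum of MAX_SGA_SIZE across all integrated Extracts should stay comfortably below the instance-level STREAMS_POOL_SIZE.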
Ans: In OGG you can configure replication at the schema level or at the table level using the TABLE parameter of Extract and the MAP parameter of Replicat.
For replicating the entire database, you can list all the schemas in the database in the Extract/Replicat parameter files.
Depending on the amount of redo generation, you can split the tables in a schema into multiple Extracts and Replicats to improve the performance of data replication. Alternatively, you can group a set of tables in the configuration by application functionality.
You may also need to move tables that have long-running transactions into a separate Extract process to eliminate lag on the other tables.
Let’s say that you have a schema named SCOTT and it has 100 tables.
Out of these hundred tables, 50 tables are heavily utilized by the application.
To improve the overall replication performance you create 3 Extracts and 3 Replicats as follows:
Ext_1/Rep_1 and Ext_2/Rep_2 contain 25 tables each which are heavily utilized or generate more redo.
Ext_3/Rep_3 contains all the other 50 tables which are least used.
Ans: The WARNLONGTRANS parameter can be specified with a threshold time that a transaction can be open before Extract writes a warning message to the GoldenGate error log (ggserr.log).
Example: WARNLONGTRANS 1h, CHECKINTERVAL 10m
Ans: Use the following command to view the Extract checkpoint information.
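A sketch of the command (the group name `ext1` is an example):

```
-- display read/write checkpoint detail for an Extract group
GGSCI> INFO EXTRACT ext1, SHOWCH
```

The SHOWCH option prints the read checkpoint (position in the source log) and the write checkpoint (position in the trail), which is useful when diagnosing recovery or lag issues.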
Ans: The RESTARTCOLLISIONS parameter is used to skip one transaction only, in the situation where the GoldenGate process crashed after performing an operation (INSERT, UPDATE, or DELETE) in the database but could not checkpoint the process information to the checkpoint file/table.
On recovery, it will skip the transaction and automatically continue to the next operation in the trail file.
When using HANDLECOLLISIONS, GoldenGate will continue to overwrite and process transactions until the parameter is removed from the parameter file and the process is restarted.
Ans: The Logdump utility is used to open the trail files and look at the actual records that have been extracted from the redo or archive log files.
Ans: This occurs when the V$ARCHIVED_LOG.NEXT_CHANGE# is greater than the SCN required by the GoldenGate Capture process and RMAN is trying to delete the archived logs.
The RMAN-08147 error is raised when RMAN tries to delete these files.
When the database is open it uses the DBA_CAPTURE values to determine the log files required for mining.
However, if the database is in the mounted state, the V$ARCHIVED_LOG.NEXT_CHANGE# value is used.
See MetaLink note: 1581365.1
Ans: You must use the DECRYPT option before viewing encrypted data in the trail files.
List a few useful Logdump commands to view and search data stored in OGG trail files.
Below are a few Logdump commands used on a daily basis for displaying or analyzing data stored in a trail file.
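A sketch of a typical Logdump session (the trail file name and table name are example values):

```
Logdump> OPEN ./dirdat/aa000000          -- open a trail file
Logdump> GHDR ON                         -- show the record header
Logdump> DETAIL ON                       -- show column-level detail
Logdump> USERTOKEN ON                    -- show user token data
Logdump> POS 0                           -- position at the start of the file
Logdump> NEXT                            -- step to the next record
Logdump> COUNT                           -- count records in the trail
Logdump> FILTER INCLUDE FILENAME SCOTT.EMP   -- show only records for one table
```

These commands are commonly combined to locate a specific operation in a trail, for example when investigating a Replicat abend.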
Ans: Oracle is able to provide faster integration of the new database features by moving the GoldenGate Extraction processes into the database.
Due to this, the GoldenGate Integrated Extract has a number of features like Compression which are not supported in the traditional Extract. You can read more about how to upgrade to Integrated Extract and more about Integrated Delivery.
Going forward, preference should be given to creating new Extracts as Integrated Extracts and to upgrading existing classic Extracts.
Ans: Oracle 11.2.0.4 is the minimum required database version that supports both Integrated Extract and Integrated Replicat.
Ans: Oracle Integrated Delivery is only available for Oracle Databases.
Ans: Yes with 12c, performance statistics are collected in the AWR repository and the data is available via the normal AWR reports.
Ans: The steps to be executed would be the following:
Ans: It is equivalent to the Oracle database SCN transaction number.
Ans: You will have to use the CSV Flat File Adapter to create CSV files. The source would be the Extract trail files, which are processed according to the adapter settings to generate the CSV files.
Ans: When the source and the target schema objects are not the same (different DDLs) the Replicat process needs to know the source definition of the objects. The output from the DEFGEN utility is used in conjunction with the trail data to determine which column value in the trail belongs to which column.
Ans: You must use OGG 11.2 and configure the GoldenGate Integrated Capture process to extract data from compressed tables.
Note: Pre OGG 11.2 doesn’t support extracting data from compressed tables
Ans: The Oracle GoldenGate Integrated Capture process supports Oracle Database 10.2 and higher. But if you are running Oracle Database 10.2 and want to use the Integrated Capture process, then you must configure a downstream topology.
Ans: It is recommended that all instances of Oracle GoldenGate be the same version to take advantage of the new functionality, but this is not possible all the time and is not required.
In this scenario, OGG provides a parameter called ‘FORMAT RELEASE’ which allows customers to use different versions of Oracle GoldenGate Extract, trail files, and Replicat together.
Note: The input and output trails of a data pump must have the same trail file version.
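A sketch of FORMAT RELEASE in a data pump parameter file (the group, host, and trail names are example values; the scenario assumes the target runs an older 11.2 Replicat):

```
-- data pump writing a remote trail in a format an 11.2 Replicat can read
EXTRACT pmp1
RMTHOST targethost, MGRPORT 7809
RMTTRAIL ./dirdat/ra, FORMAT RELEASE 11.2
TABLE SCOTT.*;
```

FORMAT RELEASE is an option of the EXTTRAIL/RMTTRAIL parameters, so it is set per output trail.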
Ans: OGG has 2 functionalities: one is online data replication and the other is initial loading.
If you are replicating data between 2 homogeneous databases then the best method is to use a database-specific method (Exp/Imp, RMAN, Transportable tablespaces, Physical Standby, and so on). Database-specific methods are usually faster than the other methods.
If you are replicating data between 2 heterogeneous databases, or your replication involves complex transformations, then a database-specific method can’t be used. In those cases, you can always use Oracle GoldenGate to perform the initial load.
Within Oracle GoldenGate you have 4 different ways to perform the initial load.
Oracle GoldenGate initial loading reads data directly from the source database tables without locking them. So you don’t need downtime but it will use database resources and can cause performance issues. Take extra precautions to perform the initial load during the non-peak time so that you don’t run into resource contention.
Ans: OGG by default assumes that the sources and target tables are identical. A table is said to be identical if and only if the table structure, data type, and column order are the same on both the source and the target.
If the tables are not identical you must use the parameter ‘SOURCEDEFS’ pointing to the source table definition and ‘COLMAP’ parameter to map the columns from source to target.
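A sketch of a Replicat parameter file using SOURCEDEFS and COLMAP (the group name, definitions file path, alias, and column mapping are example values, not from the source):

```
-- Replicat for non-identical source/target tables
REPLICAT rep1
SOURCEDEFS ./dirdef/source.def        -- definitions file produced by DEFGEN
USERIDALIAS tgtorcl                   -- credential store alias for the target DB
MAP SCOTT.EMP, TARGET HR.EMPLOYEES,
  COLMAP (USEDEFAULTS, EMP_NAME = ENAME);
```

USEDEFAULTS maps identically named columns automatically, and explicit entries such as `EMP_NAME = ENAME` handle the columns whose names differ.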
Ans: Use the Manager process to purge the trail files after they are consumed by the Extract/Replicat processes.
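This is configured with the PURGEOLDEXTRACTS parameter in the Manager parameter file. A sketch (the trail path and retention values are example values to tune per environment):

```
-- mgr.prm: purge trails only after all processes have checkpointed past them,
-- and keep at least 3 days of trail files
PORT 7809
PURGEOLDEXTRACTS ./dirdat/*, USECHECKPOINTS, MINKEEPDAYS 3
```

USECHECKPOINTS is important: without it, Manager may delete trail files that a lagging Replicat still needs.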
Ans: Use the TRANLOGOPTIONS ARCHIVEDLOGONLY option in the parameter file.
Ans: There are 3 basic resources required:
Yamuna Karumuri is a content writer at Mindmajix.com. Her passion lies in writing articles on IT platforms including Machine learning, PowerShell, DevOps, Data Science, Artificial Intelligence, Selenium, MSBI, and so on. You can connect with her via LinkedIn.
Copyright © 2013 - 2023 MindMajix Technologies