If you're looking for OBIEE interview questions for experienced professionals or freshers, you are at the right place. There are many opportunities at reputed companies across the world. According to research, OBIEE has a market share of about 4.5%, so you still have an opportunity to move ahead in your career in OBIEE development. Mindmajix offers advanced OBIEE interview questions (updated for 2021) that help you crack your interview and acquire your dream career as an OBIEE developer.
Types of OBIEE Interview Questions
Top 10 Frequently Asked OBIEE Interview Questions
If you would like to enrich your career as an OBIEE certified professional, then visit Mindmajix, a global online training platform, for the "OBIEE Training" course. This course will help you achieve excellence in this domain.
The differences between OBIEE and Tableau:

| OBIEE | Tableau |
| --- | --- |
| A BI & reporting tool | A data visualization tool |
| Higher cost, with standard pricing | Lower cost for smaller enterprises, higher for large enterprises |
| Limited visualization options | Plenty of visualizations |
| Requires a training session before use | Very easy for a beginner thanks to drag & drop functionality |
| Used to implement finalized BI solutions | Mostly used for POC reporting |
| Has predefined BI frameworks for multiple sectors | You need to start from scratch |
| Suits medium & large industries | Suits small & medium scale industries |
| Provides tooling for building polished reports | No comparable report tool available |
The two main components of OBIEE are Oracle BI Presentation Services and the Oracle BI Server.
When a user runs a request, Presentation Services constructs logical SQL and passes it to the analytic engine, the Oracle BI Server. The BI Server translates the logical SQL into physical SQL appropriate for the underlying data source, retrieves the data, and returns the result set to Presentation Services for display.
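As a sketch of this flow, the logical SQL issued by Presentation Services references presentation-layer names, while the physical SQL generated by the BI Server references actual source tables. The subject area, column, and table names below are hypothetical:

```sql
-- Logical SQL (against the presentation layer; names are illustrative)
SELECT "Sales"."Region", "Sales"."Revenue"
FROM "Sales Subject Area"
ORDER BY "Sales"."Revenue" DESC;

-- Physical SQL the BI Server might generate for a relational source
SELECT r.region_name, SUM(f.revenue)
FROM sales_fact f
JOIN region_dim r ON f.region_id = r.region_id
GROUP BY r.region_name
ORDER BY SUM(f.revenue) DESC;
```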
There are several ways to extract the SQL that OBIEE generates, for example by enabling the query log (nqquery.log) or by viewing the session logs under Administration > Manage Sessions.
In OBIEE 11g, reports can be sorted by selecting the Modify option and then clicking the sort button on the relevant column in the Criteria pane.
OBIEE lets the user build narrative-style reports. Click the Modify Request option and then the Narrative view. There, use @1 to reference the first column of the result, @2 for the second column, and so on. The Narrative view also provides an option to display a heading (or a message) when the query returns no results.
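For example, a hypothetical Narrative view that renders each result row as an HTML list item might be configured like this (the column positions and markup are assumptions for illustration):

```
Prefix:    <ul>
Narrative: <li>@1 : total sales @2</li>
Postfix:   </ul>
```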
|Related Article - OBIEE Administration Interview Questions|
In OBIEE, the user can create an interactive dashboard by selecting Administration and then the Manage Dashboards option. After that, add a column selector, which is what makes the dashboard interactive.
In OBIEE, the write-back option is used to make columns updatable, so users can edit values directly while viewing reports.
Yes, it is possible to execute direct SQL in OBIEE. Simply select Create Direct Database Request, which appears underneath the subject area list.
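A direct database request bypasses the BMM layer and sends the SQL you type straight to the chosen connection pool. The table and column names below are hypothetical:

```sql
-- Entered in the Direct Database Request editor against a chosen
-- connection pool; the schema is illustrative only.
SELECT customer_id, SUM(order_total) AS total_spend
FROM orders
GROUP BY customer_id;
```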
OBIEE developers are able to create a report spanning two subject areas. From the Criteria pane of a request built on the first subject area, scroll to the bottom of the page and click the Combine Request option. This procedure lets you create a report across two subject areas.
To move changes from development to production, use the Merge option in the Admin tool for the RPD. For reports and dashboards, you can use the Content Accelerator Framework to port the changes.
Two types of variables are present in OBIEE 11g: repository variables and session variables.
At the system level, caching is enabled through the Enable option under the Cache section. At the table level, open the repository in offline mode (it must not be the currently running repository), then select the enable or disable cache option for the table.
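The system-level setting lives in the [CACHE] section of NQSConfig.ini. The path and size values below are illustrative, not recommendations:

```ini
# NQSConfig.ini -- query cache settings (values shown are illustrative)
[CACHE]
ENABLE = YES;
DATA_STORAGE_PATHS = "C:\OracleBIData\cache" 500 MB;
MAX_ROWS_PER_CACHE_ENTRY = 100000;
MAX_CACHE_ENTRIES = 1000;
```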
First, check whether the column's table already exists in the physical layer; if it does, add the new column there. Then carry it through the BMM layer and into the presentation layer. Finally, reload the server metadata so the added column becomes visible to every user.
If the requirement is just to change a report's column heading dynamically, the user should make use of a session variable.
A table alias is used for creating self joins. To create a table alias, right-click the table in the physical layer and then click Alias.
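The physical SQL behind such a self join simply joins the table to its alias. The employee/manager schema below is a hypothetical illustration:

```sql
-- Self join via an alias: employees joined to themselves to
-- resolve each employee's manager. Names are illustrative.
SELECT e.emp_name, m.emp_name AS manager_name
FROM employees e
JOIN employees m ON e.manager_id = m.emp_id;
```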
In OBIEE, hierarchies are created in the BMM layer for dimension tables. Right-click the dimension table, then choose Create Dimension. After this, the levels and hierarchy can be defined manually.
A measure that is pinned to a particular level of a dimension is referred to as a level-based metric. Quarterly sales or monthly sales are examples of level-based metrics.
To create a level-based metric, first make a new logical column based on the original measure. Then drag and drop this new column onto the appropriate level of the dimension hierarchy.
The various layers are the Physical layer, the Business Model and Mapping (BMM) layer, and the Presentation layer.
Authentication is the process by which a system verifies that a user is who they claim to be. The types of authentication in OBIEE include operating system authentication, external table authentication, database authentication, and LDAP authentication.
There may be situations where two tables have no direct relationship between them. In that case, a third table is used to connect them. This third table, which shares key columns with both of the other tables, is known as a bridge table.
The metadata information is stored in the repository. The Siebel repository is a file with the .rpd extension, and it is also called the metadata repository.
The rules for data modeling, connectivity, aggregate navigation, security, and caching are stored in the metadata repository and used by the Siebel Analytics Server.
The Siebel Analytics Server can access several repositories, and each metadata repository can store several business models.
The life cycle of Siebel analytics is
The five parts that are included in the architecture of Siebel
Metadata represents the analytical model that is created with the Siebel Analytics administration tool.
There are three layers in the repository: the Physical layer, the Business Model and Mapping layer, and the Presentation layer.
The query repository tool is part of the OBIEE/Siebel admin tool. It allows the user to examine the repository metadata and the relationships between metadata objects, for example which presentation layer column maps to which physical layer table.
Pipelines are the stages defined within a particular process such as valuation, contracts, or economics.
Use the Scheduler option, which helps you generate time-based trigger reports.
There are mainly two folders known as the prompts and reports.
You can stop a report run on the dashboard automatically by selecting the cancel button.
The term JDK stands for Java development kit which is basically a package of software that consists of tools that are required for writing, compiling, debugging, and also for running Java applets.
It is the repository file (.rpd), which is also known as a rapid file database.
ETL stands for Extract, Transform, and Load. An ETL plan designs the flow of data from source to target:

Extract --> Transform --> Load
Source --> Transformation Rule --> Target
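The Extract, Transform, Load flow above can be sketched in a few lines of Python. The source rows, transformation rule, and target store here are all hypothetical stand-ins for a real source system, mapping, and warehouse:

```python
# A minimal sketch of the Extract -> Transform -> Load flow.
# Source, rule, and target are hypothetical in-memory stand-ins.

def extract(source):
    """Extract: read raw rows from the source (here, a list of dicts)."""
    return list(source)

def transform(rows):
    """Transform: apply a rule -- normalize names and drop incomplete rows."""
    return [
        {"id": r["id"], "name": r["name"].strip().title()}
        for r in rows
        if r.get("id") is not None and r.get("name")
    ]

def load(rows, target):
    """Load: write the transformed rows into the target store (a dict)."""
    for r in rows:
        target[r["id"]] = r["name"]
    return target

source = [{"id": 1, "name": "  alice "},
          {"id": None, "name": "x"},      # incomplete row, dropped
          {"id": 2, "name": "BOB"}]
warehouse = {}
load(transform(extract(source)), warehouse)
print(warehouse)  # {1: 'Alice', 2: 'Bob'}
```

In a real deployment, each step is a separate mapping in the ETL tool rather than a Python function, but the shape of the flow is the same.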
To import metadata from an Excel sheet, we need to create an ODBC data source (DSN) for the Excel file. Broadly: define a named range in the workbook, create a system DSN that points to the file, and then import through that DSN using the Admin tool's metadata import.
The query repository tool gives the option to search and analyze repository objects by name, type, and other describing attributes.
It is the relationship between the presentation (view) layer data and the corresponding physical layer columns.
Opaque views are physical-layer tables defined by a SELECT statement (a query, possibly with joins) rather than an actual table. They make the logical model simpler to implement, but carry heavy performance constraints, so they are used only when there is no other way to reach the final solution.
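An opaque view is essentially a stored SELECT; a hypothetical definition that pre-joins two tables might look like this (schema is illustrative):

```sql
-- A hypothetical opaque view: the physical "table" is this query's output.
SELECT o.order_id, o.order_total, c.customer_name
FROM orders o
JOIN customers c ON o.customer_id = c.customer_id;
```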
The Admin tool has the “Manage Sessions” tab which gives you access to the logs that are being generated for each session. After the report generation sessions, you can easily view the log to map each request to the corresponding tables and databases.
The presentation layer depends on the database underlying each server, so it cannot be migrated as a stand-alone piece. What we can do instead is establish an ODBC (or similar) connection from the other server to the main system's database, and then carry over the presentation semantics together with the database-oriented changes in the logical layer.
A logical column created at the dashboard level affects only the tables at that view level, not other dashboards and requests. A logical column created at the repository level, in turn, takes effect in all requests and reports across view levels. So it is always preferable to create logical columns at the repository level.
The Siebel Analytics Server can be deployed as a stand-alone system, or as an integrated service that interfaces and communicates with other Analytics Servers.
The user ID and password need not be stored in the Siebel Analytics Server repository; external tables and LDAP offer alternatives. With external table authentication, the user IDs, passwords, and per-user access information are stored in an external database table. The other option is the Lightweight Directory Access Protocol (LDAP), which works like imposing access limits on directories and folders, thereby limiting the data visible to each user.
There are two types of variables, namely session variables and repository variables. Session variables pertain to each session created at every user login. They may be system or non-system variables.
The repository variables are the ones that are specific to a repository/database. The repository variables contain the parameters that are corresponding to different attributes of the repository and queries. They are again classified as static and dynamic variables. The static variables are the ones that are having permanent values throughout. The administrator can change it whenever needed. The dynamic variables are the ones that have values that are corresponding to the SQL queries and data fetches.
The dynamic variables can take up values depending on the scheduled updates that are started by the administrator. They can also take up values due to the SQL queries that have been recently executed from the user side. Initialization blocks run at a specific time or triggered according to a specific condition.
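As an illustration, a dynamic repository variable is typically populated by an initialization block whose SQL runs against a source at the scheduled refresh. The variable name and table below are hypothetical:

```sql
-- Initialization block query for a hypothetical dynamic repository
-- variable CURRENT_PERIOD; the first column of the result is
-- assigned to the variable on each scheduled refresh.
SELECT MAX(period_key)
FROM period_dim;
```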
A logical table in the BMM layer based on data from a single physical layer table is said to have a single Logical Table Source (LTS). When the logical table's columns come from several physical layer tables, it has multiple LTSs. Most of the time we will be dealing with multiple LTSs.
No, it is not mandatory to create hierarchies for all tables; define them only for the tables that need them.
Siebel variables are storage parameters that can be linked to the metadata and other configuration parameters in Siebel. With the Variable Manager, configuration parameters can be loaded into specific variables depending on the environment, which simplifies administrative tasks.
NQSConfig.ini, NQSCluster.ini, odbc.ini, instanceconfig.xml
First determine whether the issue is specific to one user or general. If it is general, check the joins and referential integrity between tables. If it is user-specific, check the user's security authorization, business model filters, session variable initialization, query timing limitations, connection pool parameters, and so on.
Oracle BI Java Host, Oracle BI Presentation Server, and Oracle BI Server
There are 2 Global Filters as follows:
A materialized view is a physical object that stores a replica of one or more master objects and is refreshed at intervals.
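A minimal Oracle-style sketch of such an object; the table name, columns, and daily refresh interval are illustrative:

```sql
-- Create a materialized view that fully refreshes itself once a day.
CREATE MATERIALIZED VIEW sales_mv
REFRESH COMPLETE
START WITH SYSDATE NEXT SYSDATE + 1
AS SELECT region, SUM(revenue) AS total_revenue
   FROM sales
   GROUP BY region;
```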
Building reports on de-normalized data is not best practice, as it leads to performance issues, but the reports can be built. Such reports should not be used for business analysis, because the data will fluctuate at irregular intervals.
Read the "Repository" section of the NQSConfig.ini file; you will easily find the answer there. The answer is: it can't be changed.
The repository location can be changed; this must be done when clustering the BI Server. The relevant parameters in the NQSConfig.ini file are the publishing directory settings, for example:

REPOSITORY_PUBLISHING_DIRECTORY = "<shared directory path>";
REQUIRE_PUBLISHING_DIRECTORY = YES;
We cannot have outer joins in the physical layer, but we can have them in the BMM layer. In the BMM layer, a complex join can be a full inner join, a full outer join, or whatever your criteria require; in the physical layer, a physical join is always an inner join.
Well, this is situation-dependent. The only way is to validate the report numbers against the source system.
The Provider Services tool that ships with Essbase provides the interface; the Oracle BI Server talks to Essbase through its XMLA interface.
Ravindra Savaram is a Content Lead at Mindmajix.com. His passion lies in writing articles on the most popular IT platforms including Machine learning, DevOps, Data Science, Artificial Intelligence, RPA, Deep Learning, and so on. You can stay up to date on all these technologies by following him on LinkedIn and Twitter.