If you're looking for Oracle Demantra interview questions for experienced professionals or freshers, you are at the right place. There are many opportunities at reputed companies around the world. According to research, Oracle Demantra has a market share of about 0.8%, so you still have the opportunity to move ahead in your career in Oracle Demantra advanced forecasting. Mindmajix offers advanced Oracle Demantra interview questions for 2024 that will help you crack your interview and acquire a dream career as an Oracle Demantra consultant.
2) What is the procedure to install the Demantra Base Application or patches?
3) What will you do if you face issues starting the web server or deploying the Demantra WAR file?
4) What actions can be taken if we face issues with the Demantra Batch or Simulation Engine?
5) What is the lifetime of flashback logs?
6) What privilege is granted in the Oracle database as part of Demantra's installation?
7) What are the three modules in Oracle Demantra that can be processed separately?
8) What issues are related to the integration of EBS with Demantra?
9) Where can you run the data collection process or data load?
10) Why is the creation of new indexes on sales data time-consuming?
Oracle Demantra is an Oracle tool for demand management and supply chain management. It enables an automated forecasting process that maps demand forecasts against various factors such as customer commitments, inventory counts, and supply constraints. Implementing Oracle Demantra brings numerous benefits, including higher service levels, lower inventory costs, greater customer satisfaction, and lower distribution costs.
Create a directory called C:/Temp on the machine and run setup.exe for the Demantra Base Application from that location; the installer generates its installation log file in this directory.
Moreover, there will also be some additional log files under the following directories:
To minimize the upgrade time, we can force parallelism using a logon trigger on the server. The upgrade process contains a script that builds a new set of indexes, one per engine profile, on each profile's quantity_form expression. Make sure to disable or drop the trigger once the upgrade has completed.
Below is the trigger used and tested successfully.
CREATE OR REPLACE TRIGGER force_parallel_ddl_trg
AFTER LOGON ON DATABASE
BEGIN
  IF (USER = 'DEMANTRA') THEN
    EXECUTE IMMEDIATE 'ALTER SESSION FORCE PARALLEL DDL';
  END IF;
END force_parallel_ddl_trg;
/
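Since the trigger forces parallel DDL for every DEMANTRA logon, it should not outlive the upgrade. A minimal sketch of removing it afterwards (disable first if you might need it again):

```sql
-- Run as a privileged user once the upgrade has completed.
ALTER TRIGGER force_parallel_ddl_trg DISABLE;  -- keep it around, switched off
DROP TRIGGER force_parallel_ddl_trg;           -- or remove it entirely
```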
We need to revisit and explore the Collaborator.log file in case of any issues. This log file is located in the Collaborator\demantra\logs folder on the Demantra web server. In addition, each web server keeps its own logs in different directories, and these can help us dig further; the log locations are specified in the respective web server's documentation.
If there are any issues with the Demantra Batch or Simulation Engine, we should review the key engine logs. These logs are present on the root drive of the machine where the engine was started; the folder holding them usually has Engine2K in its name.
The data load process populates the Demantra staging tables (through SQL*Loader, or through the Collections process in the case of an integrated EBS-Demantra instance). Once the staging tables are populated, we can run the EP_LOAD_MAIN procedure in two ways:
1) Manually
2) Through the workflow that downloads data to the Demantra base tables.
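Running it manually amounts to invoking the procedure from a SQL session. The call below is only a sketch — the exact package name and parameter list of EP_LOAD_MAIN vary between Demantra versions, so confirm the signature in your installation before running it:

```sql
-- Hypothetical invocation; verify the actual signature in your Demantra schema.
BEGIN
  EP_LOAD_MAIN.EP_LOAD_MAIN;
END;
/
```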
The Demantra admin user can be “dm/dm” or “sop/sop”, depending on the module installed.
Navigation for this specific error is as follows.
Business Modeler → Tools → Procedure Error Log
This log is stored in the backend table db_exception_log.
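The same errors can also be read directly from the backend table with a simple query (the table's column names vary by version, so the sketch below selects everything):

```sql
SELECT * FROM db_exception_log;
```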
Related Article: Embeds machine learning capabilities into oracle database |
The shipment and bookings collection jobs move data from the EBS database to the Demantra database. Extra logging is generally available by setting the profile option MSD_DEM: Debug Mode to Yes (the default is No); you will then start finding references to Demantra schemas and tables in the logs. Errors can also be tracked using ACTIVE_PROC_DYN or through the workflow.
If an RVWR write operation fails, the resulting database state depends on the conditions under which the write error occurred:
1. If a guaranteed restore point is defined — the database crashes itself rather than violate the restore point guarantee.
2. If no guaranteed restore point is defined and it is a primary database — the database continues to operate normally, but Flashback Mode is turned off for it automatically.
3. If no guaranteed restore point is defined and it is a standby database — the database hangs until the write failure is resolved.
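You can verify whether flashback logging was switched off after such a failure by checking the database status; a minimal sketch:

```sql
-- Returns YES, NO, or RESTORE POINT ONLY.
SELECT flashback_on FROM v$database;
```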
Yes, it can be done, but only indirectly. The size of the flashback buffer = 2 × LOG_BUFFER, so for performance reasons LOG_BUFFER should be set to a minimum of 8 MB for databases running in Flashback mode.
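LOG_BUFFER is a static parameter, so resizing the flashback buffer this way requires changing the spfile and restarting the instance; a minimal sketch:

```sql
-- 8 MB = 8388608 bytes; takes effect after the instance restarts.
ALTER SYSTEM SET log_buffer = 8388608 SCOPE = SPFILE;
```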
No. Flashback Logs are temporary log files, which are not backed up using RMAN.
Flashback logs are kept until the DB_FLASHBACK_RETENTION_TARGET parameter is satisfied. If space runs short, they may be deleted to make room for other logs or backups.
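Current flashback log usage, together with the estimated size needed to satisfy the retention target, can be checked in the V$FLASHBACK_DATABASE_LOG view; a minimal sketch:

```sql
SELECT retention_target,          -- retention target in minutes
       flashback_size,            -- current flashback log space used (bytes)
       estimated_flashback_size   -- estimated space needed to meet the target
FROM   v$flashback_database_log;
```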
LIST RESTORE POINT [ALL | restore_point_name];
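For example, in an RMAN session connected to the target database (the named restore point below is hypothetical):

```sql
LIST RESTORE POINT ALL;             -- every restore point known to the target database
LIST RESTORE POINT before_upgrade;  -- a single, hypothetically named restore point
```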
During a Flashback Database operation, we can monitor its progress by querying from another session (for example, against V$SESSION_LONGOPS).
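A minimal sketch of such a monitoring query, assuming the operation reports its progress through V$SESSION_LONGOPS:

```sql
SELECT opname, sofar, totalwork, units
FROM   v$session_longops
WHERE  opname LIKE 'Flashback%'
AND    sofar <> totalwork;
```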
1. It is recommended to use a fast file system for the flash recovery area, such as ASM, and to avoid file systems that rely on operating system caching.
2. Configure enough disk spindles to support the disk throughput needed to write the flashback logs successfully.
3. If the storage system holding the flash recovery area does not have non-volatile RAM, configure the file system on top of striped storage volumes. This improves performance by spreading the flashback logs across multiple spindles.
4. For large production databases, set the LOG_BUFFER parameter to a minimum of 8 MB. This ensures maximum memory allocation for writing flashback database logs.
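These recommendations apply once Flashback Database is enabled. A minimal sketch of enabling it with a retention target — the 1440-minute value is illustrative, and depending on the database version the ALTER DATABASE step may need to be run in MOUNT state:

```sql
-- Retention target in minutes (1440 = 24 hours); requires a configured fast recovery area.
ALTER SYSTEM SET db_flashback_retention_target = 1440;
ALTER DATABASE FLASHBACK ON;
```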
As part of the Demantra installation process, the following privilege is set up in the Oracle database, which is required while setting up JD Edwards EnterpriseOne Integration.
GRANT CREATE ANY DIRECTORY TO demantrauser1;
If one doesn’t want to use JD Edwards integration, this privilege can be revoked using the REVOKE command.
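The revoke is the mirror image of the grant shown above (demantrauser1 being the example schema name):

```sql
REVOKE CREATE ANY DIRECTORY FROM demantrauser1;
```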
Three modules can be processed separately in Oracle Demantra: Demand Management, Real-Time Sales and Operations Planning, and Advanced Forecasting and Demand Modeling.
To install the Demantra base application, first create a temporary folder (for example, a TMP folder on the C drive) on the machine where the setup file will be executed, because the installer generates a log file in this folder. Note that some additional logs are also written to other directories; for instance, the Oracle back-end database and the SQL Server back-end database each produce logs covering the database objects and the demand planner components.
The Demantra staging tables can be populated using SQL*Loader; in an integrated EBS-Demantra instance this is done through the Collections process instead, so that case has to be handled carefully. Once the staging tables are populated, the data is moved to the Demantra base tables either manually or through the workflow. Note that the Demantra admin user can be in the form of dm/dm.
If you want to view the errors in the source file, open the Business Modeler, go to Tools, and open the Procedure Error Log; from there you can view the errors related to the source file.
The resulting state depends on the conditions under which the write error occurred. If a guaranteed restore point is present, the database crashes itself to ensure that the restore point guarantee is not violated. If there is no guaranteed restore point and it is a primary database, Flashback Mode is turned off automatically while the database continues to operate normally. Finally, if there is no guaranteed restore point and it is a standby database, the database hangs until the disk write problem is resolved.
No, flashback logs are not backed up. Even when a backup of the fast recovery area is used to save the FRA contents, only certain file types are included: full and incremental backup sets, archived redo logs, and datafile copies are the files typically backed up. Flashback logs, by contrast, are ever-changing files that cannot be backed up by RMAN; they are also never needed for media recovery.
You can improve flashback performance by using a fast file system, such as ASM, for the flash recovery area, and by avoiding file systems that rely on operating system caching. Also configure enough disk spindles for the file system that holds the flash recovery area.
For large production databases, multiple disks may be required to deliver the throughput needed to write the flashback logs effectively. In addition, set the LOG_BUFFER parameter to at least 8 MB so the database allocates enough memory for writing flashback logs.
Finally, if the storage system holding the flash recovery area does not have non-volatile RAM, configure the file system on top of striped storage volumes. This spreads the flashback logs across multiple spindles and thereby improves performance.
In most situations, the key engine logs are located on the root drive of the machine where the engine was started; the folder containing these logs typically has Engine2K somewhere in its name. If you want the master Engine Manager logs written to a file, open the Engine Administrator and change the Engine Manager setting from STD to FILE. Note, however, that it is not advisable to re-register the engine after making this alteration.
Most EBS activities can be logged by following a specific note. The bookings and shipment collection tasks can have extra logging turned on by setting the profile option MSD_DEM: Debug Mode to Yes. These collection jobs move data from the EBS side to the Oracle Demantra side of the database. Errors can also be tracked using the workflow and the troubleshooting sections of the note.
Flashback logs are managed by Oracle itself. Oracle tries to keep as many flashback logs as needed to satisfy the retention target. However, if there is space pressure in the Fast Recovery Area (FRA), flashback logs may be deleted to make room for other required files, such as archived logs and backups.
If the fast recovery area has enough space, a new flashback log is created whenever one is needed to satisfy the flashback retention target. If the oldest flashback log file is no longer needed to meet the retention target, it is reused instead.
If the fast recovery area fills up, an archived redo log may be deleted automatically to make room; in this way flashback log files can be managed quickly without removing the original data.
Note also that no file in the fast recovery area is eligible for deletion if it is needed to satisfy a guaranteed restore point. In other words, a guaranteed restore point, like the backup retention policy, can cause the fast recovery area to fill up completely.
In RMAN, you can always use the LIST RESTORE POINT command. If you are using a recovery catalog, you can query the RC_RESTORE_POINT view to see all the restore points of the target database in one place, which also reduces the chance of mistakes.
You can watch the progress of a Flashback Database operation while it runs by issuing queries from another session. A Flashback Database operation has two distinct phases: the actual flashback, followed by a media recovery step that happens afterward to bring the database to a consistent state. During the actual flashback phase, the monitoring session shows messages that give you a clear idea of how far the operation has progressed.
As part of the upgrade, a script builds a new set of indexes, one per engine profile, on each profile's quantity_form expression. Creating an index on a very large sales data table can take an extended amount of time, which is why parallelism is forced during the upgrade; always remember to drop or disable that trigger once the upgrade has completed.
Shorter index-creation time matters because Demantra is used primarily for demand forecasting and supply chain management: the sooner the indexes are in place, the sooner the engine can deliver forecasts to your sales team, giving you a clear picture of your customers' demand in the near future.
Review the Collaborator.log file, which is usually located in the Collaborator\demantra\logs folder on the web server where Oracle Demantra is running. Each web server also keeps its own logs in several directories; check these as well to make sure the running process is free of any underlying errors.
Ravindra Savaram is a Technical Lead at Mindmajix.com. His passion lies in writing articles on the most popular IT platforms including Machine learning, DevOps, Data Science, Artificial Intelligence, RPA, Deep Learning, and so on. You can stay up to date on all these technologies by following him on LinkedIn and Twitter.