Log on to SAP HANA Studio.
Go to the HANA system.
Go to Security.
Create a user for the repository.
Assign the following privileges to the user:
PUBLIC
MONITORING
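If you prefer to script this instead of clicking through HANA Studio, a minimal sketch with hdbsql (HANA's command-line client) looks like the following; the host, port, user name BODS_REPO, and passwords are placeholders, not values from this guide.

```bash
# Minimal sketch using hdbsql; host, port, and all names/passwords are
# placeholders -- adjust them to your landscape.
hdbsql -n hanahost:30015 -u SYSTEM -p 'SystemPassword' \
  'CREATE USER BODS_REPO PASSWORD "Initial1234"'
# PUBLIC is granted to every user automatically; grant MONITORING explicitly.
hdbsql -n hanahost:30015 -u SYSTEM -p 'SystemPassword' \
  'GRANT MONITORING TO BODS_REPO'
```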
Log on to the Data Services Repository Manager.
Select the repository type as Local.
Select the database type as SAP HANA and the version as HANA 1.X.
Provide the HANA server name and the newly created user name and password.
Log on to Hadoop.
Right-click and open the Terminal.
Log on as the user "object",
i.e. # su object
Change directory to the BODS installation bin directory,
i.e. # cd /home/object/bods/dataservices/bin
Set the environment variables,
i.e. /home/.../bin> . ./al_env.sh
To connect to the Server Manager of BODS,
i.e. /home/.../bin> ./svrcfg
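Putting the terminal steps above together, a condensed sketch (the installation path is the one shown above; substitute your actual Data Services install directory):

```bash
su - object                             # switch to the Data Services OS user
cd /home/object/bods/dataservices/bin   # BODS bin directory; adjust to your install
. ./al_env.sh                           # source the environment variables (note the leading dot)
./svrcfg                                # launch the Data Services Server Manager
```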
The Server Manager options open; enter option '2' to create the job server.
Enter option 'c' to create a new job server and press Enter.
Provide a name for the job server.
Enter the TCP port number for the job server.
If you want to enable SNMP for the job server, enter 'y'; otherwise, 'n'.
If you created the repository using ODBC, enter 'y'; otherwise, 'n'.
Provide the database server name for HANA.
Provide the port number for HANA.
Select the version and provide the user name and password.
Enter 'y' to confirm the information.
Now the job server will be created.
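The svrcfg utility is interactive, so the creation runs as a sequence of prompts. A sketch of the dialog follows; the job server name, ports, and credentials are example values, not prescribed by this guide.

```bash
./svrcfg
# 2         -> create/configure a Job Server (menu option from the steps above)
# c         -> create a new Job Server
# JS_HANA   -> job server name (example)
# 3500      -> TCP port for the job server (example)
# n         -> do not enable SNMP
# n         -> repository was not created via ODBC
# hanahost  -> HANA database server name (example)
# 30015     -> HANA port (example)
# HANA 1.X, BODS_REPO / <password> -> version and repository credentials (examples)
# y         -> confirm the information; the job server is created
```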
To start or stop the Job Service:
After creating the job server, the Server Manager utility is open.
Enter option '1' to start the services.
To start the server, enter option 's'.
To stop the server, enter the corresponding option.
To come out, select option 'q'.
Enter option 'x' to exit.
The server will be started.
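As a sketch, the start/stop dialog in the same interactive utility follows the options listed above:

```bash
./svrcfg
# 1 -> control the Job Service
# s -> start the service (or choose the stop option to stop it)
# q -> return to the main menu
# x -> exit the Server Manager; the job service is now running
```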
Steps involved in configuring SAP BusinessObjects for use with Hadoop:
Configure SAP BusinessObjects with Hive JDBC drivers if the server version is lower than BO 4.0 SP5; from BO 4.0 SP5 onwards, SAP provides Hive connectivity by default.
To configure the JDBC drivers in earlier versions, we have to put a set of JAR files in place.
The data access layer allows the SAP BOBJ platform to connect to Apache Hadoop Hive 0.7.1 and 0.8.0 databases through JDBC on all platforms.
To create a connection to the Hive Thrift server, you first have to place the set of Hive JAR files in the hive directory, which is available at the path below:
<connectionserver-install-dir>/connectionServer/jdbc/drivers/hive
The Hive JAR files to copy are chosen based on the version (0.7.1 here), as sketched below.
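A hedged sketch of the copy step; the JAR names below are the typical Hive 0.7.1 client set, not an official SAP list, and both paths are placeholders. Verify the exact file list against your Hive distribution and the SAP documentation.

```bash
# Illustrative only: JAR names/versions are the typical Hive 0.7.1 client set,
# and both paths are placeholders for your environment.
HIVE_LIB=/usr/lib/hive/lib
CS_DIR=/opt/sap/connectionserver-install-dir
cp "$HIVE_LIB"/hive-jdbc-0.7.1.jar \
   "$HIVE_LIB"/hive-exec-0.7.1.jar \
   "$HIVE_LIB"/hive-metastore-0.7.1.jar \
   "$HIVE_LIB"/hive-service-0.7.1.jar \
   "$HIVE_LIB"/libfb303.jar \
   "$HIVE_LIB"/libthrift.jar \
   "$CS_DIR"/connectionServer/jdbc/drivers/hive/
```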
Log in to the Information Design Tool (IDT).
Create a user session with login credentials.
Under the session, open the Connections folder.
Create a new relational connection.
Provide the relational connection name and click Next.
Under the driver selection menu, select Apache Hadoop Hive JDBC drivers.
Click Next.
Provide the Hadoop Hive host name and port, as below:
Hadoop45.wdf.sap.com:10000
Click Test Connectivity.
If it is successful, save the connection by clicking Finish.
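If the connectivity test fails, a quick first check is whether the Hive Thrift port is reachable from the BI host at all; the host and port below are the example values from the step above.

```bash
# Reachability check for the Hive Thrift service (example host/port from above).
nc -vz Hadoop45.wdf.sap.com 10000
# For Hive 0.7/0.8 (HiveServer1), the JDBC URL takes the classic form:
#   jdbc:hive://Hadoop45.wdf.sap.com:10000/default
```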
Create a project in IDT
Create a shortcut for the above connection in the project.
Right-click on the relational connection and select Publish Connection to a Repository.
Select your folder and click Finish.
Now the connection is secured.
Create a data foundation layer and bind the connection to the data foundation layer.
Provide a name for the data foundation and click Next.
Select the secured relational connection, for example, Hive conn.cns.
Click Finish and save the data foundation.
This connection is used by the data foundation layer to import data from the server.
From the data foundation layer, drag and drop the required tables and join them.
Save the data foundation layer.
Create a new business layer and bind the data foundation layer with the business layer.
Provide a name for the business layer and select your data foundation layer.
Click Finish and save the business layer.
Right-click on the business layer and select Publish to Repository.
Use Check Integrity before publishing to check dependencies.
Log on to the CMC and set the universe access policy for users.
Log on to the BI Launch Pad.
Click on the Web Intelligence application.
Click Create.
Select Universe and click OK.
All the universes in the repository will be displayed.
Select the Hadoop Hive data universe.
Click Select.
Drag and drop the fields into the Result Objects workspace.
Run the query to create a Web Intelligence report.
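To sanity-check the data behind the report, you can run the same kind of query directly against Hive; the table and column names here are purely illustrative.

```bash
# Purely illustrative table/columns; mirrors the kind of HiveQL the universe
# generates for a Web Intelligence query.
hive -e "SELECT customer_id, SUM(amount) AS total_amount
         FROM sales
         GROUP BY customer_id
         LIMIT 10;"
```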