Log on to SAP HANA Studio.
Go to the HANA system.
Go to Security.
Create a user for the repository.
Assign the required privileges to the user.
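The user-creation step can also be done from the SQL console instead of the Security node. A minimal sketch, assuming a hypothetical repository user BODS_REPO and example host/port/privileges (your values and privilege set may differ):

```shell
# Hypothetical sketch -- run from hdbsql or the HANA Studio SQL console.
# Host, port, user name, password and the privilege are assumptions.
hdbsql -n hanaserver:30015 -u SYSTEM -p <password> <<'SQL'
CREATE USER BODS_REPO PASSWORD "Initial1";
-- the Data Services repository creates and fills its own tables:
GRANT CREATE SCHEMA TO BODS_REPO;
SQL
```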
Log on to the Data Services Repository Manager.
Select the repository type as Local.
Select the database type as SAP HANA and the version as HANA 1.X.
Provide the HANA server name and the newly created user name and password.
Log on to the Hadoop machine.
Right-click and open the Terminal.
Switch to the object user,
i.e. # su object
Change directory to the BODS installation bin directory,
i.e. # cd /home/object/bods/dataservices/bin
Set the environment variables.
Connect to the Server Manager of BODS.
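The environment setup can be sketched as below; LINK_DIR follows the install path used in the cd step above and is an assumption to adapt to your system:

```shell
# Sketch: point LINK_DIR at your Data Services install (path assumed
# from the cd step above) and put its bin directory on the PATH.
export LINK_DIR=/home/object/bods/dataservices
export PATH="$LINK_DIR/bin:$PATH"
# Data Services ships an al_env.sh that sets the remaining variables
# (library paths and so on); source it when present:
if [ -f "$LINK_DIR/bin/al_env.sh" ]; then
  . "$LINK_DIR/bin/al_env.sh"
fi
echo "LINK_DIR=$LINK_DIR"
```

With the variables in place, the Server Manager is launched with the svrcfg utility from the same bin directory.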
The Server Manager options open; enter option '2' to create the Job Server.
Enter option 'c' to create a new Job Server and press Enter.
Provide a name for the Job Server.
Enter the TCP port number for the Job Server.
If you want to enable SNMP for the Job Server, enter 'y'; otherwise 'N'.
If you created the repository with an ODBC connection, enter 'y'; otherwise 'N'.
Provide the database server name for HANA.
Provide the port number for HANA.
Select the version and provide the user name and password.
Enter 'y' to confirm the information.
The Job Server is now created.
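Put together, the Job Server dialogue looks roughly like this; the name, port and HANA host below are placeholder examples, not defaults:

```
** Server Manager (svrcfg) -- illustrative transcript **
Enter option: 2            (configure Job Server)
Enter option: c            (create a new Job Server)
Job Server name    : JS_HANA_REPO     <- example name
TCP port           : 3500             <- example port
Enable SNMP?       : N
Repository on ODBC?: N
HANA server name   : hanaserver       <- example host
HANA port          : 30015            <- example port
(version, user name and password as created in HANA Studio)
Confirm (y/n)      : y
```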
To start or stop the Job Service:
After the Job Server is created, the Server Manager utility opens.
Enter option '1' to start the services.
To start the server, enter option 's'.
To stop the server, enter the corresponding option.
To go back, enter option 'q'.
Enter option 'X' to exit.
The server is now started.
Steps involved in configuring SAP BusinessObjects for use with Hadoop:
The Hive JDBC drivers go under connectionserver-install-dir/connectionServer/jdbc/drivers/hive.
Copy the Hive JAR files, based on version 0.7.1, into this directory.
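The copy step can be sketched as follows. Both paths are assumptions, and the exact list of 0.7.1 JARs is not reproduced here, so match both to your own Hive and BusinessObjects installations:

```shell
# Hypothetical paths -- adjust to your BO and Hive installations.
CS_DRIVERS=/opt/bobj/connectionserver-install-dir/connectionServer/jdbc/drivers/hive
HIVE_LIB=/usr/lib/hive/lib
mkdir -p "$CS_DRIVERS"
# Copy the Hive 0.7.1 client JARs into the Connection Server driver folder:
for jar in "$HIVE_LIB"/*.jar; do
  cp "$jar" "$CS_DRIVERS/"
done
```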
Log in to the Information Design Tool (IDT).
Create a session with your login credentials.
Under the session, open the Connections folder.
Create a new relational connection.
Provide the relational connection name and click Next.
Under the driver selection menu, select
Apache Hadoop Hive JDBC Drivers.
Click Next.
Provide the Hadoop Hive host name and port.
Click Test Connection.
If it is successful, save the connection by clicking Finish.
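The host and port values are not reproduced here; for orientation, a HiveServer endpoint of that era (Hive 0.7.x) conventionally uses the settings below, with 10000 as the default HiveServer port:

```
Driver class : org.apache.hadoop.hive.jdbc.HiveDriver
JDBC URL     : jdbc:hive://<hive-host>:10000/default
```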
Create a project in the IDT.
Create a shortcut for the above connection in the project.
Right-click the relational connection and select Publish Connection to a Repository.
Select your folder and click Finish.
The connection is now secured.
Create a data foundation layer and bind the connection to it.
Provide a name for the data foundation and click Next.
Select the secured relational connection, for example Hive conn.cns.
Click Finish and save the data foundation.
This connection is used by the data foundation layer to import data from the server.
From the data foundation layer, drag and drop the required tables and join them.
Save the data foundation layer.
Create a new business layer and bind the data foundation layer to it.
Provide a name for the business layer and select your data foundation layer.
Click Finish to save the business layer.
Right-click the business layer and select Publish to Repository.
Run an integrity check before publishing to verify dependencies.
Log on to the CMC (Central Management Console) and set the universe access policy for users.
Log on to the BI Launch Pad.
Click the Web Intelligence application.
Click Create.
Select Universe and click OK.
All the universes in the repository are displayed.
Select the Hadoop Hive data universe.
Click Select.
Drag and drop the fields into the Result Objects workspace.
Run the query to create a Web Intelligence report.