
Hadoop with BODS Integration

Creation of a Repository in BODS:-

Creation of a User in HANA:-

Log on to SAP HANA Studio.

Go to the HANA system.

Go to Security.

Create a user for the repository.

Assign the following privileges to the user (an hdbsql sketch follows this list):

PUBLIC

MONITORING
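The same user can also be created from the command line with hdbsql. This is a minimal sketch: the host hanahost:30015, the SYSTEM credentials, and the user name BODS_REPO are placeholder assumptions. HANA grants the PUBLIC role to every new user automatically, so only MONITORING needs an explicit grant.

i.e. # hdbsql -n hanahost:30015 -u SYSTEM -p SysPassword1 "CREATE USER BODS_REPO PASSWORD \"Initial1234A\""

i.e. # hdbsql -n hanahost:30015 -u SYSTEM -p SysPassword1 "GRANT MONITORING TO BODS_REPO"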

Creation of Repository:-

Log on to the Data Services Repository Manager.

Select the repository type as Local.

Select the database type as SAP HANA and the version as HANA 1.x.

Provide the HANA server name and the newly created user name and password.

Configuring the Job Server in the Hadoop Environment:-

Log on to the Hadoop machine.

Right-click and open the terminal.

Switch to the BODS user, named object in this setup:

i.e. # su object

Change directory to the bin directory of the BODS installation:

i.e. # cd /home/object/bods/dataservices/bin

Set the environment variables by sourcing al_env.sh:

i.e. /home/.../bin> . ./al_env.sh

Connect to the Server Manager of BODS:

i.e. /home/.../bin> ./svrcfg

The Server Manager options open; enter option '2' to create the job server.

Enter 'c' to create a new job server and press Enter.

Provide a name for the job server.

Enter the TCP port number for the job server.

Enter 'Y' to enable SNMP for the job server, otherwise 'N'.

If you created the repository with ODBC, enter 'Y'; otherwise 'N'.

Provide the database server name for HANA.

Provide the port number for HANA.

Select the version and provide the user name and password.

Enter 'Y' to confirm the information.

The job server will now be created.

To start or stop the job service:-

After the job server is created, the Server Manager utility opens again.

Enter option '1' to manage the services.

To start the server, enter 's'.

To stop the server, enter the corresponding option.

To come out of the menu, enter 'q'.

Enter 'X' to exit the Server Manager and apply the changes.

The server will be started.
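Putting the terminal steps together, the whole sequence looks like the sketch below (the user object and the installation path are taken from this example; the menu choices '2', 'c', and later '1' and 's' are entered at the interactive svrcfg prompts):

i.e. # su object

i.e. # cd /home/object/bods/dataservices/bin

i.e. /home/object/bods/dataservices/bin> . ./al_env.sh

i.e. /home/object/bods/dataservices/bin> ./svrcfg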


Integrating SAP BusinessObjects with Hadoop:-

Universe Design Using IDT:-

Steps involved in configuring SAP BusinessObjects for use with Hadoop:

  • Configure SAP BusinessObjects with Hive JDBC drivers if the server version is lower than BO 4.0 SP5; from BO 4.0 SP5 onwards, SAP provides Hive connectivity by default.
  • To configure the JDBC drivers in earlier versions, we have to place a set of JAR files.
  • The data access layer allows the SAP BusinessObjects platform to connect to Apache Hadoop Hive 0.7.1 and 0.8.0 databases through JDBC on all platforms.
  • To create a connection to the Hive Thrift server, you first have to place the set of Hive JAR files (listed below; copy commands are sketched after the list) in the Hive driver directory, which is available at the path below:
connectionserver-install-dir/connectionServer/jdbc/drivers/hive

Below are the Hive JAR files we have to copy, based on version 0.7.1:

  1. hadoop-0.20.1-core.jar or hadoop-core-0.20.2.jar
  2. hive-exec-0.7.1.jar
  3. hive-jdbc-0.7.1.jar
  4. hive-metastore-0.7.1.jar
  5. hive-service-0.7.1.jar
  6. libfb303.jar
  7. log4j-1.2.16.jar
  8. commons-logging-1.0.4.jar
  9. slf4j-api-1.6.1.jar
  10. slf4j-log4j12-1.6.1.jar
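As a hedged sketch of the copy step, assuming the Hadoop and Hive libraries sit under the placeholder paths /opt/hadoop and /opt/hive/lib, and writing connectionserver-install-dir for the actual installation directory:

i.e. # cp /opt/hadoop/hadoop-core-0.20.2.jar connectionserver-install-dir/connectionServer/jdbc/drivers/hive/

i.e. # cp /opt/hive/lib/hive-*-0.7.1.jar /opt/hive/lib/libfb303.jar connectionserver-install-dir/connectionServer/jdbc/drivers/hive/

i.e. # cp /opt/hive/lib/log4j-1.2.16.jar /opt/hive/lib/commons-logging-1.0.4.jar /opt/hive/lib/slf4j-*-1.6.1.jar connectionserver-install-dir/connectionServer/jdbc/drivers/hive/

The Connection Server typically has to be restarted afterwards so that the new drivers are picked up.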

Creation of a ROLAP Connection for the Hadoop system using IDT:-

Log in to the Information Design Tool (IDT).

Create a user session with login credentials.

Under the session, open the Connections folder.

Create a new relational connection.

Provide the relational connection name and click on Next.

Under the driver selection menu, select Apache Hadoop Hive JDBC Drivers.

Click on Next.

Provide the Hadoop Hive host name and port as below (a quick server-side check is sketched after these steps):

Hadoop45.wdf.sap.com:10000

Click on Test Connectivity.

If it is successful, save the connection by clicking on Finish.
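If the connectivity test fails, it is worth confirming on the Hadoop side that the Hive Thrift server is actually running and listening. A minimal sketch for Hive 0.7.x, run on the Hive node (10000 is the port from the connection string above):

i.e. # hive --service hiveserver 10000 &

i.e. # netstat -an | grep 10000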


Creation of a universe using Hadoop Hive data:-

Create a project in IDT.

Create a shortcut for the above connection in the project.

Right-click on the relational connection and select Publish Connection to a Repository.

Select your folder and click on Finish.

Now the connection is secured.

Create a data foundation layer and bind the connection to it.

Provide the name for the data foundation and click on Next.

Select the secured relational connection, for example Hiveconn.cns.

Click on Finish and save the data foundation.

This connection is used by the data foundation layer to import data from the server.

In the data foundation layer, drag and drop the required tables and join them.

Save the data foundation layer.

Create a new business layer and bind the data foundation layer to it.

Provide the name for the business layer and select your data foundation layer.

Click on Finish to save the business layer.

Right-click on the business layer and select Publish to Repository.

Run Check Integrity before publishing to verify dependencies.

Log on to the CMC and set the universe access policy for users.

Creation of a WebI report from the Hadoop-Hive universe:-

Log on to BI Launch Pad.

Click on the Web Intelligence application.

Click on Create.

Select Universe as the data source and click OK.

All the universes in the repository will be displayed.

Select the Hadoop-Hive data universe.

Click on Select.

Drag and drop the required fields into the Result Objects pane.

Run the query to create the WebI report.

 

