The SAP HANA Studio is the primary interface for developers, administrators, and data modelers.
1. HANA Studio is an Eclipse-based integrated development environment (IDE) used to develop artifacts on an SAP HANA server.
2. It enables technical users to manage the SAP HANA database, create and manage user authorizations, and create new or modify existing data models.
3. It is a client tool that can be used to access a local or remote HANA system.
It is based on the open-source Eclipse framework, and it consists of three perspectives: the administration console, the information modeler, and lifecycle management.
The administration console of the studio allows system administrators to administer and monitor the database. It includes database status information as well as functions to start/stop the database, create backups, perform a recovery, change the configuration, and so on. It provides an all-in-one support environment for system monitoring, backup and recovery, and user provisioning.
The information modeler, also known as the HANA Data Modeler, is the heart of the HANA system. It enables users to create modeling views on top of database tables and to implement business logic that turns raw data into meaningful reports for analysis. Users can create new data models or modify existing ones.
Lifecycle management supports you in all phases of an SAP HANA application lifecycle, from modeling your product structure through application development, transport, assembly, and installation. It provides automated SAP HANA service pack (SP) updates using the SAP Software Update Manager for SAP HANA (SUM for SAP HANA).
SAP HANA is a platform that offers new levels of data modeling that exceed what’s possible with traditional relational database management systems (RDBMS). But it requires data to be handled in more sophisticated ways to achieve maximum performance.
Business and IT users can either create on-the-fly, non-materialized data views or build reusable ones on top of standard SQL tables via an intuitive user interface, using SQLScript and stored procedures to implement business logic on the data models. Information models created in SAP HANA can be consumed directly by SAP BusinessObjects BI clients or indirectly via the Universe/Semantic Layer built on top of SAP HANA views.
Information models in SAP HANA are a combination of attributes/dimensions and measures. SAP HANA provides three types of modeling views:
1. Attribute views are built on dimensions or subject areas used for business analysis
2. Analytical views are multidimensional views or OLAP cubes that enable users to analyze measures from a single fact table related to the dimensions in the attribute views
3. Calculation views are used to create custom data sets on the fly to address complex business requirements, using database tables, attribute views, and analytical views
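The relationship between the three view types can be sketched conceptually in plain Python (not SQLScript); all table names, data, and logic here are invented for illustration:

```python
# "Attribute view": a dimension, i.e. descriptive attributes keyed by an ID.
product_dim = {
    "P1": {"name": "Laptop", "category": "Electronics"},
    "P2": {"name": "Desk",   "category": "Furniture"},
}

# Fact table: transactional measures referencing the dimension key.
sales_fact = [
    {"product_id": "P1", "revenue": 1200.0},
    {"product_id": "P1", "revenue": 800.0},
    {"product_id": "P2", "revenue": 300.0},
]

def analytic_view(fact, dim):
    """'Analytic view': join the single fact table to the dimension and
    aggregate a measure per dimension attribute (a star-schema query)."""
    totals = {}
    for row in fact:
        category = dim[row["product_id"]]["category"]
        totals[category] = totals.get(category, 0.0) + row["revenue"]
    return totals

def calculation_view(fact, dim, min_revenue):
    """'Calculation view': custom logic layered on top of the analytic
    view, here filtering aggregated categories on the fly."""
    return {k: v for k, v in analytic_view(fact, dim).items()
            if v >= min_revenue}

print(analytic_view(sales_fact, product_dim))
print(calculation_view(sales_fact, product_dim, 500.0))
```

The point is the layering: the calculation view reuses the analytic view, which in turn reuses the dimension, mirroring how HANA views are composed without materializing intermediate results.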
In traditional databases, users experience bottlenecks when changing business requirements force modifications to the existing data model, which requires them to delete and reload data into materialized views. In contrast, SAP HANA loads data at the lowest level of granularity, enabling dynamic data modeling. Raw data is constantly available in memory for analytical purposes; it is not pre-loaded into caches, physical aggregate tables, index tables, or any other redundant data store.
Data provisioning is the process of creating, preparing, and enabling a system to provide data to its users. Data needs to be loaded into SAP HANA before it reaches the user via a front-end tool.
SAP HANA offers both real-time replication and near real-time/batch replication to move data from source systems to the SAP HANA database. Replication-based data provisioning like Sybase Replication Server or SAP SLT (System Landscape Transformation) provides near real-time synchronization of data sets between the source system and SAP HANA. After the initial replication of historical records, the changed data are pushed from the source to SAP HANA based on triggers such as table updates. SAP SLT can also be used to “direct write” data back to the source system in scenarios where “write back” or “round trip” synchronization to the SAP source system is needed.
ETL-based data provisioning is primarily accomplished with SAP BusinessObjects Data Services (DS). DS loads snapshots of data periodically as a batch and is triggered from the target system. The type of data provisioning tool used is primarily determined by the business needs of the use case and the characteristics of the source system.
SLT replicator provides near-real-time and scheduled data replication from SAP source systems to SAP HANA. It is based on SAP’s proven System Landscape Optimization (SLO) technology that has been used for many years for Near Zero Down Time upgrade and migration projects. Trigger-Based Data Replication using SLT is based on capturing database changes at a high level of abstraction in the source SAP system. It benefits from being database and OS agnostic, and it can parallelize database changes on multiple tables or by segmenting large table changes. SLT can be installed on an existing SAP source system or as an additional lightweight SAP system side-by-side with the source system.
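The trigger-based mechanism described above can be sketched in a few lines of Python. This is a conceptual simulation only; real SLT records changes in logging tables inside the source system and replays them into HANA. All names and structures here are invented:

```python
source = {}         # source-system table: key -> row
logging_table = []  # changes captured "by trigger" on the source

def write_source(key, row):
    """Simulates a write to the source system; a trigger records the change."""
    source[key] = row
    logging_table.append(("UPSERT", key, row))

target = {}  # stands in for the SAP HANA target table

def initial_load():
    """One-time replication of the existing historical records."""
    target.update(source)

def replicate():
    """Drain the logging table and apply captured changes to the target.
    Upserts are idempotent, so re-applying an already-loaded row is safe."""
    while logging_table:
        op, key, row = logging_table.pop(0)
        if op == "UPSERT":
            target[key] = row

write_source(1, {"name": "alice"})
initial_load()
write_source(2, {"name": "bob"})  # a change arriving after the initial load
replicate()
print(target)
```

After `replicate()`, the target holds both the initially loaded row and the delta captured by the trigger, which is the essence of near-real-time, trigger-based replication.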
SAP HANA also supports real-time replication with direct write using database shared library (DBSL) connection. Using DBSL, the SAP HANA database can be connected as a secondary database to an SAP ECC system and provide accelerated data processing for existing SAP applications. Applications can use the DBSL on the application server layer to simultaneously write to traditional databases and the SAP HANA database.
The Extraction-Transformation-Load (ETL) based data replication scenario uses SAP BusinessObjects Data Services to load the relevant business data from virtually any source system (SAP and non-SAP) to the SAP HANA database. This method enables you to read the required business data at the level of the application layer. You deploy this method by defining data flows in Data Services and scheduling the replication jobs.
SAP BusinessObjects Data Services is a proven ETL tool that supports broad connectivity to databases, applications, legacy, file formats, and unstructured data. It provides the modeling environment to model data flows from one or more source systems along with transformations and data cleansing.
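A batch ETL flow of the kind Data Services models can be sketched as three plain functions. The source rows, cleansing rules, and names below are made up for illustration:

```python
# Pretend source-system extract: raw, messy records.
source_rows = [
    {"customer": " Alice ", "country": "de", "revenue": "100"},
    {"customer": "Bob",     "country": "US", "revenue": "250"},
    {"customer": "",        "country": "US", "revenue": "50"},  # bad record
]

def extract():
    """Pull a snapshot of the source rows (batch, not real-time)."""
    return list(source_rows)

def transform(rows):
    """Cleansing: trim names, normalize country codes, cast types,
    and drop records that fail a data-quality rule."""
    out = []
    for r in rows:
        name = r["customer"].strip()
        if not name:
            continue  # reject records with an empty customer name
        out.append({"customer": name,
                    "country": r["country"].upper(),
                    "revenue": float(r["revenue"])})
    return out

def load(rows, target):
    """Append the cleansed snapshot to the target table."""
    target.extend(rows)

target_table = []
load(transform(extract()), target_table)
print(target_table)
```

Unlike trigger-based replication, this job is triggered from the target side and moves a periodic snapshot rather than a continuous stream of changes.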
The SAP HANA Studio Administration Console provides an all-in-one environment for system monitoring, backup and recovery, and user provisioning.
The System Monitor in HANA Studio provides an overview of all your HANA systems at a glance. From the System Monitor, you can drill down into the details of an individual system in the Administration Editor. It shows usage of the data, log, and trace disks, as well as prioritized alerts on resource usage.
The Administration console provides tools to monitor the system’s status, its services, and the consumption of its resources. Administrators are notified by an alert mechanism when critical situations arise. Analytics and statistics on historical monitoring data are also provided to enable efficient data center operations and for planning future resource allocations.
The Administration console in the SAP HANA Studio supports the following backup and recovery scenarios:
Recovery to the last data backup
Recovery to both the last and previous data backups
Recovery to last state before the crash
Point-in-time recovery
Storage snapshots (SP7)
Log replay on storage snapshots (SP7)
Scale-out support (new hosts are backed up automatically, also added in SP7)
In the event of disaster scenarios such as fires, power outages, earthquakes, or hardware failures, SAP HANA supports hot standby using synchronous mirroring to a redundant data center (including a redundant SAP HANA system), in addition to cold standby using a standby system within one SAP HANA landscape, where failover is triggered automatically.
SAP HANA supports user provisioning with authentication, role-based security, and analysis authorization using analytic privileges. Analytic privileges secure analytical objects based on a set of attribute values; these values are applied to a set of users by assigning the privilege to a user or role.
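The effect of an attribute-value-based analytic privilege can be sketched as a row filter. The roles, privileges, and data below are invented, and real HANA evaluates privileges in the engine rather than in application code:

```python
rows = [
    {"region": "EMEA", "revenue": 10},
    {"region": "APAC", "revenue": 20},
    {"region": "EMEA", "revenue": 5},
]

# Analytic privilege: attribute -> set of allowed values; assigned to a role.
privileges = {"emea_analyst": {"region": {"EMEA"}}}
user_roles = {"alice": ["emea_analyst"]}

def query(user, rows):
    """Return only the rows whose attribute values the user's roles allow."""
    allowed = {}
    for role in user_roles.get(user, []):
        for attr, values in privileges.get(role, {}).items():
            allowed.setdefault(attr, set()).update(values)
    if not allowed:
        return []  # deny by default: no privilege means no rows
    return [r for r in rows
            if all(r.get(a) in v for a, v in allowed.items())]

print(query("alice", rows))  # EMEA rows only
print(query("bob", rows))    # no privilege assigned: empty result
```

The same data model thus yields different result sets per user, which is what makes analytic privileges suitable for shared reporting views.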
Ever since SAP HANA was initially released in 2010, SAP has demonstrated an unwavering commitment to innovation and continual product improvement. In the previous service packs (SP5/SP6), SAP added extensive new functionality, such as collapsing the architectural layers between application and data processing and converging database and application services into a single in-memory architecture. SAP HANA SP5 also supplied numerous capabilities such as enhanced security, high availability via data replication, more sophisticated text analysis, and an array of functionality (known as SAP HANA Extended Application Services) that gives developers access to the database via a consumption model over HTTP. Taken together, these improvements greatly bolstered SAP HANA's proficiency in large-scale data center deployments.
With SAP HANA SP6, SAP builds on this solid legacy by debuting SAP HANA smart data access technology. This technology helps enterprises to dynamically derive real-time insights across heterogeneous sources such as Hadoop. New in-memory spatial capabilities in SAP HANA enable organizations to uncover richer and more meaningful signals from the business and geospatial data. And the completion of the integration of Sybase data management with SAP HANA will further transform customers’ end-to-end data management landscape.
SAP HANA SP7 improvements can be broken into three primary categories:
Enhanced information processing to help customers get the most from their data.
Augmented developer and DBA productivity to deliver applications more quickly.
Operational improvements to extend performance and reliability.
Managing the massive data volumes that characterize Big Data environments is very costly: it takes a lot of time, requires additional, often redundant storage and network capacity, and mandates superfluous maintenance activities. It also delivers results more slowly to users. Thus, avoiding unnecessary large-scale data movements is a core best practice for optimal Big Data productivity; it’s simply better and more efficient to bring the computation to the data, rather than the other way round. This tactic is particularly powerful when combined with in-memory processing.
By expanding upon the inherently powerful SAP HANA information processing capabilities, SAP enables application developers to leverage the full strength of the database through analytics libraries, predictive results exploration, text search, geospatial data, and so on. Developers are able to process Big Data and combine the results with real-time analytics, in a single query. This also makes it feasible to access and synthesize enterprise data regardless of location, size, and representation — without needing to move any information.
Smart Data Access is a data virtualization feature in SAP HANA that allows customers to access data virtually from remote sources such as Hadoop, Oracle, Teradata, SQL Server and SAP databases and combine it with data that resides in an SAP HANA database.
SAP HANA smart data access provides data virtualization capabilities that expedite dynamic data queries across heterogeneous relational and non-relational database systems such as Hadoop, SAP Sybase Adaptive Server Enterprise (SAP Sybase ASE), SAP Sybase IQ, Teradata, and SAP HANA itself. In SP7, support for Oracle and Microsoft SQL Server was added, as well as the ability to use Sybase Event Stream Processor as a data source. Using the generic adapter framework, other data sources can be added as well, provided they support ODBC.
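The virtualization idea, querying a remote source at request time and joining the result with local data instead of copying the remote data set wholesale, can be sketched as follows. The adapter, tables, and rows are invented for illustration:

```python
def hadoop_adapter():
    """Stands in for a remote source reached via an ODBC adapter;
    yields rows lazily rather than materializing the whole data set."""
    yield {"sensor": "s1", "reading": 7}
    yield {"sensor": "s2", "reading": 3}

# Data resident locally in the HANA database.
local_table = {"s1": "Plant A", "s2": "Plant B"}

def federated_query(remote, local, min_reading):
    """Filter remote rows at query time, then join them with local data.
    Only qualifying remote rows are ever pulled across."""
    return [{"plant": local[r["sensor"]], "reading": r["reading"]}
            for r in remote if r["reading"] >= min_reading]

print(federated_query(hadoop_adapter(), local_table, 5))
```

Pushing the filter toward the remote source and transferring only qualifying rows is what lets such federation minimize data movement and redundancy.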
SAP is bringing the complete potential of the SAP HANA platform to provide data virtualization, which will simplify data queries across heterogeneous sources, while optimizing response time based on where data is stored and how it is used. SAP HANA smart data access helps customers construct real-time big data applications with fast and secure query access to data across their business networks, while minimizing unnecessary data transfers and data redundancy.
SAP has expanded the scope of the SAP HANA platform with the incorporation of new spatial data processing capabilities that combine geospatial data with business data, resulting in a new dimension for real-time business applications. Customers and ISVs are able to process blends of spatial, predictive, and text analysis results within one SQL statement, providing simplified development of intelligent and intuitive location-based solutions. Additionally, SAP HANA customers are able to utilize geo-content at no additional charge, making it possible to seamlessly develop and deploy spatially-based solutions using native in-memory processing, mapping content, and services.
These new spatial capabilities, content, and services will be exceptionally useful for industries that require real-time, mission-critical location solutions for planning, monitoring, and analytics, combined with faster performance, and decreased total cost of ownership (TCO).
For example, energy infrastructure companies can employ the spatial processing capabilities in SAP HANA to immediately identify and take action on high-risk pipeline components based on numerous variables, all in real-time. This newfound agility will help reduce potential outages, maintenance costs, and the risk of catastrophic failure.
SAP HANA SP6 and SP7 furnish developers with an assortment of new and augmented techniques for working with natural language via full text and fuzzy searches. In particular, fuzzy search now incorporates compound words, column conditions, predefined columns, and additional filter conditions. Text analysis has been strengthened with new language identification intelligence, along with greater fact extraction throughput.
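To illustrate the idea of fuzzy search (ranking candidates by a similarity score above a threshold), here is a plain-Python sketch using the standard library's `difflib`. HANA's actual fuzzy-scoring algorithm and SQL syntax differ; the names and threshold below are invented:

```python
import difflib

def fuzzy_search(query, candidates, threshold=0.7):
    """Return (candidate, score) pairs whose similarity to the query is
    at least `threshold`, best matches first."""
    scored = [(c, difflib.SequenceMatcher(None, query.lower(),
                                          c.lower()).ratio())
              for c in candidates]
    return sorted([s for s in scored if s[1] >= threshold],
                  key=lambda s: s[1], reverse=True)

names = ["Miller", "Mueller", "Muller", "Schmidt"]
print(fuzzy_search("Muller", names))
```

An exact match scores 1.0, near-spellings like "Mueller" and "Miller" score high enough to pass the threshold, and unrelated names like "Schmidt" are filtered out, which is the behavior fuzzy search brings to name and text columns.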
SAP HANA multilingual text mining capabilities have also been enhanced to support additional languages such as Simplified Chinese.
This makes it much easier to measure and monitor client sentiment instantaneously, and thus use the “voice of the customer” to pinpoint sentiments, problems, and requests. The result: better, more informed decisions.
Fully exploiting the power of Big Data means gaining insight into the entire range of the enterprise’s information. It’s now possible to fuse data from SAP HANA and SAP Sybase IQ into a single logical database that’s visible from SAP Business Warehouse. This means that customers can now achieve real-time, in-memory processing with near-line storage at petascale data sizes, along with full visibility across all information silos.
Complex event processing has proven to be one of the most prolific sources for Big Data. SAP Sybase Event Stream Processor (ESP) is now integrated with SAP HANA, simplifying comprehensive integration with machine and event-driven data.
This amalgamation means that customers can now construct queries and applications that associate structured and unstructured data, including the massive streams of information captured by SAP Sybase ESP.