SQL Server In-Memory OLTP Architecture Overview
In the next few sections, we shall discuss In-Memory OLTP fundamentals, architecture, concepts, terminology, hardware and software requirements, and some myths about how it is implemented in SQL Server 2014.
Four In-Memory OLTP architecture pillars
Before thinking about how to use In-Memory OLTP, it is important to understand the underlying architecture. In-Memory OLTP is built on four pillars, developed in the context of industry hardware and business trends to offer customer benefits. Figure 2.3 summarizes the four pillars and their associated customer benefits.
FIGURE 2.3: Pillars of In-Memory OLTP architecture.
Main memory optimized
As memory prices steadily declined and memory sizes rapidly grew, it became feasible to store tables in main memory rather than placing and accessing them on disk. This change brought a significant reduction in the time required to access tables, because pages no longer had to be read into the cache from disk. New functionality, such as hash indexes and nonclustered range indexes, exploits data that resides in physical memory rather than on disk, allowing faster access and higher performance for data operations.
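As an illustration, here is a minimal sketch of a memory-optimized table that uses a hash index on its primary key and a nonclustered range index on a second column. The table, column, and index names are hypothetical, and the example assumes the database already contains a memory-optimized filegroup:

```sql
-- Hypothetical example table; assumes the database already has a
-- filegroup declared with CONTAINS MEMORY_OPTIMIZED_DATA.
CREATE TABLE dbo.ShoppingCart
(
    CartId      INT       NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    CustomerId  INT       NOT NULL
        INDEX ix_CustomerId NONCLUSTERED,  -- nonclustered range index
    CreatedDate DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```

For hash indexes, BUCKET_COUNT is typically sized at roughly one to two times the expected number of unique key values; the value shown here is only a placeholder.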
T-SQL compiled to machine code
Natively compiled stored procedures are Transact-SQL stored procedures compiled to native code that access memory-optimized tables. Natively compiled stored procedures allow for efficient execution of the queries and business logic in the stored procedure.
Natively compiled stored procedures do not implement the full Transact-SQL programmability and query surface area. There are certain Transact-SQL constructs that cannot be used inside natively compiled stored procedures.
The SQL Server product group recognized that if it could reduce the number of instructions needed to execute the same logic, it could do the same work in less processing time. The product group implemented this idea by transforming Transact-SQL stored procedures into C programs and then compiling each program into a DLL by using the Visual C compiler. The resulting machine code replaces the usual interpretation of the stored procedure's logic during query execution, making it possible to run the procedure with far fewer instructions, leading to more efficient and significantly faster business-logic processing. Internal testing at the lower level of the storage engine verified that machine code could reduce the instruction count by 30 to 50 times, which resulted in a proportional increase in throughput and lower latency.
Here is a partial list of the Transact-SQL constructs that cannot be used in a natively compiled stored procedure:
- Cursors
- Multirow INSERT statements
- Common table expressions (CTEs)
- Subqueries
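To illustrate the supported surface area, here is a minimal sketch of a natively compiled stored procedure; the procedure name and the table it references are hypothetical. In SQL Server 2014, such procedures require the NATIVE_COMPILATION and SCHEMABINDING options, an EXECUTE AS clause, and an ATOMIC block:

```sql
-- Hypothetical procedure; assumes dbo.ShoppingCart is a memory-optimized
-- table with columns (CartId, CustomerId, CreatedDate).
CREATE PROCEDURE dbo.usp_AddToCart
    @CartId INT,
    @CustomerId INT
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    -- The body is compiled to machine code in a DLL at CREATE time.
    INSERT INTO dbo.ShoppingCart (CartId, CustomerId, CreatedDate)
    VALUES (@CartId, @CustomerId, SYSDATETIME());
END;
```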
SQL Server has scaled extremely well because of the performance and scalability improvements made over past releases. Unfortunately, certain application patterns (for example, last-page inserts in clustering-key order, or concurrent updates of hot pages) still suffered latch contention and, as a result, did not scale as well.
When many people attempt to modify data in a database at the same time, a system of controls must be implemented so that modifications made by one person do not adversely affect those of another person. This is called concurrency control.
Concurrency control theory classifies the methods of instituting concurrency control into two categories:
- Pessimistic concurrency control
- Optimistic concurrency control
The additional improvements implemented through In-Memory OLTP in SQL Server 2014 allow for higher concurrency. First, the product group did away with page structures for accessing memory-optimized tables, so no paging or latching occurs to create bottlenecks. Second, the core engine uses lock-free data structures based on multiversion optimistic concurrency control with full ACID (atomic, consistent, isolated, and durable) support. These improvements remove common scalability bottlenecks and provide high concurrency and frictionless scale-up opportunities that increase overall performance when memory-optimized tables are used.
SQL Server integration
Here, the SQL Server product group decided that In-Memory OLTP should be easy to consume and that performance-critical tables should take advantage of this feature. What evolved from this idea is an In-Memory OLTP engine that is fully integrated into the SQL Server Database Engine and managed with a familiar set of tools. People who are familiar with SQL Server can quickly make use of the benefits of In-Memory OLTP because the management, administration, and development experiences are the same. Moreover, In-Memory OLTP works seamlessly with other features, such as Always On Availability Groups, Always On Failover Cluster Instances, replication, backups, and restores.
In-Memory OLTP concepts and terminology
This section reviews the following In-Memory OLTP concepts and terminology:
- Disk-based tables: This is the traditional way in which SQL Server has stored data since the product’s inception. Data in a table is stored in 8-KB pages that are read from and written to disk. Each table also has its own data and index pages.
- Memory-optimized tables: Memory-optimized tables are the alternative to traditional disk-based tables and follow the new structures associated with In-Memory OLTP. The primary store for memory-optimized tables is main memory, but a second copy in a different format is maintained on disk for durability purposes.
- Native compilation: To achieve faster data access and more efficient query execution, SQL Server natively compiles memory-optimized tables and the stored procedures that access them. The output of native compilation is a DLL that is loaded into the SQL Server process. These DLLs are generated when a memory-optimized table or natively compiled stored procedure is created, so no additional step is required. Because the logic is compiled to machine code ahead of time, the need for further compilation and interpretation at execution time is removed, which provides performance enhancements beyond those of memory-optimized tables alone.
- Interop: In this process, interpreted Transact-SQL batches and stored procedures are used instead of a natively compiled stored procedure when accessing data in a memory-optimized table. Interop is used to simplify application migration.
- Cross-container transactions: Cross-container transactions are implicit or explicit user transactions that include calls to natively compiled stored procedures or operations on memory-optimized tables. In SQL Server, calls to stored procedures do not initiate a transaction, so executions of natively compiled procedures in autocommit mode (not in the context of a user transaction) are not considered cross-container transactions. Any interpreted query that references memory-optimized tables is considered part of a cross-container transaction, whether executed from an explicit or implicit transaction or in autocommit mode. This is a hybrid approach in which transactions use both memory-optimized tables and disk-based tables.
- Durable and nondurable tables: By default, memory-optimized tables are completely durable and offer full ACID support. Nondurable memory-optimized tables are also supported, but their contents exist only in memory and are lost when the server restarts. The syntax DURABILITY = SCHEMA_ONLY is used to create nondurable tables.
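Because every memory-optimized table and natively compiled stored procedure produces a native DLL, you can observe the native compilation described above by querying the sys.dm_os_loaded_modules dynamic management view:

```sql
-- Lists the native DLLs that In-Memory OLTP has loaded into the
-- SQL Server process (one per memory-optimized table or natively
-- compiled stored procedure).
SELECT name, description
FROM sys.dm_os_loaded_modules
WHERE description = 'XTP Native DLL';
```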
Hardware and software requirements for memory-optimized tables
The tight integration of In-Memory OLTP with SQL Server 2014 gives organizations a unified experience in every area, including, but not limited to, deployment and support. However, before you try this new capability, you should become acquainted with the requirements for using memory-optimized tables. In addition to the general hardware and software requirements for installing SQL Server 2014 (“SQL Server 2014 editions and engine enhancements”), the following requirements apply to memory-optimized tables:
- Production environments require the 64-bit Enterprise edition of SQL Server 2014 with the Database Engine Services component. The Developer edition can also be used for development and testing. 32-bit environments are not supported.
- SQL Server requires sufficient memory to store the data in memory-optimized tables and their indexes. You must configure enough memory for the tables and indexes to be fully resident in memory.
- When configuring memory for SQL Server, you should also account for the size of the buffer pool needed for disk-based tables and for other internal structures.
- Free disk space equal to twice the size of your durable memory-optimized tables.
- The processor used for the instance of SQL Server must support the cmpxchg16b instruction; all modern 64-bit processors do. If you are using a VM host application and SQL Server displays an error caused by an older processor, check whether the application has a configuration option to enable cmpxchg16b. If not, you could use Hyper-V, which supports cmpxchg16b without needing to modify a configuration option.
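Beyond these requirements, a database must contain a memory-optimized filegroup before it can hold memory-optimized tables. Here is a minimal sketch; the database name and file path are hypothetical:

```sql
-- Add a filegroup for memory-optimized data, then add a container to it.
ALTER DATABASE SalesDB
    ADD FILEGROUP SalesDB_mod CONTAINS MEMORY_OPTIMIZED_DATA;

ALTER DATABASE SalesDB
    ADD FILE (NAME = 'SalesDB_mod_dir',
              FILENAME = 'C:\Data\SalesDB_mod_dir')
    TO FILEGROUP SalesDB_mod;
```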
In-Memory OLTP use cases
Many use cases show the benefits of In-Memory OLTP. Consider these scenarios:
- An application that is incurring high latch contention can alleviate this contention and scale up by converting the tables from disk-based tables to memory-optimized tables.
- Natively compiled stored procedures can be used to address low-latency scenarios because In-Memory OLTP reduces the response times associated with poorly performing procedures (assuming that the business logic can be compiled).
- Many scale-out operations that require only read access suffer from CPU performance bottlenecks. By moving the data to In-Memory OLTP, it is possible to significantly reduce CPU utilization, which lets you use existing processing resources to achieve higher throughput.
- Think about the data-staging and load phases of a typical ETL process. At times, numerous operations need to be completed, including gathering data from an outside source and uploading it to a staging table in SQL Server, making changes to the data, and then transferring the data to a target table. For these types of operations, nondurable memory-optimized tables provide an efficient way to store staging data by eliminating storage cost entirely, including transaction logging.
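The staging scenario above might be sketched with a nondurable table such as the following (all names are hypothetical). Because of DURABILITY = SCHEMA_ONLY, inserts into the table generate no transaction log records, and its contents are lost when the server restarts:

```sql
-- Hypothetical staging table for an ETL load; only the schema survives
-- a restart, so the table must be repopulated by the ETL process.
CREATE TABLE dbo.Staging_Orders
(
    OrderId INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000),
    Payload NVARCHAR(1000) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
```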
Myths about In-Memory OLTP
Before moving on to the next section and walking through some In-Memory OLTP examples, it’s useful to rectify some of the misconceptions surrounding In-Memory OLTP.
- Myth 1: SQL Server In-Memory OLTP is a recent response to competitors’ offerings
- Reality: Work on In-Memory OLTP commenced approximately four years before SQL Server 2014 shipped, in response to business and hardware trends occurring in the industry.
- Myth 2: In-Memory OLTP is like DBCC PINTABLE
- Reality: DBCC PINTABLE was an old command in SQL Server 7 that caused the pages associated with a table to be read into the buffer pool and kept in memory instead of being evicted. Although there are some similarities, In-Memory OLTP is a new design focused on optimizing in-memory data operations; there are no pages or buffer pool for memory-optimized tables.
- Myth 3: In-memory databases are new separate products
- Reality: Unlike many of its competitors’ offerings, In-Memory OLTP is fully integrated into SQL Server 2014. If you know SQL Server, the management and development experience is already familiar.
- Myth 4: You can use In-Memory OLTP in an existing SQL Server application without any changes
- Reality: In reality, a few changes are required. At the very least, some schema changes need to be made.
- Myth 5: Since tables are in memory, the data is not durable or highly available —I will lose it after a server crash
- Reality: In-Memory OLTP is fully durable and works with several high-availability features, including the Always On features. Data persists on disk and will survive a server crash.