In-Memory OLTP integration and application migration in SQL Server

In-Memory OLTP application migration

A before-and-after illustration is the best way to compare the internal behavior of SQL Server when transactions are processed using traditional disk-based tables as opposed to memory-optimized tables. Figure 2.4 shows how a traditional transaction from a client application is processed using disk-based tables. Figure 2.6, shown later in the topic, demonstrates the processing behavior when the same tables are migrated to memory-optimized tables and the In-Memory OLTP engine is used. Both figures also illustrate how tightly coupled In-Memory OLTP is with the Database Engine component.

FIGURE 2.4: Client application process using disk-based table access.

In Figure 2.4, the SQL Server Database Engine communicates with the client application by using a Microsoft communication format called the Tabular Data Stream (TDS). The transaction passes through the parser, catalog, and optimizer, and the T-SQL query is compiled for execution. During execution, the required data pages are fetched from storage into the buffer pool, where the changes are made. When the transaction is committed, the log records are flushed to disk, while the changed data and index pages are flushed to disk asynchronously.
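To make the before-and-after comparison concrete, here is a minimal sketch of the kind of disk-based table and transaction the scenario describes. The database objects (dbo.SalesOrder and its columns) are illustrative placeholders, not part of the original example.

```sql
-- Hypothetical disk-based table used as the "before" case (Figure 2.4).
CREATE TABLE dbo.SalesOrder
(
    OrderId     INT           NOT NULL PRIMARY KEY,
    CustomerId  INT           NOT NULL,
    OrderTotal  DECIMAL(10,2) NOT NULL,
    OrderDate   DATETIME2     NOT NULL
);
GO

BEGIN TRANSACTION;
    -- The affected data page is fetched from storage into the buffer pool
    -- and modified there.
    INSERT INTO dbo.SalesOrder (OrderId, CustomerId, OrderTotal, OrderDate)
    VALUES (1, 42, 199.99, SYSDATETIME());
-- On COMMIT the log records are flushed to disk; the changed data and index
-- pages are written to disk asynchronously later.
COMMIT TRANSACTION;
```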


Will In-Memory OLTP improve performance?

The performance improvement is evident when data in a memory-optimized table is accessed from traditional, interpreted Transact-SQL. This performance improvement is even greater when data in a memory-optimized table is accessed from a natively compiled stored procedure (NCSProc).
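As a rough sketch of what a natively compiled stored procedure looks like, the example below wraps a single insert in native compilation. It assumes a memory-optimized table named dbo.SalesOrder_InMem (its definition appears later in this topic); the procedure name and parameters are hypothetical.

```sql
-- Minimal natively compiled stored procedure (NCSProc) sketch.
CREATE PROCEDURE dbo.usp_InsertSalesOrder_Native
    @OrderId    INT,
    @CustomerId INT,
    @OrderTotal DECIMAL(10,2)
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH
(
    TRANSACTION ISOLATION LEVEL = SNAPSHOT,
    LANGUAGE = N'us_english'
)
    -- The body is compiled to machine code when the procedure is created,
    -- so execution avoids per-statement interpretation.
    INSERT INTO dbo.SalesOrder_InMem (OrderId, CustomerId, OrderTotal, OrderDate)
    VALUES (@OrderId, @CustomerId, @OrderTotal, SYSDATETIME());
END;
```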

Figure 2.4 depicts a traditional scenario using disk-based tables. Although the processing times are adequate in this example, it is easy to imagine that the database application will eventually need further performance optimization. When that day comes, organizations can use the native tools in SQL Server to determine whether In-Memory OLTP is right for their environment. Specifically, organizations can use the Analysis, Migrate and Report (AMR) tool built into SQL Server Management Studio. The following steps can determine whether In-Memory OLTP is right for an organization (a simple baseline-capture sketch follows the list):

1. Establish a system performance baseline.
2. Configure the Management Data Warehouse (MDW).
3. Configure data collection.
4. Run a workload.
5. Run the AMR tool.
6. Analyze results from AMR reports and migrate tables.
7. Migrate stored procedures.
8. Run workload again and collect performance metrics.
9. Compare new workload performance results to the original baseline.

Ultimately, the AMR tool analyzes the workload to determine whether In-Memory OLTP will improve performance, and it helps organizations plan and execute their migration to memory-optimized tables. In addition, the report provides scan statistics, contention statistics, execution statistics, table references, and migration issues, giving organizations a wealth of information to support their analysis and, eventually, their migration.


Using the Memory Optimization Advisor to migrate disk-based tables

The Transaction Performance Analysis report informs you which tables in your database will benefit from being ported to In-Memory OLTP. After you identify a table that you would like to port, you can use the Memory Optimization Advisor to help you migrate the disk-based table to In-Memory OLTP.

The Memory Optimization Advisor allows you to:

  • Identify any features used in a disk-based table that are not supported for memory-optimized tables.
  • Migrate a table and its data to a memory-optimized table (if there are no unsupported features).


After running the AMR tool and identifying a table to port to In-Memory OLTP, you can use the Table Memory Optimization Advisor to help migrate specific disk-based tables to memory-optimized tables. Do this by right-clicking a table in Management Studio and then selecting Memory Optimization Advisor. This step invokes a wizard that begins conducting validation tests and providing migration warnings, as illustrated in Figure 2.5. The wizard also requires users to make a number of decisions about memory optimization, such as selecting which memory-optimized filegroup to use, the logical file name, and the file path. Finally, the wizard allows users to rename the original table, estimates current memory cost in megabytes, and prompts users to specify whether to use data durability for copying table data to the new memory-optimized table.

FIGURE 2.5: Using the Table Memory Optimization Advisor checklist to migrate disk-based tables.
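The DDL below approximates what the advisor shown in Figure 2.5 generates behind the scenes: a memory-optimized filegroup with a container, followed by the migrated table with a nonclustered hash primary key and SCHEMA_AND_DATA durability. The database name, file path, and bucket count are placeholders, and the actual output depends on the choices made in the wizard.

```sql
-- Add a memory-optimized filegroup and a container (FILESTREAM-style path).
ALTER DATABASE SalesDB
    ADD FILEGROUP SalesDB_MemOpt CONTAINS MEMORY_OPTIMIZED_DATA;

ALTER DATABASE SalesDB
    ADD FILE (NAME = N'SalesDB_MemOpt_Container',
              FILENAME = N'C:\SQLData\SalesDB_MemOpt_Container')
    TO FILEGROUP SalesDB_MemOpt;
GO

-- Migrated table: memory-optimized and durable (schema and data are persisted).
CREATE TABLE dbo.SalesOrder_InMem
(
    OrderId     INT           NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
    CustomerId  INT           NOT NULL,
    OrderTotal  DECIMAL(10,2) NOT NULL,
    OrderDate   DATETIME2     NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```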

Analyzing In-Memory OLTP behavior after memory-optimized table migration

Now let’s turn to Figure 2.6. Assume that the AMR tool recommended migrating Tables 1 and 2 from the example depicted in Figure 2.4 to memory-optimized tables. Figure 2.6 shows how the same transaction is processed after those tables have been migrated to memory-optimized tables and the stored procedures have been natively compiled.

A new area of memory is added for memory-optimized tables and indexes. In addition, a full suite of new DMVs, XEvents, and other instrumentation is added, allowing the engine to keep track of memory utilization. Finally, a memory-optimized filegroup, which is based on the semantics of FILESTREAM, is also added. Access to memory-optimized tables can occur through query interop, natively compiled stored procedures, or a hybrid approach. Note also that indexes on memory-optimized tables are not persisted; they reside only in memory and are rebuilt when the database is started or brought online.
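For example, a DMV such as sys.dm_db_xtp_table_memory_stats reports how much memory each memory-optimized table and its indexes consume. The query below is a quick sketch, run in the context of the user database from the earlier example.

```sql
-- Memory consumed by memory-optimized tables and their indexes.
SELECT
    OBJECT_NAME(tms.object_id)       AS table_name,
    tms.memory_used_by_table_kb,
    tms.memory_used_by_indexes_kb
FROM sys.dm_db_xtp_table_memory_stats AS tms
ORDER BY tms.memory_used_by_table_kb DESC;
```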

Query interop is the easiest way to migrate an application to In-Memory OLTP and access memory-optimized tables. This method does not use native compilation; it uses either ad hoc interpreted Transact-SQL or traditional stored procedures, which is the approach depicted by option 1 in Figure 2.6. Alternatively, natively compiled stored procedures are the fastest way to access data in memory-optimized tables; this approach is depicted by option 2 in Figure 2.6.
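Continuing the hypothetical SalesOrder_InMem example, the two access paths look like this in practice: option 1 issues ordinary interpreted T-SQL against the memory-optimized table, and option 2 executes the natively compiled procedure sketched earlier.

```sql
-- Option 1: query interop. Plain interpreted T-SQL against the memory-optimized table.
SELECT OrderId, CustomerId, OrderTotal
FROM dbo.SalesOrder_InMem
WHERE CustomerId = 42;

-- Option 2: natively compiled stored procedure, the fastest access path.
EXEC dbo.usp_InsertSalesOrder_Native
     @OrderId = 2, @CustomerId = 42, @OrderTotal = 249.50;
```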

FIGURE 2.6: Client application process based on memory-optimized table access.
