Blog: Is In-Memory OLTP a Useless Feature?


What makes a feature useless? I might agree that if absolutely NO ONE uses it, or has ever used it, then it is useless, by obvious definition. But if some people use it, though not everyone, is that useless? What if only 1% of product users employ a feature in production? Is that useless? What if those 1% base their entire production system around the feature? Is that more or less useless than another feature used by 80% of the installed customer base in only one or two queries, and for which there is another way to accomplish the same task with equal efficiency?

I suggest that there isn’t a clear way to measure uselessness once the usage rate is non-zero. I can even imagine some features that might have seemed exciting when first introduced, inspiring a developer to spend weeks or even months building a proof of concept or prototype. And then what if the prototype failed to behave as anticipated? I can imagine the developer would be left with feelings quite a bit less than warm and fuzzy, and might single-handedly, out of frustration, declare the feature ‘useless’. That, of course, does not mean that no one else was able to use the feature to their decided advantage.

Adam Machanic, a long-time colleague of mine, recently posted a list of the ‘most useless features of SQL Server’, gathered from suggestions made by his Twitter followers: http://sqlblog.com/blogs/adam_machanic/archive/2017/06/14/the-sql-hall-of-shame.aspx. I was quite surprised to see that one of the features listed was In-Memory OLTP. Admittedly, he did not claim the feature was completely useless, but was echoing someone else’s tweet, and he seemed to imply there might be some (very limited) use cases for this feature. I’ll admit that after having written two books on the topic, I am not completely unbiased about this technology. But I am actually baffled as to how this feature could be called ‘useless’ when many SQL Server production applications are getting great use out of it.

The code name for this technology, introduced in SQL Server 2014, is Hekaton, which comes from the Greek word for 100. Microsoft’s original vision was that memory optimization could potentially provide up to 100 TIMES the throughput for certain OLTP workloads. And even before SQL Server 2014 was publicly released, there were production applications using this technology and achieving more than a 30-fold improvement. By the time SQL Server 2016 was released, the 100-fold increase had been realized. This doesn’t mean that every workload in every application will get this kind of improvement; it only means that it is possible.

SQL Server 2014 had some major limitations, so it was true that many potential adopters decided not to pursue in-memory technology, except for temporary objects. But even with just the improvements in accessing temporary objects, many applications could see very measurable performance gains. So even with all the limitations, there were applications using In-Memory OLTP in SQL Server 2014 because of the enormous benefits.
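To give a flavor of the temporary-object scenario, here is a minimal sketch (the type, column, and index names are just placeholders I made up for illustration) of a memory-optimized table type standing in for a traditional temp table or table variable. The database needs a MEMORY_OPTIMIZED_DATA filegroup before this will run:

```sql
-- Hypothetical memory-optimized table type used as a temp-table replacement.
-- Variables of this type live entirely in memory: no tempdb pages, no latching.
CREATE TYPE dbo.SalesStaging AS TABLE
(
    OrderID  int NOT NULL,
    Quantity int NOT NULL,
    INDEX ix_OrderID HASH (OrderID) WITH (BUCKET_COUNT = 1024)
)
WITH (MEMORY_OPTIMIZED = ON);
GO

-- Use it just like an ordinary table variable.
DECLARE @Staging dbo.SalesStaging;
INSERT INTO @Staging (OrderID, Quantity) VALUES (1, 5), (2, 12);
SELECT OrderID, Quantity FROM @Staging;
```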

SQL Server 2016 relaxed many of the most objectionable restrictions and opened the door for many more developers to start exploring this technology. For a list of the changes, you can read the official documentation here: https://docs.microsoft.com/en-us/sql/database-engine/configure-windows/what-s-new-in-SQL-server-2016-database-engine#InMemory or take a look at this blog post, which describes all the changes up through SQL Server 2016 CTP3: https://blogs.msdn.microsoft.com/sqlserverstorageengine/2015/11/12/in-memory-oltp-whats-new-in-sql2016-ctp3/

Adam indicated in his post that the reason the In-Memory OLTP technology was designed was specifically to deal with certain kinds of latch contention. My conversations with the engineers at Microsoft do not support this assumption. I was told “The original reason for developing this technology was to respond to hardware trends, in terms of memory size as well as core count.”

Below is one of my favorite graphics which I frequently use to introduce In-Memory OLTP in classes or conference sessions. It shows four “pillars” on which In-Memory OLTP was envisioned, designed and built. Eliminating latch contention is just one part of one pillar.

[Figure: The four pillars of In-Memory OLTP]

The first pillar deals with having more memory available. More memory means that, for many workloads, the working set now fits entirely in memory, which allows for different kinds of data structures and data access algorithms. Movement of data between disk and memory is no longer the most prevalent cost in processing data-driven queries.

The third pillar deals with contention. Having more cores means more concurrent requests in the system, and thus more potential for friction, especially when those requests access the same physical data structures. So in addition to designing more efficient data structures to handle more data in memory all the time, the design of the In-Memory OLTP technology eliminated logical locks, physical latches, and spinlocks. Even if you have short transactions and conflicts are rare, it is much more efficient to go lock-free and have the code just retry an operation than to take locks and wait. All of these synchronization mechanisms impose a CPU overhead for lock management as well as for context switching. Both acquiring and releasing locks requires CPU cycles, and context switches are especially expensive, costing hundreds or even thousands of cycles.
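To make the lock-free claim a bit more concrete, here is a minimal sketch of a durable memory-optimized table (the table and column names are hypothetical, not taken from any real case study). Reads and writes against it take no locks or latches; concurrency is handled with row versions and optimistic conflict detection:

```sql
-- Hypothetical durable memory-optimized table with a hash index on the key.
CREATE TABLE dbo.Accounts
(
    AccountID  int       NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    Balance    money     NOT NULL,
    ModifiedAt datetime2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```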

After making data access much more efficient (both through a reduced code path and by removing waits), query execution turned out to be one of the biggest remaining bottlenecks, requiring a lot of CPU. Native compilation, covered in the second pillar, significantly reduces the number of CPU cycles by compiling the query plan into executable machine code.
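Here is what that looks like in practice, as a sketch building on the hypothetical dbo.Accounts table above: a natively compiled stored procedure is declared with NATIVE_COMPILATION and an atomic block, and at creation time it is compiled to machine code instead of being interpreted statement by statement at run time:

```sql
-- Hypothetical natively compiled procedure against dbo.Accounts (defined above).
CREATE PROCEDURE dbo.UpdateBalance
    @AccountID int,
    @Amount    money
WITH NATIVE_COMPILATION, SCHEMABINDING
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')

    -- The whole procedure runs as one atomic block, compiled to machine code.
    UPDATE dbo.Accounts
    SET Balance    = Balance + @Amount,
        ModifiedAt = SYSUTCDATETIME()
    WHERE AccountID = @AccountID;
END;
```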

The biggest argument against In-Memory OLTP being useless is how many applications are now built around this technology. You can see a dozen case studies of SQL Server applications built using In-Memory OLTP at this link: http://customers.microsoft.com/en-us/search?sq=in-memory%20oltp&ff=&p=0&so=story_publish_date%20desc.

Is In-Memory OLTP going to solve all the performance problems for all SQL Server applications? Of course not.  But is it useless? Hardly.