Spider Strategies Blog RSS Feed

SQL Server Performance by Query Type

Dan Kolz, November 3, 2014

Summary

The SQL Server 2100-parameter limit is a pain to code around when selecting lots of rows by ID, and using batches of parameterized queries doesn't seem to perform very well.

This performance study shows that you should definitely stop using batched parameterized queries for selecting rows by ID. They should be replaced with a temporary table approach for large batches, or constructed SQL strings if temporary tables prove unworkable.

Approaches Overview

This study looks at the performance of four approaches to finding many rows by ID.

Parameterized Query Batches - Executing several parameterized queries, as many as are needed to select all of the rows

select * from mytable where id in (?, ?, ?, ..., ?, ?)

Constructed SQL - Standard SQL with many values as part of the IN condition

select * from mytable where id in (1, 2, 3, ..., 3013, 3014)

Temporary Tables - Inserting the keys into a temporary table and joining the temporary table to the table being queried

create table #temptable (id int);
insert into #temptable (id) values (1), (2), (3)
select * from mytable mt inner join #temptable tt on mt.id = tt.id

Table Variables - Like the temporary tables approach but using a table variable instead of a temporary table

DECLARE @IDHOLDER TABLE (id int primary key);
insert into @IDHOLDER(id) select * from ( values (92), (0), (26) ) as X(a)
select t1.* from mytable t1 inner join @IDHOLDER t2 on t1.id = t2.id

The Scenarios

There are two different sizes of queries done by different parts of the code. Although this study is primarily interested in optimizing for numbers of parameters greater than 2100, we want to make sure a new approach isn't bad for the small batches.

10k Selected IDs, Big Tables

Microsoft SQL Server has a limit on the number of parameters that a parameterized query can have (2100). However, many of our queries select far more IDs than that in total. Currently we use batches of IDs with parameterized queries to extract the entire set. These operations are cumbersome to code and don't seem to perform as well as we'd like.

Hundreds of Selected IDs, Big Tables

Other sections of our code operate on relatively smaller fixed batch sizes of about 250 IDs selected. While this code doesn't have the acrobatics of working with large total numbers of row IDs, it is among our most frequently executed code and fairly modest improvements to its speed may have big impacts on application scaling.

The Test Database

The test database is Microsoft SQL Server 2008 R2 running inside a Windows 7 VM with 4 CPU cores, 6GB of RAM, and PAE extensions on. The disk image is on 7200 RPM spinning media. In my testing there always seemed to be free memory, so I don't think that was a limiting factor.

I used four tables with similar formats but a varying number of rows: 100, 10k, 1m, 50m. The table create DDL was:

create table <table_name> (id int IDENTITY(1,1) PRIMARY KEY, age int, nonid int, name varchar(1000))

Each row always had data for all columns. The smaller three tables had 70 characters of data for the name column. The 50m row table had only 5 characters.

Selection Approaches

Java code was used to conduct the test (the language most of Scoreboard is written in). All tests were performed using JDBC and the Microsoft sqljdbc driver version 4. All tests were run in a newly created connection set to use transactions and had their memory cache cleared by running:

DBCC DROPCLEANBUFFERS;
DBCC FREEPROCCACHE

In spot testing leaving the cache uncleared did improve performance but did not seem to change the relative performance of the queries.
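For what it's worth, clearing the caches from the Java harness is just a matter of sending those two commands down the connection before each timed run. A minimal sketch, assuming an open java.sql.Connection named conn (both DBCC commands need elevated server permissions, which is fine on a private test VM):

// Clear SQL Server's buffer and plan caches before a timed run.
try (Statement st = conn.createStatement()) {
    st.execute("DBCC DROPCLEANBUFFERS; DBCC FREEPROCCACHE");
}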

Parameterized Query Batches

A popular technique is to create parameterized queries for database access. The query looks like:

select * from mytable where id in (?, ?, ?, ..., ?, ?)

Using this mechanism has two primary advantages: it protects against SQL injection attacks and allows reuse of a query, avoiding the time it takes for the database to re-parse the query string for subsequent uses. When the query is executed, specific values are passed and substituted for the question marks.

The disadvantage of this mechanism is that only 2100 parameters can be used in any one query. As mentioned above, this forces us to execute our larger queries in batches, which requires some additional coding and intuitively seems like it should be slower than a single query.
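For concreteness, here's a minimal sketch of that batching, assuming the usual java.sql and java.util imports, an open Connection named conn, and the IDs in a List. It isn't our production code, just the shape of the approach:

// Split the IDs into chunks of at most 2100 (the parameter limit) and run
// one parameterized IN query per chunk.
void selectInBatches(Connection conn, List<Integer> ids) throws SQLException {
    final int MAX_PARAMS = 2100;
    for (int start = 0; start < ids.size(); start += MAX_PARAMS) {
        List<Integer> chunk = ids.subList(start, Math.min(start + MAX_PARAMS, ids.size()));
        StringBuilder sql = new StringBuilder("select * from mytable where id in (");
        for (int i = 0; i < chunk.size(); i++) {
            sql.append(i == 0 ? "?" : ", ?");
        }
        sql.append(")");
        try (PreparedStatement ps = conn.prepareStatement(sql.toString())) {
            for (int i = 0; i < chunk.size(); i++) {
                ps.setInt(i + 1, chunk.get(i)); // JDBC parameters are 1-based
            }
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // process the row
                }
            }
        }
    }
}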

Constructed SQL

A second approach is to construct an SQL string with the actual values being searched for. While there is a limit to the number of parameters in a parameterized query, there isn't a hard limit on the number of values in a query. The SQL for this approach looks like:

select * from mytable where id in (1, 2, 3, ..., 3004, 3005)

This approach offers an unlimited number of selected IDs, but special steps have to be taken to prevent SQL injection attacks. Additionally, constructing the SQL results in code that breaks the SQL statement into pieces, making it potentially difficult to read and change.
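The simplest safeguard is to make sure nothing but an int can ever reach the string. A sketch of a hypothetical helper (not our actual code):

// Build "select * from mytable where id in (1, 2, 3, ...)" from int IDs only.
// Because each value is appended as an int, caller input can never inject
// SQL syntax into the statement.
String buildInQuery(int[] ids) {
    StringBuilder sql = new StringBuilder("select * from mytable where id in (");
    for (int i = 0; i < ids.length; i++) {
        if (i > 0) sql.append(", ");
        sql.append(ids[i]);
    }
    return sql.append(")").toString();
}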

Temporary Tables

The third approach is to insert the ID values into a temporary table and then join that temporary table to the table in question. This results in SQL that looks like:

create table #temptable (id int primary key);
insert into #temptable (id) values (1), (2), (3)
select * from mytable mt inner join #temptable tt on mt.id = tt.id

The advantage of this approach is that the selection itself uses a mechanism (table join) which we know gets a lot of use and has presumably been optimized to the extreme by the SQL Server product team. Additionally, the temporary table once constructed can be reused without the cost of recreating it. If we need to perform different queries with the same ID set, this could be an advantage.

The disadvantage is that this style of query is more complicated. Additionally, Microsoft SQL Server will only insert 1000 rows into a table with a single insert statement. There are ways to work around this, but it makes this style even more complicated, since multiple inserts need to be run once the count exceeds 1000 rather than 2100.

It is commonly believed that temporary tables are in-memory-only objects. This is not true for Microsoft SQL Server. MySQL has a memory-only storage engine that is probably perfect for this technique; SQL Server temporary tables, however, live only in memory if they are small (according to one source) and probably have an on-disk existence in tempdb.

There are some ways to work around this (see the section on inserts below) but it remains something to be worked around.

For temporary tables, the times reported include the time to insert the data but NOT the time it takes to create the temporary table. During testing I found that just issuing the create temporary table statement could take several seconds if I was in a connection that had been used for a previous test run. However, this was not the case if I created all of the temporary tables as the first step in the test, or if I used a new connection to test each table in a test run. This intuitively points to a locking or resource contention issue, but I was unable to determine what it was. Since time is always short and there are several workarounds, I decided to punt on this problem for the time being.
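To make the shape of this approach concrete, here is a rough sketch over JDBC (not the actual test harness), assuming the usual java.sql and java.util imports, an open Connection named conn, and the IDs in a List. It fills the temp table with a single-parameter JDBC batch, which sidesteps the 1000-row insert limit:

void selectViaTempTable(Connection conn, List<Integer> ids) throws SQLException {
    // Create the temp table; it lives for the rest of this connection's session.
    try (Statement st = conn.createStatement()) {
        st.execute("create table #temptable (id int primary key)");
    }
    // Fill it with a JDBC batch: one parameter per row, so no 1000-row limit to work around.
    try (PreparedStatement ins = conn.prepareStatement("insert into #temptable (id) values (?)")) {
        for (int id : ids) {
            ins.setInt(1, id);
            ins.addBatch();
        }
        ins.executeBatch();
    }
    // Join the temp table to the real table to pull back the rows.
    try (PreparedStatement sel = conn.prepareStatement(
            "select mt.* from mytable mt inner join #temptable tt on mt.id = tt.id");
         ResultSet rs = sel.executeQuery()) {
        while (rs.next()) {
            // process the row
        }
    }
    // Drop it so the name can be reused later on this connection.
    try (Statement st = conn.createStatement()) {
        st.execute("drop table #temptable");
    }
}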

Table Variables

The fourth approach uses SQL Server's ability to assign a table of data to a variable. This variable can then be treated like a table name and used in an inner join. The queries look something like:

DECLARE @IDHOLDER TABLE (id int primary key);
insert into @IDHOLDER(id) values (92), (0), (26)
select t1.* from mytable t1 inner join @IDHOLDER t2 on t1.id = t2.id

This is conceptually the same as using a temporary table. However, aside from limitations that don't matter for these tests, the major one is that the table variable can only be used for a single query; even subsequent queries later in the same transaction can't use a previously declared table variable. The hope is that a more ephemeral mechanism will be less likely to use disk and therefore be faster. However, it is thought that while table variables perform nicely for small data sets (fewer than 1000 rows), they have problems with large numbers of rows.

Among the additional disadvantages is that SQL Server is thought to sometimes create odd or inefficient query plans with table variables. Although this might be a big deal with complicated queries, it seems like it would be pretty tough for SQL Server to screw up a two-table join.
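Because the table variable only exists for the batch that declares it, the declare, the insert, and the select all have to be sent to the server as one string. A sketch with hard-coded values, assuming an open Connection named conn (SET NOCOUNT ON suppresses the insert's row count so the batch hands back a single result set):

// Declare, fill, and query the table variable in a single batch.
String sql =
    "SET NOCOUNT ON; " +
    "DECLARE @IDHOLDER TABLE (id int primary key); " +
    "insert into @IDHOLDER(id) values (92), (0), (26); " +
    "select t1.* from mytable t1 inner join @IDHOLDER t2 on t1.id = t2.id";
try (Statement st = conn.createStatement();
     ResultSet rs = st.executeQuery(sql)) {
    while (rs.next()) {
        // process the row
    }
}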

Insert Approaches

Given SQL Server's limit of 1000 rows per insert and the fact that two of the mechanisms rely on tables being populated (temporary tables and table variables), I did some tests to see which is the most efficient way of inserting ID rows into a table.

Single Parameter JDBC Batches

This approach used a JDBC prepared statement with one parameter and JDBC's addBatch method. Although I'm not sure the JDBC implementation actually works this way, it is conceptually equivalent to having one insert statement for each row, composed as a single string and submitted as a single command.

insert into #tt1 values (?)
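In Java terms the approach is roughly this, assuming an open Connection named conn and the IDs in a List:

// One addBatch call per ID, one executeBatch at the end.
try (PreparedStatement ps = conn.prepareStatement("insert into #tt1 values (?)")) {
    for (int id : ids) {
        ps.setInt(1, id);
        ps.addBatch();
    }
    ps.executeBatch();
}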

Multiple Parameter JDBC Batches

This is similar to the previous method, but each prepared statement contains up to 2000 parameters.

insert into #tt1 values (?, ?, ?, ..., ?)

Constructed SQL with Multiple Insert Statements

This method involves creating one long SQL string that has one or more insert statements where each insert statement would be inserting up to 1000 rows.

insert into #tt1 values (1), (2), ... (1000); insert into #tt1 values (1001), (1002)...
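Building that string is straightforward; here's a sketch of a hypothetical helper that starts a new insert statement every 1000 values to stay under the per-insert row limit:

// Compose "insert into #tt1 values (...); insert into #tt1 values (...); ..."
// with at most 1000 row constructors per insert statement.
String buildMultiInsert(List<Integer> ids) {
    StringBuilder sql = new StringBuilder();
    for (int i = 0; i < ids.size(); i++) {
        if (i % 1000 == 0) {
            sql.append(i == 0 ? "" : "; ").append("insert into #tt1 values ");
        } else {
            sql.append(", ");
        }
        sql.append("(").append(ids.get(i)).append(")");
    }
    return sql.toString();
}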

Constructed SQL with Union

This also involved composing an SQL string but used the UNION keyword.

insert into #tt1 select 1 UNION ALL select 2 UNION ALL ...

Constructed SQL with Values Sub-table

This also involved a single SQL statement but used the VALUES sub-table format to insert all the rows in a single query.

insert into #tt1 select * from (values (1), (2), ... ) as X(a)

Insert Result Summary

It seems clear that for small data sets it doesn't matter what you do. For large batches, both JDBC methods were the best. If only a string of SQL is an option, the string of multiple insert statements was by far the best performer.

Selection Results

A word about variability: in these tests I found the results varied significantly. For shorter times (< 200ms), it might be as much as 30% or 40%. This is what you'd expect on a system with spinning media that is also handling other light tasks such as email, playing music (listening to Sting right now), and working with documents and spreadsheets. Larger values seemed to vary by about 10% or 15%.

250 Parameters/IDs Selected

For 250 IDs selected against small tables, it seems not to matter very much how you perform the query. Given the variability I noticed, these are all actually about the same number.

For tables with lots of rows, it's clear that the temporary table and constructed SQL query approaches are superior. It's odd to me that the table variable doesn't perform better. When I look at the actual query plan for the table variable query, it shows that the table variable does have an index and it is getting used.

1000 Parameters/IDs Selected

With 1000 IDs being selected we start to see a separation in performance even for small tables. Once again, the temporary table and constructed query approaches are showing the best performance.

Note that in this test none of the approaches have had to resort to any sort of batching to get the query performed.

10k Parameters/IDs Selected

What's clear here is that the parameterized query approach is horrible when batching becomes necessary. It becomes especially bad the larger the table gets.

Let's see this again with only the best three to get an idea of the relative strengths of the better performing methods.

Here we see a shift from earlier runs. The table variable approach is performing on par with the constructed query approach even though the table variable requires batches of insert statements.

The query plan doesn't shed much light on why the constructed query approach does comparatively worse here than in earlier runs. The plan shows that the constants of the IN condition are scanned once and an index seek is performed on the tables in question.

There's obviously a heavier load on the parser, because each of these queries is longer. There's nothing about the constructed query that should be particularly difficult to parse, though.

The temporary table approach really shines here even though we had to insert 10k values into it. As the insert test shows, in the 10k parameter case the insert is probably taking 20% of the time. I mentioned earlier that one advantage of this approach is that the table could be reused, saving the cost of having to do the insert. While true, at least at this scale, that seems like it would be only a modest performance bump.

Things of Interest

You may notice that the time to select 10k parameters is actually less for some approaches (like the table variable) than the time to select 1k parameters on the same tables. I admit to being a little mystified about this. The effect remains even when I change the order so that the 10k parameters test is run first.

Summary

You should definitely stop using batched parameterized queries for selecting rows by ID. They were the bottom performer in every test. They should be replaced with temporary tables if you're willing to do a little work to make sure you're not hitting the create temporary table delay. If, for whatever reason, the temp table approach is not chosen, you should use the constructed query approach within a framework that prevents SQL injection attacks.

Period "to date" Equations in 2.8

Scott O'Reilly, October 17, 2014

In version 2.8 there's a new equation you can use for calculated metrics called “Period to Date.” You can use this to create metrics with values based on rolling sums of other metrics’ values or thresholds. For example:

TD(Quarterly,123)
would result in a quarter-to-date sum for metric 123's actual value.

TD(Yearly,123,Goal)
would result in a year-to-date sum for metric 123's Goal.

Equations like this are a lot easier to understand with an example. Let's say you have metric #1570 that is updated every month:

If you created a calculated metric using the equation TD(Quarterly,1570), it would have values that look like this:

And, a calculated metric with the equation TD(Yearly,1570), would look like this:
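As a hypothetical illustration (these numbers are made up, not taken from the screenshots): if metric 1570's value were 10 in January, 20 in February, and 30 in March, then TD(Quarterly,1570) would be 10, 30, and 60 for those months and would start over with April's value in the second quarter, while TD(Yearly,1570) would keep accumulating until it reset the following January.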

If you're interested in beta testing version 2.8, be sure to let us know!

Increase Dashboard Text Size in 2.8

Scott O'Reilly, September 29, 2014

The default dashboard text object size used to be fairly small.

Now when you add a textbox to a dashboard, not only is the box itself much larger, but its font size is bigger as well.

This is one of many usability enhancements in version 2.8. Let us know if you're interested in beta testing.

Default to Current Organization when Building Dashboards

Scott O'Reilly, September 15, 2014

When you're adding objects to a dashboard, the first step is to select the organization of the object that you want to add. In version 2.8 we now start with the organization where your dashboard is located, which makes building dashboards much easier.

It's not a huge change, but it's one of the dozens of enhancements in version 2.8 that make the software easier to use. Beta testing is open, so let us know if you want early access.

Taste Testing at Baskin-Robbins (The Single Most Important Thing You Can Do to Land That Job)

Elisa Subin, September 4, 2014

Are you a recent college grad looking to land that awesome job selling software? Are you an experienced marketing manager looking to make a big career move? Maybe you’re a rock-star developer, looking for the perfect job.

It doesn’t matter. I want to hire you. Believe me, I do. And, I’m sure you’d be great here. We’re an incredible company, probably just what you are looking for. And I know you would fit in perfectly. I can see that you’re qualified. And I’m sure you’re smart. No joke.

But you know those small pink spoons at Baskin Robbins, the ones you get when you want just a taste of that low-fat mango guava sorbet before you commit? I’m sure you love those little pink spoons as much as I do. So, why haven’t you taken the pink spoon we’re offering you?

No, we don’t sell ice cream. Our product is software. But that little pink spoon is right there on our website. It’s our “free sample” if you will. If you want to work here, why haven’t you clicked the “learn more” link on our website and watched the “3 minute video” about our product? Why haven’t you taken a taste test with our little pink spoon?

That little pink spoon isn’t just there to get people to buy our software. It’s also the perfect way for us to gauge your interest in our company. Our software, it’s what we do, right? If you don’t even want to check it out, I’ll assume you’re not interested in us and not passionate about what we do.

And that’s the key. Passion. Show me you’re interested in us. Show me you’re passionate about what we do. I’ve got your resume. In fact, I’ve got 100 resumes just like yours or better. All are well qualified. All are smart. And all want to talk about why I should hire them.

But why do you want to work here? That’s a harder question. Don’t be the applicant who smiles and asks me if I like my job. I’ll tell you the truth. I don’t just like my job. I’m sincerely passionate about it. You can’t fake the level of passion I feel for my role here. And just so you know, I’ve been here for years. In fact, most of us have.

Now, if you think you might want to work here, you need to do your own research. Click the “learn more” link. Take the “3 minute tour.” Heck, you could even email helpdesk with your user questions. That’d be impressive.

That’s the sign I’m looking for, folks. We don’t have 31 different flavors. There’s only one little pink spoon here. It’s on our website. It’s free. If you think you want to work here, then take a taste.