Archive for July, 2013

Performance Tuning SQL Server: Hardware Architectures Throughput Analysis

July 21, 2013

Over the last few years I have performed many SQL Server Data Warehouse database performance tuning sessions with a variety of customers in many different environments. These sessions usually consist of a weeklong engagement in which I lead an in-depth software (SQL Server) and hardware tuning class. The results are put into a highly detailed roadmap and a list of recommendations for improved SQL Server Data Warehouse performance.

As part of this testing, I use SQLIO to benchmark the hardware performance capabilities of the SQL Server Data Warehouse database server that I am testing (see: Data Warehouse Storage Throughput (MB/sec)). There are, of course, many other types of tests and analyses that I perform as part of this weeklong process, but I am going to stick with SQLIO for now.
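For readers who want to run a similar check on their own hardware, a representative SQLIO command for large-block sequential reads (the access pattern that dominates data warehouse scans) might look like the sketch below. The test file path, duration, thread count, and block size are illustrative assumptions, not the exact parameters I use in these engagements.

    :: Hypothetical SQLIO run: 4 threads, 8 outstanding I/Os per thread,
    :: 512 KB sequential reads for 120 seconds against a pre-created test file.
    :: -LS reports latency; -BN disables buffering so the disks, not cache, are measured.
    sqlio -kR -t4 -o8 -s120 -fsequential -b512 -LS -BN E:\sqlio_testfile.dat

The same test can be repeated with -kW for writes, and with larger -b values to mirror the greater-than-64KB block sizes typical of data warehouse scans.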

Since I have performed these tests many times I would like to share some of the results that I have seen from various customer SQL Server Data Warehouse server environments.

Note: Even though these are actual aggregate figures from actual customers, given the complexity of each environment, your results may vary. This is meant to give you an idea of what kind of performance to expect.

QSDW: QuickStart Data Warehouse 1000. This is a Dell Microsoft SQL Data Warehouse appliance that is built to Microsoft FastTrack Reference Architecture best practices, such as parallelism.

DAS – SSD: Direct Attach Storage with Solid State Drives. SSDs not architected for Data Warehouse specific workloads.

SAN – SSD/HDD: Tier 1 SAN that uses a combination of Solid State Drives (SSD) and Hard Disk Drives (HDD). Not architected for Data Warehouse specific workloads.

SAN – HDD: Tier 2 SAN that uses only Hard Disk Drives (HDD). Not architected for Data Warehouse specific workloads.

Laptop: My personal laptop with one 7,200 RPM HDD

[Graph: storage throughput (MB/sec) for QSDW, DAS – SSD, SAN – SSD/HDD, SAN – HDD, and Laptop]

As can be determined from the graph, the Dell QuickStart Data Warehouse outperforms even the Solid State Drives and both types of SANs. This is due primarily to the fact that the QSDW is specifically tuned for data warehouse workloads, which are typically characterized by sequential reads/writes in block sizes greater than 64KB. In fact, had I increased the block sizes for the tests, the gaps in performance would have been even larger.

Conclusion:

Ensuring that the Data Warehouse server is specifically tuned for Data Warehouse workloads can have a dramatic impact on performance. In fact, this tuning can be even more important than the type of storage technology selected, e.g. SAN, SSD, or HDD.

Note: The above graph shows server hardware performance, not SQL Server Database Engine performance. The SQLIO tool has nothing to do with SQL Server, even though the name might suggest otherwise.


Data Warehouse on YouTube

July 12, 2013

I am experimenting with creating a series of Data Warehousing videos. Let me know if these videos are helpful and if you would like to see more.

The videos are loosely based on my PASS and SQL Saturday presentations but go into more detail.

Performance Tuning SQL Server: Parallelism

July 7, 2013

One of the most important architectural principles built into Dell’s latest generation of data warehouse appliances, called QuickStart Data Warehouse (QSDW), is parallelism. The following post will give an overview of how parallelism can be implemented, using QSDW as a reference.

Breaking a large data warehouse workload into smaller pieces that can then be worked on in parallel is one of the best ways of scaling SQL Server. This parallel processing technique is exemplified in Non-Uniform Memory Access (NUMA).

Hardware and Software Parallelism

Even though the NUMA architecture is used to create parallelism between main memory (RAM) and the server’s CPU cores, the same principle can be applied to the entire SQL Server Data Warehouse server as well.

However, this parallelism is not present in most OLTP systems because those systems rely on a few LUNs/disk groups, each with many disks, to achieve maximum IOPS. While this approach is good for OLTP systems, it is not the proper architecture for a SQL Server Data Warehouse server, which seeks to maximize MB/sec, not IOPS.

The best approach, therefore, is to break up the disks into several separate disk groups or LUNs and place a SQL Server database file on each LUN. When this is done, the SQL Server Database Engine can break the read and write operations into smaller chunks according to the number of LUNs that were created, thus achieving parallelism. The image below shows how nine LUNs can be created with a SQL Server database file placed on each LUN.

[Diagram: nine LUNs, each holding one SQL Server database file]

Because the data for the entire filegroup/database is broken out to multiple LUNs by placing a SQL Server database file on each LUN, all database operations will be parallelized. This has the added effect of increasing MB/sec by a factor roughly equal to the number of LUNs, in this case eight.
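As a minimal T-SQL sketch of that idea, the statement below creates one data filegroup with one file on each of several hypothetical mount points (one per LUN). Every database name, path, and size here is illustrative only, not the QSDW configuration itself.

    -- Hypothetical layout: a single data filegroup with one file per LUN (mounted volume),
    -- so the engine can stripe reads and writes across all of the LUNs.
    CREATE DATABASE DWDemo
    ON PRIMARY
        (NAME = DWDemo_sys,   FILENAME = 'C:\MountPoints\LUN0\DWDemo_sys.mdf'),
    FILEGROUP DW_Data
        (NAME = DWDemo_data1, FILENAME = 'C:\MountPoints\LUN1\DWDemo_data1.ndf', SIZE = 50GB),
        (NAME = DWDemo_data2, FILENAME = 'C:\MountPoints\LUN2\DWDemo_data2.ndf', SIZE = 50GB),
        -- ...one file per remaining data LUN...
        (NAME = DWDemo_data8, FILENAME = 'C:\MountPoints\LUN8\DWDemo_data8.ndf', SIZE = 50GB)
    LOG ON
        (NAME = DWDemo_log,   FILENAME = 'C:\MountPoints\LOG\DWDemo_log.ldf',    SIZE = 20GB);

Keeping the files the same size (and growing them by the same increments) matters, because SQL Server’s proportional fill algorithm only spreads writes evenly when the files are evenly sized.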

This architecture does create parallelism at the disk/database level; however, it is not yet aligned to the number of CPU cores present on the Data Warehouse server. To accomplish this next level of parallelism, instead of creating one filegroup for the database, the number of filegroups will have to align to the number of CPU cores.

The illustration below demonstrates what the configuration may look like on a server with eight cores.

[Diagram: eight filegroups aligned to the eight CPU cores, spread across the LUNs]

Physically, each LUN is implemented on Windows Server as a mounted volume, and each mounted volume will have eight database files, one database file for each of the eight filegroups, as sketched below.

[Diagram: each mounted volume (LUN) containing eight database files, one per filegroup]
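To show what that layout looks like in T-SQL, the abbreviated example below creates two filegroups, each with one file on each of two hypothetical mounted volumes. On the eight-core server described above, the same pattern would simply be extended to eight filegroups, each with a file on every data volume. All names and paths are assumptions for illustration.

    -- Abbreviated, hypothetical sketch: each filegroup gets one file on every mounted volume.
    -- On an eight-core server, extend the pattern to FG1..FG8 across all data LUNs.
    CREATE DATABASE DWDemoCores
    ON PRIMARY
        (NAME = DWDemoCores_sys, FILENAME = 'C:\MountPoints\LUN0\DWDemoCores_sys.mdf'),
    FILEGROUP FG1
        (NAME = FG1_LUN1, FILENAME = 'C:\MountPoints\LUN1\FG1_LUN1.ndf'),
        (NAME = FG1_LUN2, FILENAME = 'C:\MountPoints\LUN2\FG1_LUN2.ndf'),
    FILEGROUP FG2
        (NAME = FG2_LUN1, FILENAME = 'C:\MountPoints\LUN1\FG2_LUN1.ndf'),
        (NAME = FG2_LUN2, FILENAME = 'C:\MountPoints\LUN2\FG2_LUN2.ndf')
    LOG ON
        (NAME = DWDemoCores_log, FILENAME = 'C:\MountPoints\LOG\DWDemoCores_log.ldf');

Large tables (or their partitions) can then be placed on specific filegroups so that scans fan out across both the LUNs and the cores.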

Only now can the SQL Server Data Warehouse workload be parallelized across the entire server, from the disks to each CPU core. While this approach might seem like a lot of work, the performance benefits are tremendous and lead to much more reliable and consistent performance under a wide variety of data warehouse workload scenarios.
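Once a database like this exists, a quick sanity check of the layout can be run in the context of that database with a query along these lines:

    -- List each filegroup, its files, and their physical locations
    -- to confirm that every filegroup has one file on every mounted volume.
    SELECT fg.name AS filegroup_name,
           f.name  AS file_name,
           f.physical_name
    FROM sys.filegroups AS fg
    JOIN sys.database_files AS f
        ON f.data_space_id = fg.data_space_id
    ORDER BY fg.name, f.name;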

The parallelism techniques described above are built directly into the Dell QuickStart Data Warehouse appliance (QSDW), making it a highly tuned SQL Server Data Warehouse solution that can save a customer significant time and money.