Stephen - Professional Association for SQL Server
Stephen Archbold
About me
Working with SQL Server for 6+ years
Former Production DBA for 24/7 High volume operation
Currently SQL Server consultant at Prodata
Specialising in Performance Tuning and Consolidation
Blog at http://blogs.Prodata.ie and http://simplesql.blogspot.com
Get me on twitter @StephenArchbold
LinkedIn http://ie.linkedin.com/in/stephenarchbold
Agenda
Data Filegroup/File Fundamentals
Storage Design Patterns
OLTP
Data Warehousing – Fast Track style
Data Warehousing on a SAN
What other go-faster buttons have we got?
Case Study – The unruly fact table
How do we make the changes
Data Filegroup/File Fundamentals
General Filegroup Recommended Practices:
Nothing but system tables in PRIMARY
Separate filegroups for different I/O patterns
Separate filegroups for different volatility
Separate filegroups for different data age
If using Multiple Files in a Filegroup
Files must be equally sized
Files must be equally full
SQL does not redistribute data when adding more files
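As a minimal sketch of these practices (the database, filegroup, file names, sizes and paths below are all hypothetical):

```sql
-- Hypothetical example: a dedicated filegroup with four equally sized,
-- equally full files. SQL Server fills files round-robin (proportional fill),
-- so identical SIZE and FILEGROWTH values keep the stripe balanced.
ALTER DATABASE MyDB ADD FILEGROUP Transactions;

ALTER DATABASE MyDB ADD FILE
    (NAME = Trans1, FILENAME = 'F:\Data\Trans1.ndf', SIZE = 4GB, FILEGROWTH = 1GB),
    (NAME = Trans2, FILENAME = 'F:\Data\Trans2.ndf', SIZE = 4GB, FILEGROWTH = 1GB),
    (NAME = Trans3, FILENAME = 'F:\Data\Trans3.ndf', SIZE = 4GB, FILEGROWTH = 1GB),
    (NAME = Trans4, FILENAME = 'F:\Data\Trans4.ndf', SIZE = 4GB, FILEGROWTH = 1GB)
TO FILEGROUP Transactions;
```

Because data is not redistributed when files are added, create all the files up front rather than adding them to an already-full filegroup.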
Pattern 1 - OLTP
Transactional processing is all about speed
You want to get the transaction recorded and the user out as quickly as possible
The metric for throughput becomes less about MB/sec and more about transactions and I/Os per second
Challenges of OLTP
Solid State Disk becoming more commonplace
These thrive on Random I/O
As the databases can be small, file/filegroup layout can suffer
Faster disk brings different challenges
[Diagram: OLTP filegroup layout – PRIMARY filegroup (MyDB.MDF), a Transactions filegroup striped across File1.NDF and File2.NDF, a Reference filegroup (Ref.NDF) and a Volatile filegroup (Volatile.NDF); the hot Transactions files are where PAGELATCH contention shows up]
Facts and Figures

Files         Avg Batch Completion (sec)   Transactions Per Second
Single file   156                          256,410
Two files     141                          283,688
Four files    139                          287,770
Eight files   132                          303,030
Behind the scenes

Single File                       Two Files
Wait Type              %          Wait Type              %
SOS_SCHEDULER_YIELD    55         SOS_SCHEDULER_YIELD    66
PAGEIOLATCH_EX         17         PAGEIOLATCH_EX         12
PAGELATCH_SH           15         PAGELATCH_SH           10
ASYNC_IO_COMPLETION     5         ASYNC_IO_COMPLETION     7
PAGELATCH_UP            5         SLEEP_BPOOL_FLUSH       2
What can we take away from this?
Resolving in-memory contention lies in the file layout
This is actually nothing new – TempDB has been tuned this way for years!
Keep in mind, files are written to in a “round robin” fashion
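One way (not the only one) to produce wait-type breakdowns like the tables above is to query the wait-stats DMV; the filtered wait types here are just common benign idle waits, not an exhaustive list:

```sql
-- Top waits by total wait time, with a rough percentage of the total.
-- Figures are cumulative since the last restart (or stats clear).
SELECT TOP (5)
    wait_type,
    wait_time_ms,
    100.0 * wait_time_ms / SUM(wait_time_ms) OVER () AS pct_of_total
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP',
                        'REQUEST_FOR_DEADLOCK_SEARCH',
                        'SQLTRACE_INCREMENTAL_FLUSH_SLEEP')  -- benign idle waits
ORDER BY wait_time_ms DESC;
```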
Data Warehousing
Pattern 2 – Fast Track Scenario
Large Volume
Star Schema
Need to optimize for sequential throughput
Scanning Entire Table
Not Shared Storage
[Diagram: a large partitioned fact table striped across the full storage path – partitions 1–12 map one-to-one to files MyFact_Part1.NDF through MyFact_Part12.NDF, each file on its own LUN, the LUNs spread across two controllers per enclosure, multiple enclosures and three HBAs, so a sequential scan drives every CPU core and every storage path in parallel]
Fast Track – Pros and Cons
Pros
Easy to figure out your needs
Simple, cheap and fast
In-depth guidance available from Microsoft
Cons
Not recommended for pinpoint queries
Only really for processing entire data sets
Need VERY understanding Infrastructure team
Pattern 3 – Data Warehouse on a SAN
Large Volume
Star Schema
Cannot optimize for sequential throughput
Shared Storage
More mixed workload
Goal – Large Request Size
We need Read Ahead
Enterprise Edition can issue a single read-ahead request of up to 512 KB (on Standard you're stuck at 64 KB)
It can keep several of these outstanding at a time, up to 4 MB
But you may not even be close to 512 KB…
How close are you to the 512 KB nirvana?
Run something like a sustained scan of a large table
And watch the average I/O request size (e.g. Perfmon's Avg. Disk Bytes/Read counter)
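Alongside Perfmon, a hedged way to check from inside SQL Server is to derive the average read size per data file from the virtual file stats DMV (figures are cumulative since restart, so measure during a representative scan):

```sql
-- Average read size per data file: total bytes read / number of reads.
SELECT DB_NAME(vfs.database_id)                                  AS database_name,
       mf.name                                                   AS file_name,
       vfs.num_of_bytes_read / NULLIF(vfs.num_of_reads, 0) / 1024 AS avg_read_kb
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id     = vfs.file_id
WHERE mf.type_desc = 'ROWS'   -- data files only, not logs
ORDER BY avg_read_kb DESC;
```

If this comes back near 64 KB on a table scan, read-ahead is not doing what it could.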
Fragmentation – Party Foul Champion
The #1 killer of read-ahead
Read-ahead size will be reduced if the pages being requested aren't in logical order
Being a diligent type, you rebuild your indexes
Because SQL is awesome, it does this using parallelism!
So what’s the catch…?
If Read Ahead is your goal, MAXDOP 1 to rebuild your indexes!
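A sketch of that rebuild (the table and index names are hypothetical):

```sql
-- Rebuilding with MAXDOP 1 keeps the rebuilt leaf level in strict
-- logical order, which preserves large read-ahead requests.
-- A parallel rebuild can interleave the sorted runs of each worker.
ALTER INDEX CIX_MyFact ON dbo.MyFact
REBUILD WITH (MAXDOP = 1, SORT_IN_TEMPDB = ON);
```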
Before – everything in one filegroup (Enclosure 1):

Filegroup     File / LUN
PRIMARY       MyDB.MDF

After – one filegroup per role (Enclosure 1):

Filegroup     File / LUN
Primary       MyDB.MDF
Dimensions    Dimensions.NDF
Volatile      Staging.NDF
Facts         Facts.NDF
Large Fact    Fact.NDF

Refined further – the large fact gets one filegroup and file per partition:

Partition 1   Partition1.NDF
Partition 2   Partition2.NDF
Partition 3   Partition3.NDF
Partition 4   Partition4.NDF
Partition 5   Partition5.NDF
Partition 6   Partition6.NDF
Partition 7   Partition7.NDF
Partition 8   Partition8.NDF
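A minimal sketch of the one-filegroup-per-partition pattern (partition boundaries, filegroup names and columns are all hypothetical, and the filegroups are assumed to exist already):

```sql
-- Yearly partitions, each mapped to its own filegroup.
CREATE PARTITION FUNCTION pfFactDate (date)
AS RANGE RIGHT FOR VALUES ('2011-01-01', '2012-01-01', '2013-01-01');

-- One filegroup per partition (four partitions from three boundaries).
CREATE PARTITION SCHEME psFactDate
AS PARTITION pfFactDate
TO (FG_Part1, FG_Part2, FG_Part3, FG_Part4);

-- The fact table is created on the scheme, not on a single filegroup.
CREATE TABLE dbo.MyFact
(
    FactDate date  NOT NULL,
    Amount   money NOT NULL
) ON psFactDate (FactDate);
```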
Getting data out of your Data Warehouse for Analysis Services
How does Analysis Services pull in data?
Do we have any go faster buttons?
On read-heavy workloads with Enterprise Edition: compression
If storing multiple tables in a filegroup:
“-E” – for data warehouses – allocates 64 extents (4 MB) per object, per file, rather than the standard 1 (64 KB)
If using multiple files in a filegroup:
“-T1117” – for all workloads – ensures that if auto-growth occurs on one file, it occurs on all the others, keeping the “round robin” in place
In general, on dedicated SQL Servers:
Evaluate “-T834” – requires Lock Pages in Memory to be enabled
This enables large page allocations for the buffer pool (2 MB – 16 MB)
Can cause problems if memory is fragmented by other apps
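-E, -T1117 and -T834 are startup parameters (set via SQL Server Configuration Manager), so they only take effect after a restart; one way to confirm which trace flags are actually active is:

```sql
-- Check whether trace flags 1117 and 834 are enabled globally (-1).
DBCC TRACESTATUS (1117, 834, -1);
```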
Case Study – The Unruly Fact Table
3 TB data warehouse
Table scans were topping out at 300 MB/sec
The storage was capable of 1.7 GB/sec
Table partitioning was in place
All tables were in a single filegroup
Had to get creative to enhance throughput
Test Conditions
16-core server, Hyper-Threading enabled (32 logical cores)
128 GB of memory
SQLIO fully sequential – the storage gives 2.2 GB/sec
32 range scans started up to simulate the workload
Page compression and the T834 trace flag were enabled
MAXDOP of 1 on the server to keep the number of threads controlled
Facts and Figures

[Charts: throughput (MB/sec) and average I/O request size (KB) for the four scenarios – baseline, post index rebuild with MAXDOP 1, single FG for large table(s), multiple FG one per partition; the figures appear in the table below]
Other metrics

Scenario                            Time (sec)   Avg IO (KB)   Avg MB/sec   Max MB/sec
Baseline                            70           181           575          622
Post index rebuild with MAXDOP 1    68           181           615          668
Single FG for large table(s)        56           511           942          1,196
Multiple FG, one per partition      42           505           1,150        1,281
How do we make the changes
Thankfully easy – index rebuilds!
For non-partitioned tables, drop and re-create the indexes on the new filegroup
For partitioned tables, alter the partition scheme to point to the new filegroup
For heaps, create a clustered index on the table on the new filegroup, then drop it!
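Sketches of the two rebuild patterns above (all object names hypothetical):

```sql
-- Non-partitioned table: rebuild the clustered index on the new
-- filegroup in one operation instead of a separate drop and create.
CREATE CLUSTERED INDEX CIX_MyTable
ON dbo.MyTable (Id)
WITH (DROP_EXISTING = ON)
ON NewFilegroup;

-- Heap: create a clustered index on the new filegroup to move the
-- data, then drop it – the heap stays on NewFilegroup.
CREATE CLUSTERED INDEX CIX_Temp ON dbo.MyHeap (Id) ON NewFilegroup;
DROP INDEX CIX_Temp ON dbo.MyHeap;
```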
Summary
File and Filegroup considerations can yield huge gains
Know your workload and optimise for it
If you have a Hybrid workload, then have a Hybrid architecture!
Don’t neglect your SQL Settings
Code changes and indexes aren’t the only way to save the day!
Useful links
Paul Randal – Multi file/filegroup testing on Fusion IO
http://www.sqlskills.com/blogs/paul/benchmarking-multiple-data-files-on-ssds-plus-the-latest-fusion-io-driver/
Fast Track Configuration Guide
http://msdn.microsoft.com/en-us/library/gg605238.aspx
Resolving Latch contention
http://www.microsoft.com/en-us/download/details.aspx?id=26665
Maximizing Table Scan Speed on SQL 2008 R2
http://henkvandervalk.com/maximizing-sql-server-2008-r2-table-scan-speed-from-dsi-solid-state-storage
Specifying storage requirements (Find that sweet spot!)
http://blogs.prodata.ie/post/How-to-Specify-SQL-Storage-Requirements-to-your-SAN-Dude.aspx
Fragmentation in Data Warehouses
http://sqlbits.com/Sessions/Event9/The_Art_of_War-Fast_Track_Data_Warehouse_and_Fragmentation
Partial Database Availability and Piecemeal restores
http://technet.microsoft.com/en-US/sqlserver/gg545009.aspx
Thank you!