SQL Server

Automating SQL Server VM Provisioning on Azure with PowerShell

Azure IaaS provides a flexible way to fire up new VMs without worrying about upfront infrastructure cost. We can provision a new VM for DEV/TEST quickly and proceed with development tasks. A common task I often do is installing a new SQL Server VM and setting it up so it is ready as a test bench or for isolated development. It requires a couple of clicks and waits, and is sometimes tedious.

This article explains how to provision a new SQL Server VM by using the Azure Management Portal; it is easy, but it still requires a couple of clicks and waiting between clicks, and it also gives step-by-step instructions for deploying a simple SQL VM on Azure. I will take another approach in this post: we can automate those tasks with PowerShell, so everything can be done by executing a single script file. The script in this post does the following during VM provisioning:

  • Provision a stand-alone SQL Server, meaning it does not join an AD domain
  • Provision a storage account to store the VHD
  • Deploy the VM to a specific Azure region, by supplying an Affinity Group to control its location

Since this is a stand-alone deployment, no Virtual Network will be created. The VM is accessible through its public cloud service name and endpoint port number.

The first step is to connect to the Azure account from the Windows Azure PowerShell console as follows:

Add-AzureAccount

A pop-up will appear asking for the Azure account credentials.

 

Declare all required variables for Affinity Group and storage account

The following lines declare the parameters related to the cloud service name, affinity group, and storage account.

 

$location="Southeast Asia"

$affinityGroupName="SQL-SEA"

$affinityGroupDescription="SQL SEA Affinity Group"

$affinityGroupLabel="SQL SEA IaaS Group"

$storageAccountName="sql2012teststorage"

$storageAccountLabel="sql2012teststorage Storage Account"

$storageAccountContainer="https://"+$storageAccountName+".blob.core.windows.net/vhds/"

 

Declare variables for VM configuration

Azure provides various VM images, so the script needs to know which image to pick for provisioning. Executing Get-AzureVMImage and filtering on the label name is one way to retrieve what we want.

$sqlImageName = (Get-AzureVMImage | where {$_.Label -like "SQL Server 2012 SP1 Enterprise*"} | sort PublishedDate -Descending)[0].ImageName
$sqlServerName = "kintamanisql"
$cloudServiceName = "kintamanicloud" 
$vmAdminUser = "myadmin" 
$vmAdminPassword = "Company!000"

 

Create Affinity Group

An Affinity Group controls the location of a cloud service, so we can put the VM in a specific region closer to its users. Use the following lines to create the Affinity Group:

New-AzureAffinityGroup `
    -Name $affinityGroupName `
    -Location $location `
    -Description $affinityGroupDescription `
    -Label $affinityGroupLabel

 

Create Azure storage account

A new storage account will be created to store the VHD image. This account is located in the Affinity Group specified above, which is in Southeast Asia.

New-AzureStorageAccount `
    -StorageAccountName $storageAccountName `
    -Label $storageAccountLabel `
    -AffinityGroup $affinityGroupName

 

Create cloud service

The cloud service is the public-facing component of the VM. It is a service with an FQDN, accessible from the internet via an endpoint on a specific port. The cloud service is tied to the Affinity Group, so it is bound to a specific Azure data center region.

New-AzureService -ServiceName $cloudServiceName -Label "Cloud for SQL VM" -AffinityGroup $affinityGroupName

 

Create SQL Server VM

Once the storage account and cloud service are created, the VM is ready to be provisioned. All endpoints required to access the SQL Server services are also included in the VM creation.

New-AzureVMConfig `
    -Name $sqlServerName `
    -InstanceSize Small `
    -ImageName $sqlImageName `
    -MediaLocation "$storageAccountContainer$sqlServerName.vhd" `
    -DiskLabel "OS" | 
    Add-AzureProvisioningConfig `
        -Windows `
        -DisableAutomaticUpdates `
        -AdminUserName $vmAdminUser `
        -Password $vmAdminPassword |
    Add-AzureEndpoint -Name "SQL" `
        -Protocol "tcp" `
        -PublicPort 22001 `
        -LocalPort 1433 |
    New-AzureVM -ServiceName $cloudServiceName
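
With this endpoint in place, the instance is reachable from the internet through the cloud service FQDN and the public port. As a quick connectivity test from a machine with the SQL client tools, something like the following sketch should work, assuming SQL authentication has been enabled on the instance (it is not enabled by default on the gallery image):

sqlcmd -S kintamanicloud.cloudapp.net,22001 -U myadmin -P Company!000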

We can consolidate all of those lines into one PowerShell script file so it is ready for execution. The following file puts all of those tasks together.
 CreateSQLServerVMAzure.ps1

It will take a while for the VM provisioning to run. Once it completes, select the VM name and click Connect to download the RDP file.

azure-connect

Once downloaded, execute the RDP file to open a Remote Desktop session on the VM.
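
Alternatively, the RDP file can be downloaded and launched straight from PowerShell. A minimal sketch using the variables declared earlier; the local path is just an example:

# Download the RDP file for the new VM and launch the Remote Desktop session.
Get-AzureRemoteDesktopFile `
    -ServiceName $cloudServiceName `
    -Name $sqlServerName `
    -LocalPath "$env:TEMP\$sqlServerName.rdp" `
    -Launch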


Running WordPress with SQL Server on Azure

This blog has been running on the Azure platform for more than a year now, since I moved from my Linux hosting (WordPress with PHP and MySQL). It's still WordPress (I love it), but it sits on top of WAWS with MySQL. I had been experimenting with running WordPress on IIS with MySQL for quite a while, until I read that I can actually run WordPress with SQL Server instead of MySQL.

I'm a SQL Server guy, so I decided to give it a try. Running it on SQL Server means I can do queries, tuning, and everything else on a database that I am really familiar with. Running it with SQL Server is even easier now in Azure, by using a special installer developed by Brandoo.

 

Creating Azure SQL Database

So I started the deployment by creating a SQL database in Azure. It is a SQL Server engine, not MySQL. This is an empty database that will become the WordPress storage in the next step.

createdb0

You may choose Quick Create or Custom, whichever is convenient. Specify a new server if you have never created one before.

createdb1

Once the database is created, take note of the database name, SQL login, password, and server name. This information will be used for the WordPress configuration in the next step.

 

Install WordPress on SQL Server

I use WAWS to deploy my WordPress site. This is the fastest and easiest way to deploy a web site in Azure, because I do not need to bother with any of the underlying infrastructure. If you have not created one yet, create a Web Site by selecting Compute > Web Site > From Gallery.

create_WAWS

The next screen shows the different types of web applications available in Azure Web Sites. Select Brandoo WordPress under the Blog category. This distribution runs on SQL Server instead of the default MySQL.

brandoo-wordpress

The next dialog asks for the web site URL and database user name. Make sure to specify the same user name and password created earlier, so you can connect to the existing SQL database.

createwp-2

You will be connected to the existing database, and the WordPress installation should kick in. Once it completes, the website is listed in the web sites section as follows:

createwp-4

You can browse to the web site URL, for example http://mywpblog.azurewebsites.net, and the new WordPress blog will load. As proof that it really runs on SQL Server, open SSMS and connect to the SQL database using the server name and credentials created earlier.

ssmsdb
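
A quick way to see the WordPress schema from SSMS is to list its tables. A small sketch, assuming the installer used the default wp_ table prefix:

-- List the WordPress tables created by the installer.
-- [_] escapes the underscore wildcard in LIKE.
SELECT name
FROM sys.tables
WHERE name LIKE 'wp[_]%'
ORDER BY name;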

Now I have a new WordPress site running on SQL Server. The next step is to import all of my blog content from the old site to the new one. This is pretty straightforward using the WordPress importer/exporter plugins.

And http://choirulamri.org now runs on WordPress with SQL Server in Azure :).

Analyzing Perfmon with SQL Query (Part 2)

I wrote about how to import a Perfmon file into SQL Server in the previous post; now let's see how to analyze it with SQL queries. I will show a couple of examples that I use regularly. The exact queries may vary depending on the requirements and the server environment. The most common questions are as follows:

  • How high is the CPU utilization?
  • Do I have enough memory in the system?
  • Is there a bottleneck in the disk?

A quick query helps to give a high-level idea, but in-depth analysis is required to pinpoint what actually happens in the system. High CPU does not always mean a CPU problem until a detailed examination of the other related counters is done. In many cases, we need to investigate SQL Server internals using DMVs or Profiler to get a more comprehensive view. I will blog about this separately, so let's focus on how to mine the data from the imported Perfmon files.

 

Simple Query: How’s my CPU and Memory?

I want to see the CPU and memory information for the system. This is quite simple to do by querying the tables as follows:

SELECT ct.CounterID, RTRIM(ct.ObjectName), RTRIM(ct.CounterName), 
		CASE
			WHEN ct.[InstanceName] IS NULL THEN ''
			WHEN ct.[InstanceName] IS NOT NULL THEN RTRIM(ct.InstanceName)
		END AS "InstanceName", 
			AVG(cda.CounterValue) as Average,
			Min(cda.CounterValue) as MinValue,
			MAX(cda.CounterValue) as MaxValue,
			STDEV(cda.CounterValue) as StdValue,
			COUNT (cda.CounterValue) as Freq ,
			di.DisplayString
FROM CounterData cda
	JOIN CounterDetails ct ON cda.CounterID = ct.CounterID
	JOIN DisplayToID di ON cda.GUID = di.GUID
WHERE ((ct.CounterName = '% Processor Time' AND ct.ObjectName = 'Processor')
	OR [ct].[CounterName] = 'Available MBytes')
GROUP BY ct.CounterID, RTRIM(ct.ObjectName), RTRIM(ct.CounterName), ct.InstanceName, di.DisplayString
ORDER BY di.DisplayString, RTRIM(ct.ObjectName), RTRIM(ct.CounterName)

Here is the query result from my sample data:

counter-simple

It's clear that the average CPU utilization after tuning is lower than the baseline: it goes down from 19.4% to 13.3%. It's also useful to get not only the average, but also the MIN, MAX, and standard deviation. This gives a better idea of the spikes that might happen during sampling. High variation means the counter goes up and down frequently.

I usually use the following formula to assess system variation: Average + 2 x STDEV. Recalling basic statistics, for a normal distribution the interval within one standard deviation of the mean covers about 68% of the population, and within two standard deviations about 95%. I'm no statistics expert though; you may refer to proper sampling theory for more accuracy.
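
The formula can also be computed directly against the imported tables. A small sketch for the overall CPU counter:

-- Average plus two standard deviations of CPU utilization,
-- a rough upper estimate of where the counter normally peaks.
SELECT AVG(cda.CounterValue) + 2 * STDEV(cda.CounterValue) AS UpperEstimate
FROM CounterData cda
	JOIN CounterDetails ct ON cda.CounterID = ct.CounterID
WHERE ct.CounterName = '% Processor Time'
	AND ct.ObjectName = 'Processor'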

 

Filtering Data only for Working Hours

Let's continue by adding an additional filter to the query above. Assuming the busy hours are in the daytime, I can filter by limiting the CounterDateTime column. There is a small caveat here: the column is char with a lot of trailing whitespace, so a direct datetime conversion will fail. Combining RTRIM or LEFT with CONVERT to force it into a datetime solves the problem.

----filter by working hours 9AM to 5pm
SELECT ct.CounterID, RTRIM(ct.ObjectName), RTRIM(ct.CounterName), 
		CASE
			WHEN ct.[InstanceName] IS NULL THEN ''
			WHEN ct.[InstanceName] IS NOT NULL THEN RTRIM(ct.InstanceName)
		END AS "InstanceName", 
			AVG(cda.CounterValue) as Average,
			Min(cda.CounterValue) as MinValue,
			MAX(cda.CounterValue) as MaxValue,
			STDEV(cda.CounterValue) as StdValue,
			COUNT (cda.CounterValue) as Freq ,
			di.DisplayString
FROM CounterData cda
	JOIN CounterDetails ct ON cda.CounterID = ct.CounterID
	JOIN DisplayToID di ON cda.GUID = di.GUID
WHERE ((ct.CounterName = '% Processor Time' AND ct.ObjectName = 'Processor')
	OR [ct].[CounterName] = 'Available MBytes')
	AND DATEPART(HOUR,
					(CONVERT(datetime, LEFT([CounterDateTime],19)))) BETWEEN 9 AND 17
GROUP BY ct.CounterID, RTRIM(ct.ObjectName), RTRIM(ct.CounterName), ct.InstanceName, di.DisplayString
ORDER BY di.DisplayString, RTRIM(ct.ObjectName), RTRIM(ct.CounterName)

 

Using PIVOT to compare Baseline and Tuning data
I have 2 sets of counter data in my database: BaselineSept2013 and TuningDec2013. Comparing them gives a better view to narrow down the analysis. There are a couple of important counters related to SQL Server performance, so I usually pull the following for a quick analysis:

  • % Processor Time
  • Avg. Disk sec/Read
  • Avg. Disk sec/Write
  • Available MBytes
  • Page life expectancy
  • Index Searches/sec
  • Full Scans/sec

These counters are related to each other, so it's important to understand their relationships and not judge system performance by looking only at CPU. Brent has a very good post about this. Jonathan also wrote a nice explanation of the fundamental SQL Server counters in his post.

I can use a CTE as temporary storage before the PIVOT, and select only the 7 counters that are useful for a quick SQL Server analysis. The following query compares performance data between the baseline and tuning data sets:

--summarize Average Value with pivot, and compare between Baseline and Tuning dataset
WITH CTEPerfData(SourceData, CounterID, ObjectName,	CounterName, 
						InstanceName, CounterValue)
AS
(
	SELECT di.DisplayString, ct.CounterID, RTRIM(ct.ObjectName), RTRIM(ct.CounterName), 
			CASE
				WHEN ct.[InstanceName] IS NULL THEN ''
				WHEN ct.[InstanceName] IS NOT NULL THEN RTRIM(ct.InstanceName)
			END AS "InstanceName", 
				cda.CounterValue
	FROM CounterData cda
		JOIN CounterDetails ct ON cda.CounterID = ct.CounterID
		JOIN DisplayToID di ON cda.GUID = di.GUID
	WHERE ((ct.CounterName = '% Processor Time' AND ct.ObjectName = 'Processor')
		OR ([ct].[CounterName] = 'Avg. Disk sec/Read' )
		OR ([ct].[CounterName] = 'Avg. Disk sec/Write' )
		OR [ct].[CounterName] = 'Available MBytes'
		OR [ct].CounterName = 'Page life expectancy'
		OR [ct].CounterName = 'Index Searches/sec'
		OR [ct].CounterName = 'Full Scans/sec')
			AND DATEPART(HOUR,
					(CONVERT(datetime, LEFT([CounterDateTime],19)))) BETWEEN 9 AND 17
)
SELECT *
FROM CTEPerfData
	PIVOT (AVG(CounterValue)
		   FOR SourceData IN ([BaselineSept2013], [TuningDec2013])) AS pvt

The query yields the following results, so I can compare the important counters side by side between the before-tuning (Sept) and after-tuning (Dec) data.

perfmon-pivot

It is worth explaining the relationships between these counters, and how to derive conclusions about system performance from them, in the next post. In the meantime, you may want to read Jimmy May's post about the VitalSign spreadsheet.

Analyzing Perfmon with SQL Query (Part 1)

Performance counters are very useful for analyzing OS and SQL Server performance. Most DBAs know how to collect performance counters, so I won't cover that in this post; just in case you don't, check Brent's post here. Once the counters are collected, the next step is to analyze them to find performance problems. I posted about Perfmon analysis with PAL, but in this post I will show a different method: importing the data into SQL Server and querying it with T-SQL.
Using T-SQL queries has several advantages over PAL:

  • You have full control over which counters to analyze and how to summarize them (average, min, max, etc.)
  • You can compare several sets of collected data and see their trends. You can filter the query and build a crosstab report to see the trend across different time intervals

So the steps are as follows:

  1. Create empty database in SQL Server
  2. Setup ODBC System DSN
  3. Import Perfmon to SQL Server with relog tool
  4. Analyze perfmon by using T-SQL

 

Create empty database in SQL Server

Create an empty database in SQL Server; this will be the storage for the imported data. Sizing is important because the data is quite big: in my case, a 1 GB Perfmon file became 3 GB after being imported into SQL Server. That is massive growth, but you will see why after looking at the information stored in the database. I usually pre-size the data file to 5 GB with 100 MB growth to improve performance. Don't forget to change the recovery model to simple to prevent unexpected log file growth.
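
A sketch of that pre-sizing in T-SQL, using the PerfDB name that appears later in this post; the file paths and log size are example values, so adjust them to your server:

-- Pre-size the data file to 5 GB with 100 MB growth, then switch to simple recovery.
CREATE DATABASE PerfDB
ON PRIMARY
	( NAME = PerfDB_data,
	  FILENAME = 'D:\SQLData\PerfDB.mdf',	-- example path
	  SIZE = 5GB,
	  FILEGROWTH = 100MB )
LOG ON
	( NAME = PerfDB_log,
	  FILENAME = 'D:\SQLLog\PerfDB.ldf',	-- example path
	  SIZE = 512MB,
	  FILEGROWTH = 100MB );

ALTER DATABASE PerfDB SET RECOVERY SIMPLE;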

 

Setup ODBC System DSN

The next step is to create an ODBC DSN to connect to SQL Server. This is straightforward using the ODBC Administrator in Control Panel. I chose the 64-bit version because both my client machine and SQL Server are 64-bit. Also make sure to select System DSN instead of one of the other DSN types.

odbc-1

For some reason, relog only works with the legacy SQL Server driver rather than SQL Server Native Client. So choose the "SQL Server" driver instead of SQL Server Native Client when selecting the driver type.

odbc-2

The next screen asks for the connection name, the SQL Server instance name, and the credentials to connect to SQL Server. Another important note: set the default database to the one created in the previous step. I created the empty database PerfDB, so I specify that name as my default database here.

odbc-3

 

Once created, test the connection to make sure it’s properly configured.

My machine is Windows 8.1 64-bit with SQL Server 2012 (also 64-bit). I created a 64-bit System ODBC DSN with the SQL Server driver version 6.xx, and relog connected successfully to my SQL Server. Initially I used SQL Server Native Client in 32- and 64-bit combinations, but I always encountered errors when executing the relog command.

 

Import Perfmon to SQL Server with relog tool

Once ODBC is configured, the next step is to import the Perfmon data into SQL Server. Relog is installed by default in Windows, so there is no need to add any extra tool. The syntax is as follows:

relog perfmonFileName.blg -f SQL -o SQL:ODBCDsnName!CounterIdentifier

The CounterIdentifier is any name used to tag your data collection set. For example, if I have 3 Perfmon files that were all collected during August 2013, I can group them together under one identifier: DataAugust2013. This is very useful when I want to group several Perfmon files under different categories: I can collect baseline data and import it under a "Baseline" identifier, then collect another set and tag it under a "Tuning" identifier.
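
For example, importing three files collected in August 2013 under one identifier might look like this; the file names are hypothetical, and relog5 is the System DSN name used in this post:

relog file1.blg file2.blg file3.blg -f SQL -o SQL:relog5!DataAugust2013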

To run the relog command, open a command prompt and navigate to the location of the Perfmon file; to make it easy, stay in the same folder as the Perfmon file. Then run the following command:

relog DataCollector01.blg -f SQL -o SQL:relog5!BaselineSept2013

Let me explain the command here:

  • The name of the Perfmon file is DataCollector01.blg
  • The output type is SQL Server, specified with the -f parameter
  • The ODBC System DSN name is relog5
  • The identifier is BaselineSept2013

The command imports all the counter data into SQL Server. If you want to filter the data by applying additional parameters, please read the full relog tutorials here and here.

 

See what we got after the import

Let's connect to PerfDB and see what we have after the import: there are 3 tables created inside the DB.

relog_db

 

Run the following simple queries to see what we got in each table:

SELECT * FROM [dbo].[DisplayToID]

SELECT TOP 3 * FROM [dbo].[CounterDetails]

SELECT TOP 3 * FROM [dbo].[CounterData]

 

The query yields the following result:

relog_db1

As you can see here, I used BaselineSept2013 as the identifier, and it is reflected in the DisplayString column of the table. I also imported another set of files and named it TuningDec2013. So I now have 2 groups of imported files that I can compare, to see the improvement between the baseline and the tuning period.

Now let's get to the interesting part: querying the imported data to analyze SQL Server performance. I will cover this in Part 2, stay tuned :).


SQLServer UG: Managing your tempdb

Updated on May 23, 2013

Tempdb is a simple piece of the system that is often overlooked in SQL Server planning. Many DBAs ask for more CPU, RAM, and storage, yet leave tempdb sitting on drive C. Tempdb can be a big bottleneck in system performance, because it holds many of the structures SQL Server needs to operate.

What’s in tempdb, actually?

Here is a short list of tempdb contents:

  • User objects: your @tableVariables and #tempTables are 2 common examples
  • Internal objects: required internally for SQL Server to run, but not directly created by users. Examples: worktables for hash joins and index rebuild operations
  • Version store: obviously, when snapshot isolation is enabled, SQL Server needs to keep the version store in tempdb

To summarize: when application queries use a lot of temporary operations, tempdb grows very fast as well. How do we forecast tempdb size? Sad to say but true: there is no exact formula to calculate it.
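
Since there is no formula, measuring actual usage is the practical alternative. A quick sketch using the file space DMV; page counts are 8 KB pages, so dividing by 128 converts them to MB:

-- Break down current tempdb usage by the content types listed above.
SELECT
	SUM(user_object_reserved_page_count)     / 128.0 AS UserObjectsMB,
	SUM(internal_object_reserved_page_count) / 128.0 AS InternalObjectsMB,
	SUM(version_store_reserved_page_count)   / 128.0 AS VersionStoreMB,
	SUM(unallocated_extent_page_count)       / 128.0 AS FreeMB
FROM tempdb.sys.dm_db_file_space_usage;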

Sunil Agarwal from the SQL product group has written a series on tempdb monitoring and maintenance. I suggest you read it for a better understanding. His post collection is here: http://blogs.msdn.com/b/sqlserverstorageengine/archive/tags/tempdb/

I presented this topic to the SQL Server user group at the Microsoft office in Jakarta on Feb 28, 2013. Below is the summary of tempdb tuning and optimization:

tempdb

 

The complete PPT and sample code are available for download here:

Jakarta–SQL Server 2012 Community Launch

I'm organizing a community event together with SQLServer-Indo, MUGI, and @meisariarvini, the new SQL PM at MS Indonesia. We have also invited @cfrandall to come to town! The official invite is below.


SQL Server 2012, the newest version of SQL Server, has arrived. Community friends will certainly want to know more about the new capabilities it delivers, such as how SQL Server 2012 can support Hybrid IT, a combination of cloud computing and on-premise computing. SQL Server 2012 also opens the door for applications built on non-Microsoft technologies to use these advanced capabilities, for example for developers of PHP-based applications. This SQL Server 2012 Community Launch is aimed at developer and IT professional friends who are interested in learning about these capabilities in more detail. Take the opportunity to meet the speakers and talk with them directly.

Location:
Bursa Efek Indonesia, Tower II, 18th Floor,
PT. Microsoft Indonesia,
Microsoft Auditorium.

Time:
Saturday, July 7, 2012, 08.30 – 16.00

Audience:
Developers and IT Pros

 

Keynote Speaker Profile

 

Chris Randall
Director of Training, SolidQ, North America.

Chris Randall is the Director of Training at SolidQ, North America. Chris is known for his competence as a public speaker at technology conferences worldwide, and he specializes in training, courseware workbooks, T-SQL, SQL Server Integration Services, SSAS, and SSRS. Chris is a SolidQ mentor who has been familiar with SQL Server since 1996, and is active as a trainer, consultant, and speaker. He has contributed a great deal of training content related to T-SQL, SSIS, and Business Intelligence architecture. Chris has taught thousands of students in classrooms and at world-class conferences, including SQL PASS, SQL Saturday, and various conferences for Microsoft Certified Trainers (MCT). He holds MCT and MCITP certifications for the SQL Server 2008 portfolio, and he previously served as a board member of the Denver SQL Server User Group, a PASS chapter. Beyond his teaching and consulting duties, Chris is excited to come to Indonesia and present the opportunities that come with SQL Server 2012.

Follow Chris on Twitter at @cfrandall and on his blog at http://cfrandall.com/

 

Here are the topics that will be covered at this SQL Server 2012 Community Launch:

Agenda

Main Session

  09.00 – 09.15   Opening by Manish Chopra, M&O Director, Microsoft Indonesia
  09.15 – 10.00   Case Study: Implementing SQL Server 2012 Business Intelligence
  10.00 – 10.45   Notes from the Field: PHP Development with SQL Server. Bastian Harahap, CTO, Urbanesia
  10.45 – 12.15   Embracing Big Data with SQL Server 2012. Chris Randall, Director of Training, SolidQ, North America
  12.15 – 13.00   Lunch

Breakout Session

  13.00 – 14.00
    IT Pro: Best Practices Running SQL Server 2012 on Hyper-V. Hotma Roland Pasaribu, MCT, Microsoft User Group Indonesia
    Developer and BI: Build Modern Windows 8 Metro Apps and SQL Azure. Rudy Setyo Purnomo, CEO, Erudeye

  14.00 – 15.00
    IT Pro: AlwaysOn High Availability Group in SQL Server 2012. Kiki Rizki Noviandi, Microsoft MVP (SQL)
    Developer and BI: Powerful Data Visualization with PowerView. Hendra Eka Putra, Microsoft MVP (SQL)

  15.00 – 15.15   Break

  15.15 – 16.15
    IT Pro: Monitoring SQL Server 2012 with SCOM. Faisal Susanto, Senior Consultant, Microsoft Services
    Developer and BI: Performance Tuning in SQL Server 2012. Choirul Amri, Senior Consultant, Microsoft Services

  16.15 – 16.30   Closing remarks and door prizes

 

 

Registration

To register, send an email including the following information: full name, company, and a phone number where you can be reached. The event is free, but given the limited capacity, community members are asked to include complete information so that attendance can be confirmed. We look forward to seeing you; please register no later than July 3.

Community members can register through the following contacts:

SQL Server User Group Indonesia: Cahyo Listyanto | Email: sqlserverindo@live.com

Microsoft User Group Indonesia: Abdul | Email: Abdullah0813@live.com


What Are the Best SQL Resources on the Internet?

Well, I receive this question again and again, especially when delivering talks to technical communities. Veteran DBAs may simply Bing or Google it, but most newbies find themselves overwhelmed by the tons of info on the net.

Let me share the favorite sites that I visit regularly:

 

SQLServerCentral

I have subscribed to their newsletter since 2001 and have never gotten bored. I even wrote 2 articles for them, but now I just do not have enough time anymore. This is the biggest SQL article repository on the net, with a very active community. SQL MVPs Brian Knight, Andy Warren, and Steve Jones founded and manage the site.

 

http://www.brentozar.com/

Brent is a SQL MVP and an MCM as well, and has posted many interesting pieces on I/O planning, virtualization, and other random stuff.

 

http://sqlskills.com

It contains the most advanced SQL Server material on the planet, maintained by the highly respected SQL experts Paul Randal and Kimberly Tripp. Paul is the creator of DBCC CHECKDB from his time at Microsoft.

 

Technet and MSDN

Many resources such as videos, how-tos, and whitepapers.

Resources on Capacity and IO Planning

I am posting a collection of links and resources related to capacity planning, planning for performance, and I/O planning. This is also a kind of note to self, so it will be easier for me to find these links in the future.

 

Estimating the Size of a Clustered Index
http://msdn.microsoft.com/en-us/library/ms178085.aspx

Estimating the Size of a Database
http://msdn.microsoft.com/en-us/library/ms187445.aspx

Proper I/O Capacity Planning and Configuration Resources for SQL Server
http://blogs.msdn.com/b/askjay/archive/2011/03/25/proper-i-o-capacity-planning-and-configuration-resources-for-sql-server.aspx

Predeployment I/O Best Practices
http://technet.microsoft.com/en-us/library/cc966412.aspx

Physical Database Storage Design
http://technet.microsoft.com/en-us/library/cc966414.aspx

Storage and SQL Server capacity planning and configuration
http://technet.microsoft.com/en-us/library/cc298801.aspx

Storage Performance Testing with SQLIO
http://www.brentozar.com/archive/2008/11/storage-performance-testing-with-sqlio/

SQLIO Tutorial: How to Test Disk Performance
http://www.brentozar.com/archive/2008/09/finding-your-san-bottlenecks-with-sqlio/

Tuning SQL Server for SharePoint Performance

Updated on May 23, 2013

I had an opportunity to present at the SQL Server User Group Indonesia monthly meeting on Oct 27, 2011 about SQL performance tuning from a SharePoint perspective. Most DBAs are familiar with the SQL side, but not with what SharePoint expects from SQL Server. Understanding how SharePoint works is really important when tuning SQL Server.

Most SharePoint content is stored in SQL Server: documents, pictures, pages, lists, and the configuration. Documents and pictures are stored as binary in SQL, so do not expect to find the physical files on the SharePoint server.

Most of the physical files that are not stored in the database are located under the 12 hive (14 hive in SharePoint 2010). This folder contains customization files such as CSS, master pages, features, and ASPX pages. On some occasions developers prefer to store master pages and layouts in a library, so they actually end up in your SQL tables. Please communicate closely with the development team to understand exactly where files are located. However, rest assured that 90% of SharePoint content is inside SQL Server.

Below is some guidance for configuring SQL Server for SharePoint performance:

Configure the instance-level index fill factor to 70%. Most veteran DBAs know that this setting is optimized for read operations with moderate inserts and updates. You may think of optimizing the fill factor at the table and index level, but this is not possible for SharePoint databases: Microsoft does not recommend altering the tables and indexes of SharePoint DBs. The default fill factor is 0, so you need to change this setting. The 70% is not a magic bullet; keep monitoring and adjust the fill factor accordingly.

Set MAXDOP to 1; contrary to popular belief, SharePoint does not benefit much from parallelism. There are lots of cases where SharePoint produces ad-hoc queries for which SQL Server may struggle to find the optimal plan. Setting the value to 1 forces SQL Server not to create parallel plans. More discussion and testing on this are here (a configuration sketch follows the links):

http://blogs.msdn.com/b/grahamk/archive/2009/06/15/troubleshooting-sql-server-deadlocks-and-performance-when-running-sharepoint.aspx

http://blogs.msdn.com/b/rmeure/archive/2011/12/23/optimizing-sql-for-sharepoint.aspx
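
Both the fill factor and MAXDOP recommendations are instance-level settings. A minimal sketch of applying them; note that a new fill factor only affects indexes built or rebuilt afterwards, and the option itself takes effect after a service restart:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'fill factor', 70;			-- applied after a service restart
EXEC sp_configure 'max degree of parallelism', 1;
RECONFIGURE;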

 

Turn OFF auto update statistics, and configure a proper schedule to refresh the statistics instead. Make sure the schedule is frequent enough to avoid outdated statistics.
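
A sketch of the database-level switch, using a hypothetical content database name WSS_Content; the scheduled refresh itself would typically be an Agent job running sp_updatestats:

-- Stop SQL Server from updating statistics automatically on this database.
ALTER DATABASE [WSS_Content] SET AUTO_UPDATE_STATISTICS OFF;

-- In the scheduled job, refresh the statistics explicitly instead.
EXEC [WSS_Content]..sp_updatestats;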

Plan your data and log file growth. The default data file growth in SQL Server is 1 MB, which makes no sense for a SharePoint system. Imagine somebody uploading 5 MB files, with 100 uploads per day: your data file would have to grow 500 times a day, which creates a bottleneck in the I/O system and usually escalates to CPU. I usually do capacity planning to forecast the database size for the next 3-6 months. So if you think your data growth will be 50 GB in 6 months, set the data file size to 50 GB and configure its growth increment to something around 500 MB. Under the previous scenario, your database then only needs to grow once per day to accommodate the 100 uploads (500 MB). This reduces the I/O bottleneck significantly and speeds up SharePoint performance.
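
A sketch of that pre-sizing, again with hypothetical database and logical file names:

-- Grow the data file once, up front, instead of 1 MB at a time.
ALTER DATABASE [WSS_Content]
MODIFY FILE ( NAME = 'WSS_Content',		-- logical data file name, example value
	SIZE = 50GB,
	FILEGROWTH = 500MB );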

Plan your tempdb; this could be a very long discussion and deserves a separate post. There is a reference from Paul Randal about tempdb optimization here:

http://www.sqlskills.com/BLOGS/PAUL/post/A-SQL-Server-DBA-myth-a-day-(1230)-tempdb-should-always-have-one-data-file-per-processor-core.aspx
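
If monitoring shows allocation contention in tempdb, adding data files is the usual remedy. A sketch of adding one more tempdb data file; the path and sizes are example values:

ALTER DATABASE tempdb
ADD FILE ( NAME = tempdev2,
	FILENAME = 'T:\TempDB\tempdev2.ndf',	-- example path
	SIZE = 4GB,
	FILEGROWTH = 512MB );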

 

There is a long whitepaper from Microsoft about this:

http://www.microsoft.com/download/en/details.aspx?id=24282

 

KPI Report with Reporting Services 2008 R2

There is a new gem in SQL Server 2008 R2: the Indicator control. It enables us to create KPI-style reports in an easy way. You may ask how it differs from the Gauge, which already exists in SQL Server 2008.

Well, gauges are cool, displaying a speedometer-style indicator, but sometimes they are just too much. I don't want the fancy circle style; instead, I want a simple traffic light or dot indicator like this:

image

If you are a Reporting Services veteran then you know what to do: upload a couple of traffic-light symbols and bind them to a cell, then write an expression to hide or unhide each symbol accordingly. We don't need to do this anymore in R2. Here is the easy way to achieve it.

First, as usual, we need to provide a cell as a placeholder for the indicator. Second, place the Indicator control inside the cell, then bind it to the corresponding columns in the dataset.

image

The last step is configuring the expression logic for the states. By default it shows only 3 states: red, yellow, and green, but we can add as many states as the requirements demand. Once that is done, just render the page to show the KPI report.

Here is my sample KPI report. It requires Reporting Services 2008 R2 and the AdventureWorksDW2008 database.

The same concept also applies to displaying trends in R2. I posted about creating trend sparklines in SSRS 2008, which required a bit of a workaround. Not in the R2 version, because it has a dedicated Sparkline control as well.