
SharePoint Club Indonesia: First meet-up

I have been running the SQL Server community for quite a while, but [secretly] work on SharePoint in multiple projects. We have quite a number of SharePoint pros here in Jakarta, and in Indonesia in general. Even though the SP practitioners know each other, hang out, and have a very active WhatsApp group, we never had a formal meet-up like SQLServer-Indo.

Hence I am initiating the first meeting of a SharePoint club. We still do not have a formal name; let the folks decide tomorrow during the meeting. We will be talking about SharePoint Apps in SharePoint 2013. Provider-hosted apps are very interesting because they allow us to run non-.NET apps alongside SharePoint. Boike, one of the EBIZ Cipta technical leads, will be the first speaker. Let's keep the ball rolling!

Knowledge sharing – SharePoint Geek.

Topic: Infrastructure Deployment for SharePoint Apps – Provider Hosted.
Speaker: Boike Syahrizal, Technical Team Lead – Ebiz Cipta Solusi

Place: Office88, 7th Floor – Kota Kasablanka, Jakarta (EBIZ Office)
Day/Date: Thursday, 11 June 2015
Time: 18.30 – 20.00

Registration via email to sharepointclubin@outlook.com

Light dinner is available, first come, first served.

Azure 101 Part (2): Understanding Availability Sets

This post is the second installment of Azure 101. I posted about affinity groups a while ago to explain how we control the location of cloud services. Understanding availability sets is very important if we care about high availability of our VMs.

 

Increasing high availability

An availability set is the way Azure controls the physical host placement of VMs, so they are provisioned on different racks in the Azure data center. If two VMs are created without specifying an availability set, there is no guarantee that they will land on different racks; in the case of planned maintenance or a rack failure, all of them could be offline at once.
Grouping multiple VMs in the same availability set ensures that they are distributed across different power and network sources. This means that if one of them becomes unavailable, the one on another rack is still fine. It also ensures the configuration meets the 99.95% Azure SLA, as stated here.
The following diagram depicts multiple VMs on different racks for the purpose of a SQL Server Availability Group cluster.

azure-availabilityset

As outlined in the diagram, each VM is connected to different power and network sources, which increases availability during both planned and unplanned downtime. Multiple VMs serving the same purpose should be placed in the same availability set to provide high availability or redundancy, for example:

  • Configuring multiple VMs as nodes in a SQL Server AlwaysOn Availability Group or Failover Cluster Instance
  • Load balancing between VMs that host various services such as IIS or file servers


Creating an Availability Set

There are several ways to create an availability set. The easiest is to define its name during VM creation. Make sure to choose From Gallery so the following window appears:

azure-createavset

Once multiple VMs in the same availability set are created, they appear under the same set as follows:

azure-exampleavset
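If you prefer scripting, the availability set can also be assigned at creation time. Here is a minimal sketch using the classic service management cmdlets; the VM, set, and service names below are examples only:

# Pick a recent gallery image (same pattern used elsewhere on this blog)
$imageName = (Get-AzureVMImage |
    Where-Object {$_.Label -like "Windows Server 2012*"} |
    Sort-Object PublishedDate -Descending)[0].ImageName

# -AvailabilitySetName places the VM into the set at creation time
New-AzureVMConfig -Name "sqlnode1" -InstanceSize Small `
        -ImageName $imageName -AvailabilitySetName "sql-avset" |
    Add-AzureProvisioningConfig -Windows `
        -AdminUserName "myadmin" -Password "MyP@ssw0rd!" |
    New-AzureVM -ServiceName "mycloudservice" -Location "Southeast Asia"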

If a VM has already been created but does not belong to any availability set, the following PowerShell command will update the VM to join one:

Get-AzureVM -ServiceName "theNameofVMsCloudService" -Name "VMNamehere" |
 Set-AzureAvailabilitySet -AvailabilitySetName "nameofAvailabilitySet" |
 Update-AzureVM

Updating a VM's availability set causes a reboot, so plan accordingly.

June 2014 SQLServer-Indo Meet-Up: Power BI and Azure Survival Guide

We are organizing the SQLServer-Indo meet-up this month with a special guest speaker. Julie Koesmarno is a SQL Server MVP living in Australia. She has presented at various international events such as the PASS Summit and SQL Rally in the USA. We will set up a Lync conference call so she can present remotely, broadcast into the room.

As usual, there will be two topics presented in the meeting, as follows:

Session 1: Azure Survival Guide for [On Premise] SQL Server DBA
By Choirul Amri, Senior Consultant – Microsoft Services
Are you confused by all the Azure buzzwords and new stuff? Struggling to understand affinity groups, availability sets, cloud services, and VNets? Do you know when to use a SQL Server VM instead of SQL Azure Database? What are the catches and traps of configuring Availability Groups in Azure? Join the session and grab a simplified, practical discussion of Azure architecture.

Session 2: Building Your Myth Busting Lab With Power BI
By Julie Koesmarno, SQL Server MVP
Did you know that a large portion of Information Workers’ and Data Analysts’ tasks are akin to “myth busting”? How can Power BI be used for these myth-busting tasks? What if myth busting in business applications could be as easy as a few clicks or simply asking questions in plain English?
Myth busting requires gathering facts, modeling data for repeatable usage and/or testing, exploring the captured data, proving a few theories, coming up with “what-ifs”, and making a well-informed conclusion. This session will demonstrate how to create an ideal myth-busting lab covering all these aspects in one suite, Power BI. The demo will show how these activities are carried out by Information Workers, Data Analysts, and IT Administrators individually, and how Power BI enables team collaboration. You will learn best practices for delivering self-service and mobile Business Intelligence solutions with Power BI that you can try at home, no safety goggles required!

Speaker Bio
Julie Koesmarno, SQL Server MVP, MCSE Data Platform and Business Intelligence, is a SQL Server and BI consultant at LobsterPot Solutions, with a bachelor’s degree in IT. She has over 10 years of experience working with SQL Server on large-scale, multi-million-dollar OLTP and ETL projects as a SQL Server developer and system analyst/designer. Her recent projects include delivering SQL Server 2012 data warehouse and BI solutions for a leading finance company in the US and Australia. She blogs regularly at her site: http://www.mssqlgirl.com/

When: Thursday, June 26, 2014
Where: Microsoft Indonesia Auditorium – BEI Building Tower 2 18th Floor Jakarta
Time: 18.45 – 21.00

Agenda:
18.15 – 19.00 Registration
19.00 – 19.45 Session 1: Azure Survival Guide for [On Premise] SQL Server DBA
15 minutes break
20.00 – 21.00 Session 2: Building Your Myth Busting Lab With Power BI
21.00 – 21.15 Wrap up and closing

Registration:
Please send your name and contact details to sqlserverindo@live.com

Light dinner is available on a first come, first served basis, so don’t be late.

See You!
Moderator SQLServer-Indo

Azure 101 Part (1): Understanding Affinity Groups

Azure comes with new keywords that many IT pros and developers may not be familiar with. Understanding the key concepts is important when designing and deploying Azure applications, especially if we care about high availability and disaster recovery. This post looks at the meaning of, and relationship between, three important concepts: Affinity Groups, Availability Sets, and Cloud Services.

 

Affinity Groups

The idea of Affinity Groups is quite simple: applications, services, and infrastructure should be in the same data center for best performance. Of course this also depends on the connections between locations, but keeping all of them in the same region gives faster access than spreading them across multiple regions. An Affinity Group is the way Azure ties various resources as close together as possible, by keeping them in the same region. At the time of this writing, Azure data centers are located in the following nine areas:

  • North Central US
  • South Central US
  • East US
  • West US
  • South America
  • North Europe
  • West Europe
  • East Asia
  • Southeast Asia

 

Creating Affinity Groups

An Affinity Group can be created either in the Management Portal or with PowerShell. The setup is located under the Settings menu of the portal. Once it is created, it can be assigned to other Azure resources to keep them close together in the same region.

AG

There are three fields to fill in when creating an Affinity Group:

  • Name: the name of the affinity group
  • Description: the description of the affinity group
  • Region: the region where this affinity group will be located

create AG

Creating an Affinity Group with PowerShell is straightforward, using the following command:

New-AzureAffinityGroup -Name "SEA IaaS" -Location "Southeast Asia" -Description "IaaS group in SEA" -Label "SEA IaaS Group"

The name must be unique within an Azure subscription. This does not prevent us from creating multiple Affinity Groups in the same region, as long as their names are different. Once created, an Affinity Group's region cannot be modified.

 

How do Affinity Groups relate to other Azure resources?

The relationship between Affinity Groups and other resources can take many forms, depending on the sequence and approach used when provisioning those resources. Below are two examples, followed by a short PowerShell sketch:

  • Storage Account: provisioning a storage account requires either a Location or an Affinity Group as a mandatory parameter. The storage account is where all data-related items live: BLOBs, database files, tables, and the VHD files belonging to Virtual Machines.
  • Cloud Service: similar to a storage account, Cloud Service provisioning also requires a location-related parameter, which can be supplied as an Affinity Group or a region name. I will explain this in more detail in another post.
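A minimal PowerShell sketch of both cases, using the classic service management cmdlets; the account and service names here are examples only:

# Both calls accept -AffinityGroup in place of an explicit location
New-AzureStorageAccount -StorageAccountName "myteststorage01" `
    -Label "Test storage" -AffinityGroup "SEA IaaS"

New-AzureService -ServiceName "mytestcloudsvc" `
    -Label "Test cloud service" -AffinityGroup "SEA IaaS"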

To sum it up: an Affinity Group keeps storage accounts and cloud services together in one region, with other Azure services sitting on top of them.


Automating SQL Server VM Provisioning on Azure with PowerShell

Azure IaaS provides a flexible way to fire up new VMs without worrying about upfront infrastructure cost. We can provision a new VM for DEV/TEST quickly and proceed with development tasks. A common task I do is installing and setting up a new SQL Server VM so it is ready as a test bench or for isolated development. It takes a couple of clicks and waits, and is sometimes tedious.

This article explains how to provision a new SQL Server VM by using the Azure Management Portal. It is easy, but still requires a couple of clicks and waiting in between. We can automate those tasks with PowerShell, so everything can be done by executing a single script file. That article also has step-by-step instructions on how to deploy a simple SQL VM on Azure, but I will take another approach in this post. The script in this post does the following during VM provisioning:

  • Provision a stand-alone SQL Server, meaning it does not join an AD domain
  • Provision a storage account to store the VHD
  • Deploy the VM to a specific Azure region, supplying an Affinity Group to control its location

Since this is a stand-alone deployment, no Virtual Network will be created. The VM is accessible through its public cloud service name and endpoint port number.

The first step is to connect to the Azure account from the Windows Azure PowerShell console as follows:

Add-AzureAccount

A pop-up will appear asking for the Azure account credentials.

 

Declare all required variables for Affinity Group and storage account

The following lines declare the parameters related to the cloud service name, affinity group, and storage account.

 

$location = "Southeast Asia"
$affinityGroupName = "SQL-SEA"
$affinityGroupDescription = "SQL SEA Affinity Group"
$affinityGroupLabel = "SQL SEA IaaS Group"
$storageAccountName = "sql2012teststorage"
$storageAccountLabel = "sql2012teststorage Storage Account"
$storageAccountContainer = "https://" + $storageAccountName + ".blob.core.windows.net/vhds/"

 

Declare variables for VM configuration

Azure provides various VM images, so the script needs to know which image to pick for provisioning. Executing Get-AzureVMImage and filtering on the label name is one way to retrieve what we want.

$sqlImageName = (Get-AzureVMImage | where {$_.Label -like "SQL Server 2012 SP1 Enterprise*"} | sort PublishedDate -Descending)[0].ImageName
$sqlServerName = "kintamanisql"
$cloudServiceName = "kintamanicloud" 
$vmAdminUser = "myadmin" 
$vmAdminPassword = "Company!000"

 

Create Affinity Group

An Affinity Group controls the location of the cloud service, so we can put the VM in a specific region closer to users. Use the following lines to create the Affinity Group:

New-AzureAffinityGroup `
    -Name $affinityGroupName `
    -Location $location `
    -Description $affinityGroupDescription `
    -Label $affinityGroupLabel

 

Create Azure storage account

A new storage account will be created to store the VHD image. This account is located in the Affinity Group specified above, which is in Southeast Asia.

New-AzureStorageAccount `
    -StorageAccountName $storageAccountName `
    -Label $storageAccountLabel `
    -AffinityGroup $affinityGroupName

 

Create cloud service

The cloud service is the public-facing component of the VM to the external world. It is a service with an FQDN, accessible from the internet via endpoints on specific ports. The cloud service is tied to the Affinity Group, so it is bound to a specific region in an Azure data center.

New-AzureService -ServiceName $cloudServiceName -Label "Cloud for SQL VM" -AffinityGroup $affinityGroupName

 

Create SQL Server VM

Once the storage account and cloud service are created, the VM is ready to be provisioned. All the endpoints required to access SQL Server services are also included in the VM creation.

New-AzureVMConfig `
    -Name $sqlServerName `
    -InstanceSize Small `
    -ImageName $sqlImageName `
    -MediaLocation "$storageAccountContainer$sqlServerName.vhd" `
    -DiskLabel "OS" | 
    Add-AzureProvisioningConfig `
        -Windows `
        -DisableAutomaticUpdates `
        -AdminUserName $vmAdminUser `
        -Password $vmAdminPassword |
    Add-AzureEndpoint -Name "SQL" `
        -Protocol "tcp" `
        -PublicPort 22001 `
        -LocalPort 1433 |
    New-AzureVM -ServiceName $cloudServiceName

We can consolidate all of those lines into one PowerShell script file so it is ready for execution. The following file puts all of those tasks together:
CreateSQLServerVMAzure.ps1

It will take a while for the VM provisioning to run. Once completed, select the VM name and click Connect to download the RDP file.

azure-connect

Once downloaded, open the RDP file to start a Remote Desktop session on the VM.
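Alternatively, the RDP file can be fetched without the portal. A minimal sketch using the same variables as the script above:

# Download the RDP file for the new VM and launch the Remote Desktop client
Get-AzureRemoteDesktopFile -ServiceName $cloudServiceName `
    -Name $sqlServerName -Launch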


Running WordPress with SQL Server on Azure

It has been more than a year since this blog moved from my Linux hosting (WordPress with PHP and MySQL) to the Azure platform. It’s still WordPress (I love it), but it sits on top of WAWS with MySQL. I had been experimenting with running WordPress on IIS with MySQL for quite a while, until I read that I could actually run WordPress with SQL Server instead of MySQL.

I’m a SQL Server guy, so I decided to give it a try. Running on SQL Server means I can do queries, tuning, and everything else on a database I am really familiar with. Running WordPress on SQL Server is even easier now in Azure, using a special installer developed by Brandoo.

 

Creating Azure SQL Database

So I started the deployment by creating a SQL database in Azure. It is a SQL Server engine, not MySQL. This is an empty database that will become the WordPress storage in the next step.

createdb0

You may choose Quick Create or Custom, whichever is convenient. Specify a new server if you have never created one before.

createdb1

Once the database is created, take note of the database name, SQL login, password, and server name. This information will be used for the WordPress configuration in the next step.

 

Install WordPress on SQL Server

I used WAWS to deploy my WordPress site. This is the fastest and easiest way to deploy a web site in Azure, because I do not need to bother with the underlying infrastructure. If you have not created one, create a Web Site by selecting Compute > Web Site > From Gallery.

create_WAWS

The next screen shows the different types of web applications available in Azure Web Sites. Select Brandoo WordPress under the Blog category. This distribution runs on SQL Server instead of the default MySQL.

brandoo-wordpress

The next dialog will ask for the web site URL and database user name. Make sure to specify the same user name and password created earlier so you can connect to the existing SQL database.

createwp-2

You will be connected to the existing database, and the WordPress installation should kick in. Once it is completed, the website is listed in the web sites section as follows:

createwp-4

You can browse to the web site URL, for example http://mywpblog.azurewebsites.net, and the new WordPress blog will load. As proof that it really runs on SQL Server, open SSMS and connect to the SQL database using the server name and credentials created earlier.

ssmsdb
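A quick look at the table list from a query window should also show the familiar WordPress schema. A minimal sketch, assuming the default wp_ table prefix:

-- List the WordPress tables in the SQL database (wp_ is the WordPress default prefix)
SELECT name
FROM sys.tables
WHERE name LIKE 'wp[_]%'
ORDER BY name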

Now I have a new WordPress site running on SQL Server. The next step is to import all of my blog content from the old site to the new one. This is pretty straightforward using the WordPress importer/exporter plugins.

And http://choirulamri.org now runs on WordPress with SQL Server in Azure :).

Analyzing Perfmon with SQL Query (Part 2)

I wrote about how to import Perfmon files into SQL Server in the previous post; now let’s see how to analyze them using SQL queries. I will show a couple of examples that I use regularly. The exact query may vary depending on the requirements and server environment. The most common questions are as follows:

  • How high is CPU utilization?
  • Do I have enough memory in the system?
  • Is there a bottleneck in the disk?

A quick query helps give a high-level idea, but in-depth analysis is required to pinpoint what is actually happening in the system. High CPU does not always mean a CPU problem until the other related counters have been examined in detail. In many cases, we need to investigate SQL Server internals using DMVs or Profiler to get a more comprehensive view. I will blog about this separately, so let’s focus on how to mine the imported Perfmon data.

 

Simple Query: How’s my CPU and Memory?

I want to see the CPU and memory information in the system. This is quite simple to do by querying the tables as follows:

SELECT ct.CounterID, RTRIM(ct.ObjectName), RTRIM(ct.CounterName), 
		CASE
			WHEN ct.[InstanceName] IS NULL THEN ''
			WHEN ct.[InstanceName] IS NOT NULL THEN RTRIM(ct.InstanceName)
		END AS "InstanceName", 
			AVG(cda.CounterValue) as Average,
			Min(cda.CounterValue) as MinValue,
			MAX(cda.CounterValue) as MaxValue,
			STDEV(cda.CounterValue) as StdValue,
			COUNT (cda.CounterValue) as Freq ,
			di.DisplayString
FROM CounterData cda
	JOIN CounterDetails ct ON cda.CounterID = ct.CounterID
	JOIN DisplayToID di ON cda.GUID = di.GUID
WHERE ((ct.CounterName = '% Processor Time' AND ct.ObjectName = 'Processor')
	OR [ct].[CounterName] = 'Available MBytes')
GROUP BY ct.CounterID, RTRIM(ct.ObjectName), RTRIM(ct.CounterName), ct.InstanceName, di.DisplayString
ORDER BY di.DisplayString, RTRIM(ct.ObjectName), RTRIM(ct.CounterName)

Here is the query result from my sample data:

counter-simple

It’s clear that average CPU utilization after tuning is lower than the baseline: it goes down from 19.4% to 13.3%. It’s also useful to get not only the average, but also the MIN, MAX, and standard deviation. This gives a better picture of spikes that might happen during sampling. High variation means the counter goes up and down frequently.

I usually use the following formula to assess system variation: Average + 2 x STDEV. Recalling basic statistics, roughly 68% of a normal population falls within one standard deviation of the mean, and roughly 95% within two, so this gives a reasonable upper estimate for the counter. I’m not a statistics expert though; you may refer to proper sampling theory for more accuracy.
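That upper estimate can be computed directly in the query. A minimal sketch against the same tables, for the CPU counter only:

-- Rough upper bound per counter: mean plus two standard deviations
SELECT ct.CounterName,
       AVG(cda.CounterValue) AS Average,
       AVG(cda.CounterValue) + 2 * STDEV(cda.CounterValue) AS UpperEstimate
FROM CounterData cda
	JOIN CounterDetails ct ON cda.CounterID = ct.CounterID
WHERE ct.CounterName = '% Processor Time' AND ct.ObjectName = 'Processor'
GROUP BY ct.CounterName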

 

Filtering Data only for Working Hours

Let’s continue by adding a filter to the query above. Assuming the busy hours are in the daytime, I can add a filter limiting the CounterDateTime column. There is a small caveat here: the column is a char column padded with whitespace, so a direct datetime conversion will fail. Combining RTRIM or LEFT with CONVERT to force it into datetime solves the problem.

----filter by working hours 9AM to 5pm
SELECT ct.CounterID, RTRIM(ct.ObjectName), RTRIM(ct.CounterName), 
		CASE
			WHEN ct.[InstanceName] IS NULL THEN ''
			WHEN ct.[InstanceName] IS NOT NULL THEN RTRIM(ct.InstanceName)
		END AS "InstanceName", 
			AVG(cda.CounterValue) as Average,
			Min(cda.CounterValue) as MinValue,
			MAX(cda.CounterValue) as MaxValue,
			STDEV(cda.CounterValue) as StdValue,
			COUNT (cda.CounterValue) as Freq ,
			di.DisplayString
FROM CounterData cda
	JOIN CounterDetails ct ON cda.CounterID = ct.CounterID
	JOIN DisplayToID di ON cda.GUID = di.GUID
WHERE ((ct.CounterName = '% Processor Time' AND ct.ObjectName = 'Processor')
	OR [ct].[CounterName] = 'Available MBytes')
	AND DATEPART(HOUR,
					(CONVERT(datetime, LEFT([CounterDateTime],19)))) BETWEEN 9 AND 17
GROUP BY ct.CounterID, RTRIM(ct.ObjectName), RTRIM(ct.CounterName), ct.InstanceName, di.DisplayString
ORDER BY di.DisplayString, RTRIM(ct.ObjectName), RTRIM(ct.CounterName)

 

Using PIVOT to compare Baseline and Tuning data
I have two sets of counter data in my database: BaselineSept2013 and TuningDec2013. Comparing them gives a better view to narrow down the analysis. There are a few important counters related to SQL Server performance, and I usually pull the following for a quick analysis:

  • % Processor Time
  • Avg. Disk sec/Read
  • Avg. Disk sec/Write
  • Available MBytes
  • Page life expectancy
  • Index Searches/sec
  • Full Scans/sec

These counters are related to one another, so it’s important to understand their relationships and not judge system performance by looking only at CPU. Brent has a very good post about this. Jonathan also wrote a nice explanation of fundamental SQL Server counters in his post.

I can use a CTE as temporary storage before the PIVOT, selecting only the seven counters useful for a quick SQL Server analysis. The following query compares performance data between the baseline and tuning data sets:

--summarize Average Value with pivot, and compare between Baseline and Tuning dataset
WITH CTEPerfData(SourceData, CounterID, ObjectName,	CounterName, 
						InstanceName, CounterValue)
AS
(
	SELECT di.DisplayString, ct.CounterID, RTRIM(ct.ObjectName), RTRIM(ct.CounterName), 
			CASE
				WHEN ct.[InstanceName] IS NULL THEN ''
				WHEN ct.[InstanceName] IS NOT NULL THEN RTRIM(ct.InstanceName)
			END AS "InstanceName", 
				cda.CounterValue
	FROM CounterData cda
		JOIN CounterDetails ct ON cda.CounterID = ct.CounterID
		JOIN DisplayToID di ON cda.GUID = di.GUID
	WHERE ((ct.CounterName = '% Processor Time' AND ct.ObjectName = 'Processor')
		OR ([ct].[CounterName] = 'Avg. Disk sec/Read' )
		OR ([ct].[CounterName] = 'Avg. Disk sec/Write' )
		OR [ct].[CounterName] = 'Available MBytes'
		OR [ct].CounterName = 'Page life expectancy'
		OR [ct].CounterName = 'Index Searches/sec'
		OR [ct].CounterName = 'Full Scans/sec')
			AND DATEPART(HOUR,
					(CONVERT(datetime, LEFT([CounterDateTime],19)))) BETWEEN 9 AND 17
)
SELECT * FROM CTEPerfData
		PIVOT (AVG(CounterValue) FOR SourceData IN ([BaselineSept2013], [TuningDec2013])
											) as pvt

The query yields the following results, so I can compare the important counters side by side between the baseline (Sept) and tuning (Dec) data.

perfmon-pivot

It’s worth explaining the relationships between counters and how to derive conclusions about system performance in the next post. In the meantime, you may want to read Jimmy May’s post about his VitalSign spreadsheet.

Analyzing Perfmon with SQL Query (Part 1)

Performance counters are very useful for analyzing OS and SQL Server performance. Most DBAs know how to collect performance counters, so I won’t cover that in this post; just in case you don’t, check Brent’s post here. Once the counters are collected, the next step is to analyze them to find performance problems. I posted about Perfmon analysis with PAL, but in this post I will show a different method: importing the data into SQL Server and querying it with T-SQL.
Using T-SQL queries has several advantages over PAL:

  • You have full control over which counters to analyze and how to summarize them (average, min, max, etc.)
  • You can compare several sets of data collections and see their trends; you can filter the query and build a crosstab report to see the trend across different time intervals

The steps are as follows:

  1. Create an empty database in SQL Server
  2. Set up an ODBC System DSN
  3. Import the Perfmon data into SQL Server with the relog tool
  4. Analyze the data using T-SQL

 

Create empty database in SQL Server

Create an empty database in SQL Server; this will be the storage for the imported data. Sizing is important because the data is quite big: in my case, 1GB of Perfmon files became 3GB after being imported into SQL Server. That is massive growth, but you will understand why after seeing the information stored in the database. I usually pre-size the data file to 5GB with 100MB growth to improve performance. Don’t forget to change the recovery model to simple to prevent unexpected growth of the log file.
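For reference, a minimal sketch of such a database; the file paths and log size below are examples only, so adjust them to your environment:

-- Pre-sized database for the imported Perfmon data
CREATE DATABASE PerfDB
ON PRIMARY
    (NAME = PerfDB_data, FILENAME = 'D:\SQLData\PerfDB.mdf',
     SIZE = 5GB, FILEGROWTH = 100MB)
LOG ON
    (NAME = PerfDB_log, FILENAME = 'D:\SQLData\PerfDB_log.ldf',
     SIZE = 512MB, FILEGROWTH = 100MB)
GO
ALTER DATABASE PerfDB SET RECOVERY SIMPLE
GO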

 

Setup ODBC System DSN

The next step is to create an ODBC DSN to connect to SQL Server. This is straightforward, using the ODBC Administrator in Control Panel. I chose the 64-bit version because both my client machine and SQL Server are 64-bit. Also make sure to select System DSN instead of the other DSN types.

odbc-1

For some reason, relog only works with the legacy SQL Server driver, not SQL Server Native Client. So choose the “SQL Server” driver instead of SQL Server Native Client when selecting the driver type.

odbc-2

The next screen will ask for the connection name, SQL Server instance name, and credentials to connect to SQL Server. Another important note: set the default database to the one created in the previous step. I created the empty database PerfDB, so I specify that name as my default database here.

odbc-3

 

Once created, test the connection to make sure it’s properly configured.

My machine is Windows 8.1 64-bit with SQL Server 2012 (also 64-bit). I created a 64-bit System ODBC DSN with SQL Server driver version 6.xxx, and relog connected successfully to my SQL Server. Initially I used SQL Server Native Client in 32-bit and 64-bit combinations, but always encountered an error when executing the relog command.

 

Import Perfmon to SQL Server with relog tool

Once ODBC is configured, the next step is to import the Perfmon data into SQL Server. Relog is installed by default with Windows, so there is no need to add another tool. The syntax is as follows:

relog perfmonFileName.blg -f SQL -o SQL:ODBCSystemDSN!CounterIdentifier

The CounterIdentifier is any name used to tag your data collection set. For example, if I have three Perfmon files that were all collected during August 2013, I can group them together under one identifier: DataAugust2013. This is very useful when I want to group several Perfmon files under different categories: I can collect baseline data and import it under a “Baseline” identifier, then collect another set and tag it with a “Tuning” identifier.

To run the relog command, open a command prompt and navigate to the location of the Perfmon file (to make it easy, be in the same folder as the Perfmon file), then run the following command:

relog1

Let me explain the command here:

  • The name of the Perfmon file is DataCollector01.blg
  • The output format is SQL Server, specified with the -f parameter
  • The ODBC System DSN name is relog5
  • The identifier is BaselineSept2013
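Putting those values into the syntax above, the command in the screenshot is equivalent to:

relog DataCollector01.blg -f SQL -o SQL:relog5!BaselineSept2013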

The command imports all the counter data into SQL Server. If you want to filter the data by applying additional parameters, read the full relog tutorials here and here.

 

See what we got after the import

Let’s connect to PerfDB and see what we have after the import: there are three tables created inside the DB.

relog_db

 

Run the following simple queries to see what’s in each table:

SELECT * FROM [dbo].[DisplayToID]

SELECT TOP 3 * FROM [dbo].[CounterDetails]

SELECT TOP 3 * FROM [dbo].[CounterData]

 

The queries yield the following result:

relog_db1

As you can see here, I used BaselineSept2013 as the identifier, and it’s reflected in the table under the DisplayString column. I also imported another set of files and named it TuningDec2013. So I have two groups of imported files that I can compare to see the improvement between the baseline and tuning periods.

Now let’s get to the interesting part: querying the imported data to analyze SQL Server performance. I will cover this in Part 2, stay tuned :).


SQLServer UG: Managing your tempdb

Updated on May 23, 2013

Tempdb is a simple piece of the stack that is often overlooked in SQL Server planning. Many DBAs ask for more CPU, RAM, and storage but leave tempdb sitting on drive C. Tempdb can be a big bottleneck in system performance, because it holds many of the things SQL Server needs in order to operate.

What’s in tempdb, actually?

Here is a short list of tempdb contents:

  • User objects: your @tableVariable and #tempTable are two common examples
  • Internal objects: required for SQL Server internals to run, but not directly created by users. Examples: worktables for hash joins and index rebuild maintenance
  • Version store: obviously, when snapshot isolation is enabled, SQL Server needs to keep the version store in tempdb

To summarize: when application queries use a lot of temporary operations, tempdb grows very fast as well. How do we forecast tempdb size? Sad to say, but true: there is no exact formula to calculate it.
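While there is no sizing formula, you can at least see what is consuming tempdb right now. A minimal sketch using the sys.dm_db_file_space_usage DMV, which breaks usage down into the same categories listed above:

-- Current tempdb usage by content type (page counts are 8 KB pages)
SELECT
    SUM(user_object_reserved_page_count)     * 8 / 1024.0 AS UserObjectsMB,
    SUM(internal_object_reserved_page_count) * 8 / 1024.0 AS InternalObjectsMB,
    SUM(version_store_reserved_page_count)   * 8 / 1024.0 AS VersionStoreMB,
    SUM(unallocated_extent_page_count)       * 8 / 1024.0 AS FreeMB
FROM tempdb.sys.dm_db_file_space_usage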

Sunil Agarwal from the SQL product group has written a series of posts on tempdb monitoring and maintenance. I suggest you read them for a better understanding. His post collection is here: http://blogs.msdn.com/b/sqlserverstorageengine/archive/tags/tempdb/

I presented this topic to the SQL Server group at the Microsoft office in Jakarta on February 28, 2013. Below is a summary of tempdb tuning and optimization:

tempdb

 

The complete PPT and sample code are available for download here:

Jakarta–SQL Server 2012 Community Launch

I’m organizing a community event together with SQLServer-Indo, MUGI, and @meisariarvini, the new SQL PM at MS Indonesia. Now we have also invited @cfrandall to come to town! The official invitation is below, translated from the original Bahasa Indonesia.


SQL Server 2012, the latest version of SQL Server, has arrived. Community members will naturally want to know more about the new capabilities it offers, such as how SQL Server 2012 supports Hybrid IT, a combination of cloud computing and on-premise computing. SQL Server 2012 also opens the door for applications built on non-Microsoft technologies to use these advanced capabilities, for example for PHP application developers. This SQL Server 2012 Community Launch is aimed at developers and IT professionals who want to learn about these capabilities in more detail. Take the opportunity to meet the speakers and discuss with them directly.

Location:
Indonesia Stock Exchange (Bursa Efek Indonesia) Building, Tower II, 18th Floor,
PT. Microsoft Indonesia, Microsoft Auditorium.

Date/Time:
Saturday, 7 July 2012, 08.30 – 16.00

Audience:
Developers and IT Pros

 

Keynote Speaker Profile

 

Chris Randall
Director of Training, SolidQ, North America.

Chris Randall is the Director of Training at SolidQ, North America. Chris is known for his public speaking at technology conferences around the globe, and specializes in training, courseware workbooks, T-SQL, SQL Server Integration Services, SSAS, and SSRS. A SolidQ mentor, Chris has worked with SQL Server since 1996 and is active as a trainer, consultant, and speaker. He has contributed heavily to training content on T-SQL, SSIS, and Business Intelligence architecture, and has taught thousands of students in classrooms and at world-class conferences, including SQL PASS, SQL Saturday, and various Microsoft Certified Trainer (MCT) conferences. Chris holds MCT and MCITP certifications across the SQL Server 2008 portfolio, and previously served as a board member of the Denver SQL Server User Group, a PASS chapter. Beyond his teaching and consulting duties, Chris is very excited to visit Indonesia and present the opportunities that come with SQL Server 2012.

Follow Chris on Twitter: @cfrandall and on his blog: http://cfrandall.com/

 

Here are the topics that will be covered at this SQL Server 2012 Community Launch:

Agenda

Main Session

09.00 – 09.15   Opening
                Manish Chopra, M&O Director, Microsoft Indonesia

09.15 – 10.00   Case Study: Implementing SQL Server 2012 Business Intelligence

10.00 – 10.45   Notes from the Field: PHP Development with SQL Server
                Bastian Harahap, CTO, Urbanesia

10.45 – 12.15   Embracing Big Data with SQL Server 2012
                Chris Randall, Director of Training, SolidQ – North America

12.15 – 13.00   Lunch

Breakout Sessions

13.00 – 14.00   IT Pro: Best Practices Running SQL Server 2012 on Hyper-V
                Hotma Roland Pasaribu, MCT, Microsoft User Group Indonesia

                Developer and BI: Build Modern Windows 8 Metro Apps and SQL Azure
                Rudy Setyo Purnomo, CEO, Erudeye

14.00 – 15.00   IT Pro: AlwaysOn High Availability Groups in SQL Server 2012
                Kiki Rizki Noviandi, Microsoft MVP (SQL)

                Developer and BI: Powerful Data Visualization with PowerView
                Hendra Eka Putra, Microsoft MVP (SQL)

15.00 – 15.15   Break

15.15 – 16.15   IT Pro: Monitoring SQL Server 2012 with SCOM
                Faisal Susanto, Senior Consultant, Microsoft Services

                Developer and BI: Performance Tuning in SQL Server 2012
                Choirul Amri, Senior Consultant, Microsoft Services

16.15 – 16.30   Closing remarks and door prizes


Registration

Registration can be done by sending an email with the following information: full name, company, and a contact phone number. The event is free, but since capacity is limited, community members are asked to include complete information so attendance can be confirmed. Please register no later than 3 July.

Community members can register through the following contacts:

SQL Server User Group Indonesia: Cahyo Listyanto | Email: sqlserverindo@live.com

Microsoft User Group Indonesia: Abdul | Email: Abdullah0813@live.com
