General discussion of SQL topics, aimed at the novice-to-intermediate Microsoft SQL Server user. Currently focuses on SQL Server 2005.

Wednesday, June 18, 2008

Listing User-Defined Stored Procedures in SQL Server 2005

You can easily obtain a listing of the User-Defined Stored Procedures in any of your databases. In fact, Microsoft has made this information very easy to access: you only need to query a catalog view called "sys.procedures". This can be done with a small piece of T-SQL, such as the following...

USE [your_database_name_here];
GO
SELECT * FROM sys.procedures
ORDER BY [name];
GO

Your results will vary based on what Stored Procedures you, or anyone else with access to your database, have created. The following is a sample of the results I obtained by running this code on my 'model' database.

[Image: sample results — User-Defined Stored Procedures]

The most important thing to keep in mind with this TSQL code is that each database may contain different User-Defined SPs.

Do you want to get a listing of EVERY single Stored Procedure in your database(s)? If so, then be sure to check out SQLServerCentral.com for one of my upcoming articles! I'll post the direct link to the article as soon as it becomes available!

You'll want to check this article out when it is published...you'll NEVER have to search the Internet again to find out what SPs are in SQL Server 2005!! UPDATE: You can read this article on SQLServerCentral.com (http://www.sqlservercentral.com/articles/T-SQL/63471/). Please leave feedback if you have a few moments.

TIP:

Do you want all of the databases you create from here on (not previously created databases) to include a specific stored procedure or set of stored procedures? If so, create the desired Stored Procedure(s) in your 'model' database. Any new database will then get the SP(s) you created, because new databases are based on the 'model' database!
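For example, here is a minimal sketch (the procedure name and body are hypothetical; any procedure will do):

USE model;
GO
-- Hypothetical utility procedure. Every database created after this point
-- will contain a copy, because new databases are cloned from 'model'.
CREATE PROCEDURE dbo.usp_ServerInfo
AS
BEGIN
    SELECT @@SERVERNAME AS ServerName, @@VERSION AS SqlVersion;
END
GO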

 

-----------------------

Legal Disclaimer/Warning
Please remember that any SP whose outcome or effect you are unsure of should be run only on test systems; never use anything in a production environment that has not been thoroughly tested. I am not encouraging you to use any type of Stored Procedure (documented or undocumented); I am only describing a method that can be used to obtain a listing of all procedures found within SQL Server 2005. Microsoft strongly states that undocumented Stored Procedures, Extended Stored Procedures, functions, views, tables, columns, properties, or metadata are NOT to be used and have no associated support; Microsoft Customer Support Services will not support any databases or applications that leverage or use any undocumented entry points. Please refer to http://msdn.microsoft.com/en-us/library/ms166021.aspx for Microsoft's legal disclaimer and further information on Microsoft's support for the use of stored procedures.

Thursday, May 22, 2008

Optimizing Indexes

Summary:

Optimizing indexes is quite a complicated subject. There are many techniques, and what makes it most difficult is that there are so many different scenarios for when and how to optimize your indexes. Instead of trying to describe a single method of optimization, I am going to discuss how to identify when optimizing indexes should be considered and where to go to find the different options.

The first step is NOT to just rebuild all indexes, or to defrag them all. Some people believe that rebuilding all indexes is the way to solve this; don't listen to them!! PLEASE DON'T!!! I'm rarely one to advise against listening to other people; most people can help you learn what's good or bad. But if you go the route of rebuilding and defragging all of your indexes, you can in fact cause additional problems that you didn't have before; in particular, using system resources that don't need to be used and causing unneeded fragmentation at the OS level! Any decent DBA is always concerned with system resource usage; remember that system resources are a precious commodity and should never be wasted.

The first step is to analyze the database. The next step is to identify the queries that can or will benefit from optimizing the index(es). The third step is to optimize the index. The final step is to periodically review these methods to identify when optimizing will be required again; this is an ongoing project and will ultimately mean re-iterating through your queries to identify when an index has become a candidate for optimization. There is no cut-and-dried method to warn you, to prevent indexes from becoming out of tune, or to auto-optimize your queries and indexes.
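To give a concrete picture of the "optimize" step, here is a hedged sketch of the two main remediation commands in SQL Server 2005 (the index and table names are hypothetical):

-- Reorganize: lighter-weight, always an online operation.
ALTER INDEX ix_Customers_Addresses ON dbo.Customers REORGANIZE;
-- Rebuild: heavier; drops and re-creates the index structure.
ALTER INDEX ix_Customers_Addresses ON dbo.Customers REBUILD;

Which one is appropriate depends on the analysis described below; reorganizing is usually favored for lightly fragmented indexes and rebuilding for heavily fragmented ones.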

Analyzing the Database:

So, how do you analyze a database to determine whether an index needs to be rebuilt or defragged? Well, first you need to understand how indexes are built and what causes them to become fragmented. Review my blog entry "Introduction To Indexes" to learn how indexes are built. Also, take the time to review the white paper "Microsoft SQL Server 2000 Index Defragmentation Best Practices"; even though it was written for SQL Server 2000, it still applies to SQL Server 2005. It is a long read, but it will help well beyond the scope of this blog entry.

Here, in a nutshell, is how to determine when your index is fragmented. Use SQL Profiler to identify poorly performing queries; in particular, use the template "SQLProfilerTSQL_Duration", which already contains the traces needed to identify the offending queries. Once the queries are identified, you can start looking into which indexes these queries are accessing, especially queries that retrieve ranges instead of singletons. These queries are at the highest risk of having fragmented indexes; remember, though, that this is just a method for finding potential problem areas. Your own judgment will be best for the final determination.
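In SQL Server 2005 you can also query the sys.dm_db_index_physical_stats DMV directly to measure fragmentation. A minimal sketch (the 10% threshold is just a commonly cited rule of thumb, not a hard rule):

SELECT OBJECT_NAME(ips.object_id) AS TableName,
       i.name AS IndexName,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id
   AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 10  -- rule-of-thumb threshold
ORDER BY ips.avg_fragmentation_in_percent DESC;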

When to Consider Optimizing Indexes:

The first clue that you should consider optimizing your indexes is when you observe performance degradation and have no obvious culprit. Keep in mind that performance degradation does not immediately indicate index fragmentation; that can only be determined by properly analyzing your database. Next, identify which queries are using the most I/O; these are the next candidates. Then consider queries with workloads that are suspected to benefit from index defragmentation; this can be quite difficult and should be determined carefully.
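As a hedged sketch, SQL Server 2005's sys.dm_exec_query_stats DMV can surface the heaviest readers since the last service restart:

SELECT TOP (10)
       qs.total_logical_reads,
       qs.execution_count,
       st.text AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_logical_reads DESC;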

All of this is covered in the MSDN white paper "Microsoft SQL Server 2000 Index Defragmentation Best Practices" mentioned earlier. The paper goes into very good detail on how to make these determinations, which will probably be enough to get you through while you gain experience.

Getting Help:

Sometimes it's nice to have someone more experienced help out, if that is how you feel then go with that feeling and seek out that someone.

If you don't have access to an experienced DBA, then seek advice from trusted websites, forums, and/or discussion groups. Remember that when seeking advice, the quality of the answer can only match the quality of the information you provide. Simply stating that you have a database that needs indexes rebuilt or defragged will most likely get you answers telling you to use DBCC commands or some other commonly used index-rebuilding command. The purpose of seeking advice is to provide detailed information so you get an answer specific to your scenario; so be sure to provide as much information as possible without breaking any company policies.

Conclusion:

As you can see, Indexes can be fairly simple to optimize. It's determining when to optimize and what to optimize that becomes difficult.

There are methods to fine-tune the automatic handling of your indexes; this is covered in the white paper listed in the resources below. You'll always find different opinions and experiences; embrace all you can, and mix and match what works best for you and your situation. There is no one-size-fits-all for optimizing indexes, just as there isn't one for database solutions. It's all about customizing to your needs and utilizing your available resources to make your work easier and more enjoyable.

I can't stress enough that reading the "Microsoft SQL Server 2000 Index Defragmentation Best Practices" white paper will help tremendously. It covers the topic so well that I had originally planned to provide tips on how to identify the queries that require index rebuilds and where to find additional help on the topic; during my research I came across this white paper, and it covered everything I had planned to cover and more!

Until next time, Happy Coding!!

Additional Resources:

Microsoft SQL Server 2000 Index Defragmentation Best Practices (http://www.microsoft.com/technet/prodtechnol/sql/2000/maintain/ss2kidbp.mspx)
SQL Server Best Practices (http://msdn.microsoft.com/en-us/sqlserver/bb671432.aspx)
How To: Optimize SQL Indexes (http://msdn.microsoft.com/en-us/library/ms979195.aspx)
Database Journal: Reducing SQL Server Index Fragmentation (http://www.databasejournal.com/features/mssql/article.php/2238211)
SQLServer Performance: SQL Server Index Fragmentation and Its Resolution (http://www.sql-server-performance.com/articles/per/index_fragmentation_p1.aspx)

Tuesday, May 6, 2008

CREATE INDEX (Transact-SQL)

Summary:

This covers the syntax and some examples on how to create an Index for a table. I’ll finish this blog entry with an alternative method for creating an index using SQL Server Management Studio (SSMS).

Syntax:

The following is the main syntax for CREATE INDEX from Books Online (BOL). You can view the entire syntax by visiting the referenced link.
Reference: http://msdn.microsoft.com/en-us/library/ms188783.aspx
NOTE: If you do not understand how to read this syntax, please review my blog entry "Understanding MSDN Syntax Conventions".

CREATE [ UNIQUE ] [ CLUSTERED | NONCLUSTERED ] INDEX index_name
    ON <object> ( column [ ASC | DESC ] [ ,...n ] )
    [ INCLUDE ( column_name [ ,...n ] ) ]
    [ WITH ( <relational_index_option> [ ,...n ] ) ]
    [ ON { partition_scheme_name ( column_name )
         | filegroup_name
         | default
         }
    ]
[ ; ]

UNIQUE – No two rows may share the same index key value.

CLUSTERED | NONCLUSTERED – When using CLUSTERED, the logical order of the key values determines the physical order of the rows in the table. You may only have a single clustered index per table. CLUSTERED indexes should be created before any NONCLUSTERED indexes. In cases where the CLUSTERED index is created after the NONCLUSTERED index, the NONCLUSTERED indexes will be rebuilt.
NOTE: NONCLUSTERED is the default; if you omit the [CLUSTERED | NONCLUSTERED] argument, the index is created as NONCLUSTERED. (Don't confuse this with a PRIMARY KEY constraint, which defaults to a clustered index when no clustered index already exists on the table.)

index_name – Gives your index a name. A common practice is to prefix the name with "idx_" or "ix_". An example of an index for the last names in a Customers table might be "idx_Customers_Last_Names" or "ix_Customers_Last_Names".

<object> – The name of the table the index is being created on. This can be up to a four-part name, such as ServerName.DatabaseName.Schema.TableName; as with other commands, you do not always have to fully qualify (type all four parts of) the <object>. You only need enough to identify the table without causing confusion. For example, if you have two tables called Customers, you'd at minimum need to qualify the table with the schema name, such as Colorado.Customers and California.Customers.

column_name [ASC | DESC] – Specifies the column, or columns, to build the index on. You must specify at least one column, and in SQL Server 2005 an index key may contain at most 16 columns (totaling no more than 900 bytes). Typically it is suggested, for CLUSTERED indexes, to use only the columns that can be logically ordered, such as the First_Name and Last_Name columns. In NONCLUSTERED indexes you usually want to add 'helper' columns, such as Street_Address and Phone_Number. See my blog entry "Introduction to Indexes" for additional details on choosing the appropriate columns for indexing.
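As a quick, hedged illustration of those 'helper' columns: the INCLUDE clause shown in the syntax above lets a NONCLUSTERED index carry extra non-key columns (the table and column names here follow the Customers examples used later in this entry):

CREATE NONCLUSTERED INDEX ix_Customers_Last_Names
ON Customers (Last_Name)
INCLUDE (Street_Address, Phone_Number);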

<relational_index_option> – You can specify additional options with this argument; that goes beyond the scope of this blog entry, and I might cover it in a later one. For now, if you want specific details on which options you can use with this argument and how, review the BOL syntax at the referenced location above.

partition_scheme_name (column_name) | filegroup_name | default – You can specify the partition scheme or filegroup on which the index is stored. This also goes beyond the scope of this blog entry, and I might cover it in a later one. For now, review the BOL syntax at the referenced location above for details.

Simple Terminology:

As you can see from just the bit of syntax I've posted, this can be quite a complicated T-SQL command. Yet it will be among the most commonly used throughout the creation and life span of your tables. You'll constantly find yourself tweaking your indexes as your needs change and as the data within the database changes. You can think of an index as a child of your table; as with all children, it will grow in complexity and evolve as its experiences grow.

As with the Customers table example, you might originally store only the customer's first name, last name, street address, city, state, and zip. So you might have a CLUSTERED index on the last name and first name columns, and maybe a NONCLUSTERED index on the street address.

Now, let's say a couple of years later you find that you want to store the customer's phone number, fax number, and maybe mailing-list columns with an opt-in designator for your mailing lists. Then you decide it would be nice to look up customers by their phone numbers, or to find the customers who've opted in to certain mailing lists. You might then create additional NONCLUSTERED indexes to make these searches more efficient, especially on the mailing-list opt-in columns (assuming you have hundreds of mailing lists; following normalization rules, this data should really be in a separate table, but for this example it lives in the Customers table).

Now, let's say a year later you decide to normalize your Customers table and separate the mailing-list columns into a "Mailing_Lists" table. Obviously the mailing-list indexes won't be needed in the Customers table, so you'd drop them; and most likely you would have created the appropriate indexes in the "Mailing_Lists" table when you created it.

As you can see, indexes can be tweaked for different reasons. I most commonly look into tweaking indexes when large queries are consuming resources; I can usually find an index that could be added or modified to improve the efficiency of the search results being returned. There are many methods for determining when to use an index and how to optimize it; I'd suggest trial and error (on test systems only) as a first option. I'd also suggest reading up on optimizing queries and/or SQL Server performance (in that order). Queries are what drive your data; they are what give you your results.

It's a good idea to get in the habit of gathering performance information, especially on large databases, and to periodically review the usage of your indexes and adjust them as appropriate. There is no perfect formula, but there are many good methods and discussions on how to achieve the best performance. Always be willing to read and try to understand your options, and when possible spend time testing to see how things are affected by your changes. What looks good today could cause problems you won't see until a few days have passed...this is why I must stress: TEST, TEST, TEST!

Example Syntax:

The following creates a UNIQUE CLUSTERED index on the Customers table using the Last Name and First Name columns (note that the order is Last then First because searches will typically filter on the last name, with results then sorted by first name; UNIQUE here assumes no two customers share the exact same full name):

USE myDemoDatabase;
GO
CREATE UNIQUE CLUSTERED INDEX idx_Customers_Names
ON Customers (Last_Name, First_Name);
GO

The following example will create a NONCLUSTERED index on the Customers table using the Street Address column:

USE myDemoDatabase;
GO
CREATE NONCLUSTERED INDEX ix_Customers_Addresses
ON Customers (Street_Address);
GO

The following example creates a UNIQUE NONCLUSTERED index on the Customers table using the customer's phone number column. This ensures that no customer can have the same phone number as an existing customer:

USE myDemoDatabase;
GO
CREATE UNIQUE NONCLUSTERED INDEX idx_Customers_Phone_Numbers
ON Customers (Phone_Number);
GO

If you attempt to enter a new customer with a phone number that already belongs to another customer, you will get a "Msg 2601, Level 14" error stating that you cannot insert a duplicate key.
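For instance, a second INSERT with a phone number already on file would fail (hypothetical rows and columns, assuming the Customers table from these examples):

INSERT INTO Customers (Last_Name, First_Name, Phone_Number)
VALUES ('Smith', 'John', '555-0100');
-- The next statement fails with Msg 2601: duplicate key in idx_Customers_Phone_Numbers.
INSERT INTO Customers (Last_Name, First_Name, Phone_Number)
VALUES ('Jones', 'Mary', '555-0100');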

Also, note that in the second example I used the prefix "ix_", while in the other examples I used "idx_". I use the "idx_" prefix because, for my personal purposes, it means the index is UNIQUE; thus any time I see "idx_Something" I know it is a UNIQUE index and will not allow duplicate keys. I use "ix_" to mean the index is NOT unique and is NONCLUSTERED.

Remember that NONCLUSTERED is the default index type for CREATE INDEX; even so, I strongly recommend stating the type explicitly in every command, for two reasons. First, the intent is clear when reviewing the code at a later time. Second, a default in the current release is not guaranteed to remain the default in future SQL Server releases. The less you leave to be interpreted, the more compatible your code will be with future releases (and, in many cases, backwards compatible as well).

Using SSMS to create your indexes:

You can create indexes within SSMS in several places. The more common areas are the Database Engine Tuning Advisor, the Table Designer, Database Diagrams, and Object Explorer.

The easiest method, in my opinion, is to create a new index using Object Explorer. In Object Explorer, navigate to the table you want to create your index on. Expand the table by clicking the plus sign immediately to the left of the table icon to show the folders containing that table's objects (Columns, Keys, Constraints, etc.). Right-click the folder labeled "Indexes" and select "New Index..."; this brings up a window called "New Index". Here you can name your index, add its columns, and choose the many options that go along with creating it.

If "New Index..." is grayed out when you right-click the "Indexes" folder, it means you have the table open in "Design" mode. If so, you can either close the "Table Designer" window and then access "New Index...", or right-click anywhere in the "Table Designer" and select "Indexes/Keys...". This brings up a slightly different window, but one that is just as easy to follow for creating your indexes.

To modify or delete an index you have two simple methods. In Object Explorer, you can right-click the index name under the table's "Indexes" folder and select "Delete". Otherwise, in "Table Designer" you can right-click anywhere, select "Indexes/Keys...", highlight the index name you wish to delete in the resulting window, and click "Delete". CAUTION: There is NO confirmation or UNDO in this window; once that button has been clicked, your index is gone. You cannot CANCEL out of the window either, so make sure this is what you want before clicking that button!
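Of course, you can also drop an index with plain T-SQL rather than SSMS; a one-line sketch using the earlier example index:

DROP INDEX idx_Customers_Phone_Numbers ON Customers;
GO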

Conclusion:

Indexes are helpful, simple to create, and very powerful in making your queries and database operate efficiently. Anyone can quickly learn to create, modify, and drop (delete) indexes. Most people spend a fair amount of time reading about indexes when first learning them, because they are so versatile and can provide such powerful results when leveraged correctly.

I suggest, at a minimum, trying to understand how indexes are chosen and how to optimize them; these are the key aspects that make the most difference. I also suggest scheduling a regular review period for the indexes on your most heavily accessed tables and queries. This doesn't need to be daily, weekly, or even monthly; but it should be done periodically, because your data needs and access patterns will evolve as your database evolves.

Until next time, happy coding!

Friday, May 2, 2008

Introduction To Indexes

Summary:

This is a basic introduction to what Indexes are and how to determine what should be indexed. I will cover the basic concept behind indexes, how they are intended to be helpful, why you would want to use them, and how to determine what should be included in your index.

I’ll also cover the differences between Clustered and Non-Clustered indexes, and provide some tips that will help you to know how to differentiate when to use which kind of index, as well as an analogy to break down the differences between Clustered and Non-Clustered indexes in simple terms.

What indexes are and why to use them:

Indexes are intended to help you efficiently find information within your tables; they are meant to help lower the amount of CPU resources needed to find that information, and also to minimize the amount of Input/Output (I/O) used to access it. All of this can result in results being returned much faster.

Indexes in SQL Server can be thought of much like the index in the back of a book or the table of contents in the front. In a book, an index lets you look up a word you are interested in and points you to the page(s) where that word is referenced. A table of contents lets you pick a topic you are interested in and points you to the section covering it; some tables of contents even point you to sub-sections that refine the topic, helping you focus your reading on relevant information more accurately.

Indexes within SQL Server are designed to perform these same functions and provide the same kind of help. Indexes are created either through a T-SQL command or through an interface application that connects directly to the database and supports T-SQL commands (such as SQL Server Management Studio, Enterprise Manager, etc.). NOTE: I will cover "Creating Indexes" and "Tuning Indexes" in separate blog entries. Once indexes are created, SQL Server automatically uses them to help return result sets; there are no options you must specify to take advantage of them...this is all built into SQL Server.

There are some minor differences between SQL Server 2000 and SQL Server 2005; since I primarily use SQL Server 2005, my discussion is based on that version. However, all of this information can be confirmed in Books Online (BOL) for SQL Server 2000. I will try to avoid information that doesn't apply to SQL Server 2000; however, I cannot guarantee that everything applies to previous versions. Please use BOL if you are unsure whether any of this information works with your version of SQL Server.

How SQL Server Retrieves Data without Indexes:

Before we can go into how an index works, we should understand how SQL Server finds data without indexes; this will illustrate their importance. My examples are based on a Customer table that holds basic customer information such as the customer's first and last name, street address, city, state, and zip code. When I use the term "Customer" I am referring to the Customer table; the same goes for "First Name", "Last Name", etc., which reference their respective columns within the Customer table.

When SQL Server retrieves data from a table that does not have any indexes, it performs what is called a "Table Scan". SQL Server goes through each record (column by column, then row by row) to find matching records; after it has gone through the entire table, it returns any matches it found. As you can probably imagine, this is no quick task. It would be similar to picking up a book and deciding to find every page that contains the word "alligator". There might be a page with that word; there might not. In either case, without an index or table of contents you'd have to flip through every page to check. If the book is under 30 pages, that may not be too bad...but what if the book is 1500+ pages? That's a whole different story.

This brings us to Indexes to help lower the amount of time SQL Server needs to spend finding matches in results!

Different Types of Indexes (and the general structure of an index):

There are two basic types of indexes: Clustered and Non-Clustered. Before I get into the specifics of each type and the differences between the types I want to cover the basic structure of an index; and the minor structural differences between the two types of indexes.

Basically, all indexes are built from "index key(s)" for a column (or combination of columns). These are essentially pointers that tell SQL Server where specific words (or data) are contained within a table. Each table is broken into pages (the physical storage of the data; each page is 8 KB and is typically filled in order of entry, not in sorted order). A page can contain hundreds of records, or just a few; because of the size limit of a data page, it all depends on the data types and how many columns are stored for each record. There are formulas that can pinpoint the number of records stored on a data page; that is beyond the scope of this blog entry.

Each index key records the page where the indexed word (or data) is stored. SQL Server uses these keys to determine which page to go to; so if the search term shows up on data page 4 of 27, SQL Server can skip the first 3 pages and search page 4. It halts its search if no other keys indicate data stored on other pages; there are often cases where data is stored across multiple pages (like pages 4, 12, 13, 21, 25, and 26), or a single record spans multiple pages (such as pages 4 and 5). Again, this is beyond the scope of this blog entry.

The important thing to know is that data isn't naturally stored in logical order within the physical file; this is why we need indexes to speed up the search. Since searches aren't predetermined when data is entered, SQL Server cannot keep the data sorted for every possible query. You may have a specific method of sorting the data but search on something else entirely (in a Customer table, you might sort data by the customer's location, like City/State, but want to search for customers whose last name starts with R; this kind of query-and-sort combination can't be predetermined, hence the difficulty of storing data "logically"). Data is basically stored on a first-in basis, and the rows simply accumulate as data is entered or removed. We need indexes to tell us where the data is so that we can easily find it to match our (unknown at entry time) search criteria and sorting methods.

This brings us to how do we tell SQL Server the best method to search if we don’t know it ourselves?! The answer is Clustered and Non-Clustered indexes, some research and knowledge of the data being stored, and a lot of testing (and refined Tuning as the database is used).

Clustered indexes:

Clustered indexes are a logical sorting and storing of index keys, which allows SQL Server to find data very efficiently, based on the columns defined in the clustered index. So why don't we just include every column of every table in the clustered index? Because the more columns included in the clustered index, the more performance you take away from your INSERT, UPDATE, and DELETE statements.

Basically, the idea is that indexes, especially clustered ones, are wonderful for increasing the speed of queries and reports. However, there is a tradeoff: it takes more time to write and/or modify data within the table, because the clustered index needs to update itself.

Imagine a file cabinet holding your customer information; you originally store everything by the customer's last name and then first name. Then you decide that, for retrieving information, it would be quicker to organize the papers within each customer's file by the date they were added (and, for matching dates, by alphabetical title). So now you need to find customer John Smith and the paper that holds his personal address information...you look under Smith, find John, then in his folder look up the Address section, then Personal, and you quickly get the information.

Now, what if you have to update his file to include his birth date? That might go under Personal Information, not Personal Address! So you find the last name, then first name, then the Information section, then the Personal Information page, and add the entry. Wouldn't it have been quicker to just find his file and add the information to the end of his folder? That's where the balance comes in. The question is partly how well your system performs on read queries/reports versus entering/modifying information, but the other part is what will be done more often. Are you going to query the information more often, or update/modify it more often? If you are mostly querying, then erring on the side of extra columns in your clustered index MIGHT be OK; but if you are mostly inserting/modifying, then you want only as many columns as it takes to filter result sets into manageable sections (maybe just name: last name, then first name...or maybe name and location: last name, first name, then city/state). It all depends on what you are doing, how much data is being stored, and how you anticipate the data will be accessed (often, seldom, with reports, lots of updates, lots of inserts, few inserts, etc.).

Now, an important note: you can have only ONE clustered index per table! This raises the question of how to keep data accessible without bogging down the system each time you add/modify/delete data. How do you walk this fine line of optimal performance? Here come non-clustered indexes to the rescue!

Non-Clustered indexes:

You can have up to 249 non-clustered indexes per table; however, just as with clustered indexes, there is such a thing as too many! Non-clustered indexes have their intended usage, which is covered in the Tips section.

Non-clustered indexes are not sorted at the physical data-page layer. This means SQL Server can be pointed to the data page containing the matching data, but the index stops at that point; it is then up to SQL Server to search that page and pick out the data. Remember, a clustered index points to the exact location of the data and is quick because the data itself is sorted. Non-clustered indexes only point to the page; the index keys in the index are sorted, but not the physical data. So there is some performance difference between clustered and non-clustered. Try to think of non-clustered indexes as an alternative to listing every column in a clustered index, and as a method suited to data that is accessed less often or that returns exact matches.

So what does all of this mean? Basically, if you tell SQL Server to find last names starting with 'RE' and a non-clustered index covers the last-name column, the index will point you to the page(s) containing last names starting with 'RE'; but those pages may also hold names starting with 'RA', 'RI', or 'RH', depending on what is actually stored when the query executes. SQL Server then goes through these other rows until it finds the matching results. This is still far faster than an entire table scan (which would start with last name 'A' and end with last name 'Z').
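To make that concrete, the search described above might look like this (assuming the Customers table from my other entries, with a non-clustered index on Last_Name):

SELECT Last_Name, First_Name
FROM Customers
WHERE Last_Name LIKE 'RE%';

With that index in place, SQL Server can seek to the pages covering 'RE' instead of scanning from 'A' to 'Z'.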

Tips (putting this all together):

Now that we have an understanding of what clustered and non-clustered indexes are and how SQL Server uses them, let's look at how to determine what we should index (and what type of index to use). Remember:

  • A table can have only a single Clustered Index
  • A table can have up to 249 Non-Clustered Indexes
  • Clustered Indexes point to the exact location of matching results
  • Non-Clustered Indexes only point to the data page containing matching results, so some scanning will occur when searching for matches (however, much less scanning than having no indexes at all)
  • Clustered Indexes can retrieve data much more efficiently, BUT at the cost of slower INSERTs, UPDATEs, and DELETEs
  • Non-Clustered Indexes use the clustered index key(s) as row locators when a clustered index exists; if none exists, they use the Row ID on the data page to find the matching data

So now let's put this all together in a simple, understandable analogy. I will use a library as our 'database'; each section of the library (such as Romance, History, Computers, etc.) will be our 'tables', and of course each book will be our 'record', or row.

A catalog will contain two types of information. A 'clustered' catalog utilizes the Dewey Decimal System and is stored in the index box in alphabetical order by book title, followed by author name. A 'non-clustered' catalog simply tells us the section (our 'table'), the shelf number where the top shelf is #1 and the bottom shelf is #5 (this simulates our 'data pages'), and finally the title and author of the book (our name columns). It is not sorted, but a separate sheet lists all book titles with author names so we can quickly find the index card # each is stored on.

Now, with this in mind, imagine searching for a book called "James' Awesome T-SQL Blog Book" by James R (no such book exists, by the way). With a 'clustered' catalog, we could quickly look in the title section and find the matching book; if there were two books with this title, we could further refine the result by looking at the author name.

Now imagine the same scenario with the 'non-clustered' catalog. Let's say the book's title is somewhere around index card #100 of 15,000. Since we can quickly review the separate listing of index cards, we can see the card is around #100. So we open the index drawer, find our book's information, and off we go.

Since the 'clustered' index card includes the Dewey Decimal number, we can simply find the closest matching book and quickly jump up or down as needed to the exact book. With the 'non-clustered' index information, we only know the section, title, and shelf number; once we get to the proper shelf, we go through each book one at a time to find the right one. This, of course, is much quicker than starting at the beginning of the section and looking through every book, or even worse, starting at the first book in the library and going sequentially through the entire collection until we find the information (which may not even be there!).

Choosing Type of Index:

When to choose Clustered index and when to choose Non-Clustered indexes…

When to choose Clustered Indexes:

  • Columns that are frequently accessed
  • Columns intended to be stored sequentially (such as Last Name, First Name, etc.)
  • Columns that will be queried in ranges (when you use a WHERE clause with BETWEEN, >, or < type operators)

Once you have determined the columns that should be in your clustered index, it becomes ESPECIALLY CRITICAL to consider which queries will run most often and which MUST have optimal performance. If your clustered-index columns are not in those queries, consider revising the clustered index NOT to include unnecessary columns (create non-clustered indexes for them instead). If those queries require columns that meet the selection tips above and they are not already included, consider revising your index to include them.

When to choose Non-Clustered Indexes:

  • Columns frequently used in WHERE clauses that return EXACT matches
  • Columns that contain many distinct values (such as Last Name or Street Address, but not City, State, or Zip Code)
  • Queries that do NOT return large result sets
  • Columns needed for critical queries that aren't already covered by your Clustered Index, or that may be queried in a non-sequential manner not already covered by your Clustered Index

Keep in mind that once you have created your indexes, whether clustered or non-clustered, you can always revise them to meet your current needs. Also remember that your needs will change, and this often requires revising your indexes to meet those needs!

Conclusion:

Indexes are here to help you; however, they are a complicated concept to master. In most cases it is easiest to figure out which columns qualify for indexing and then simply try them out. Don't be afraid to try a few different combinations of clustered and non-clustered indexes; performance is never one-size-fits-all, and your indexes shouldn't be determined with that mindset.

I would also suggest setting a personal reminder, or adding an item to your occasional checklist, to check your indexes and how the table is using them, and to see whether there is a way to improve query performance with the indexes on that table. Indexes will always be evolving, your queries will always be evolving, and your tables will always be evolving...don't let them evolve without your intervention.

Remember that Speed = Happy Users…to me a Happy User = a Happy DBA! =)

Until next time, happy coding!

Monday, April 21, 2008

A brief introduction to Transaction Logs

Note: This entire article has been published by SQLServerCentral.com. Please find the link to the article at the end of this blog in the "Update" section.

Summary:

This is a very brief introduction to transaction logs. I am currently working on an article (titled "Introduction to the Transaction Log") that will go into a lot of detail, show the ins and outs, and also discuss some good practices (including how truncating works). This blog entry gives you a basic introduction covering what a transaction log is, how it basically works, and its primary use (transaction logs can be used for more than backup/restore procedures; that will be discussed in my upcoming article).

What a transaction log is:

Every single database MUST have at least one transaction log. A database can have more than one transaction log, and a transaction log can be spread over multiple physical files.

A transaction log is a log file (physical and virtual) that records each and every transaction that occurs; this includes data modifications, database modifications, and rollback (restoring) operations. The file records each transaction in sequence using a Log Sequence Number (LSN). Each transaction is appended to the end of the physical log file and always receives an LSN higher in value than the last written LSN. Think of these as journal entries for every action that occurs within your database.

Some of the operations that are recorded within the transaction log are data modifications, database modifications, rollback modifications, the start/end of a transaction, and even creating/dropping of tables and indexes.

Logical Architecture (concept):

Transaction logs operate as if they were a string of log records, each identified by an LSN. The steps used to recover data can differ depending on how the data is logged within the transaction log (this is covered further in my article). Each transaction reserves enough space to support a successful rollback, whether from an explicit rollback request or from an error occurring within the database. The amount of space can vary, but it typically mirrors the amount of space used to store the logged operation. Transaction logs are loaded into virtual log files; this cannot be controlled or set by the administrator (my article will include a good practice to help prevent performance decreases caused by virtual logs).

Physical Architecture (concept):

Transaction logs can span multiple physical files. You can truncate the transaction log to free internal log space (this does not shrink the log file; I'll cover shrinking in my article). The basic concept of the physical file is that records are appended to the end of the log; once the physical end of the log is reached, transactions wrap around to the beginning of the file (assuming there is free space).

If the log file does not contain any free space (internally); there are two possible outcomes:

1) If FILEGROWTH is enabled and there is free disk space on the hard drive, the file automatically grows in accordance with its settings and the new record is appended to the end of the log, OR

2) If FILEGROWTH is NOT enabled, or there is NOT enough free disk space on the hard drive, then SQL Server will return a 9002 error code.
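If you want to keep an eye on how full your logs are before error 9002 ever appears, a quick check is:

DBCC SQLPERF (LOGSPACE);
-- Returns the log size and the percentage of log space used for every database.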

Primary Use of a Transaction Log:

A transaction log is typically used to recover a database to a certain point in time, or to the last successful transaction committed prior to a failure. How the transaction log is used depends greatly on the database's recovery model and the backup plan in place.

Additional Notes:

Transaction logs should be backed up periodically. The size of the log is typically recommended to be about 1½ times the size of your database; for example, if your database is 10 MB, the transaction log would be about 15 MB. Treat this as a very general recommendation; there are many situations where this formula is not appropriate and it should be adjusted to your particular circumstances.

You want to ensure the transaction log is created at the proper size (its expected final size) and has a relatively large growth increment. This provides the best performance and helps ensure proper use of the transaction log (it is much less likely that the log will expand without your knowledge).
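As a hedged sketch of setting a sensible size and growth increment up front (the database and logical file names are hypothetical):

ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_log, SIZE = 15MB, FILEGROWTH = 5MB);
GO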

Conclusion:

Transaction logs are a very important aspect of maintaining and recovering your database. If set properly, and maintained properly, they can give you the additional backup support you need without impacting system performance. They can help to give you peace of mind during daily operations. They can be an integral file to recovering a database to a point in time.

Additional Resources:

MSDN: Introduction to Transaction Logs (http://msdn2.microsoft.com/en-us/library/ms190925.aspx)
MSDN: Transaction Log Logical Architecture (http://msdn2.microsoft.com/en-us/library/ms180892.aspx)
MSDN: Transaction Log Physical Architecture (http://msdn2.microsoft.com/en-us/library/ms179355.aspx)

Update:

Posted on 7/2/08 by James:

SQLServerCentral.com has published the article in full. You can read the article at: http://www.sqlservercentral.com/articles/Design+and+Theory/63350/. You can join the discussion of this article at: http://www.sqlservercentral.com/Forums/FindPost527448.aspx. I hope you enjoy the article and find it helpful!

Wednesday, March 26, 2008

Types of backups

Overview:

There are different objects you can back up using SQL Server; in most cases, when DBAs refer to backups they mean backing up databases. You can also back up the physical files and folders, but that goes beyond the scope of this post. First I'll cover the three "Recovery Models" you can choose for your database (Full, Simple, and Bulk-logged), then the three different types of database backups: Full, Differential, and Transaction Log. I'll conclude with a quick overview of how to create backups for your database.

The most important things to know: you should always have a current backup in place that matches the needs of your company; you should have set procedures that detail how to back up and restore your database; and, most importantly, you should practice your database-restore plan (so you can verify it actually works, confirm the data you think is being backed up really is, and ensure that when that once-in-a-lifetime emergency restore comes you remain calm and collected and get the data restored promptly).

What is a backup? When do I make them?

Backups are copies of your database stored on your hard drive, on a server, on removable media, or in any other type of location (physical and/or virtual). The basic idea is to keep a copy (or copies) of your data in a secure and easily accessible location, and, for business-critical information, another copy stored securely offsite that can be obtained within a reasonable timeframe. Backups that contain all the information stored within your database are called "Full" backups. Most common backups contain entire copies of the database; there are also backups that contain only the data that has changed since your last complete database backup. These are called "differential" and "transaction log" backups; both accomplish similar goals in different ways and have different positives and negatives. I'll cover those later in this posting.

What you are aiming for is the ability to quickly recover from an incident that leaves your current database corrupt or inoperable. Examples: your database was hacked the previous night; the database has somehow become corrupt and cannot be started; data was accidentally deleted by a careless system-maintenance procedure; or, worse yet, the building has burned down or otherwise been destroyed with all the computers and backups in it!

In most cases, you will use a backup copy stored in a secure location within your business for the unforeseen events that can occur at any moment. In extreme cases, you will retrieve your backup from your offsite location for the catastrophic events that force the business to relocate, temporarily or permanently, without notice.

So, now that we know what a backup is and where it is stored, we need to know when and how to make one. The great thing about SQL Server, regardless of the version, is that you can make a backup at any time. SQL Server uses online backup technology, which means your users can be accessing and modifying the database at any time, even during your backup cycle! Do be aware that creating a backup consumes resources such as CPU, memory, and I/O. It is most common to make backups during the business's down cycle, usually the middle of the night for traditional 9-5 companies. For 24x7 companies that are always accessing data, a little more planning is required: determine the data-usage trends and schedule the backups to occur during the lowest peaks, or during a generalized time frame that matches the expected lower usage.

Recovery Models:

There are three recovery models you can choose when you create your database: Full, Simple, and Bulk-logged. Each database can have its own recovery model, and the model can be changed later from within the database.

Full: This model allows you to recover your database to a specific point in time, or to the point of the failure. This model will record every transaction that occurs within your database, as well as the stored data, structure, and every other object; this includes the bulk operations and bulk loading of data. For this model you will typically want to use Full and Differential backup types at a minimum; to maintain a more complete backup solution you would also use the Transaction Log backup type.

Simple: This model allows you to recover only to the point of your last backup; it's very important to understand that this model does NOT allow Transaction Log backups. This model behaves like the 'TRUNCATE LOG ON CHECKPOINT' option, which in effect deletes all old transactions when the database reaches a checkpoint. This is more suitable for system databases, because the transaction log is cleared at each checkpoint.

Bulk-logged: Bulk operations and bulk loads are logged at the most minimal level. This means that during a restore you may need to repeat the bulk operations and bulk loads should the database fail before a Full or Differential backup is taken. The ideal strategy implements the Full and Differential backup types at a minimum and includes Transaction Log backups on a regular schedule.
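Switching models is a single command (the database name here is hypothetical):

ALTER DATABASE MyDatabase SET RECOVERY FULL;   -- or SIMPLE, or BULK_LOGGED
GO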

Types of Backups:

I'll clarify now that I am discussing the SQL Server Management Studio (SSMS) backup options; there are many different methods to create backups, but this discussion is limited to performing file backups using the BACKUP syntax and wizards. Some other commonly used options, which are beyond the scope of this posting, are log shipping, high availability (i.e., redundant hard-drive configurations, standby servers, etc.), and many others using a combination of hardware/software solutions.

The three types of backups that can be accessed from within SSMS are: Full, Differential, and Transaction Log. Here is a brief breakdown of each type:

Full
Creates a backup that contains all objects, system tables, and data from the database; it also includes the portions of the transaction log that are required because the database was in use at the time the backup was created. A Full backup is intended to bring you right back to complete working order as of the point in time the backup was created.

Differential
This type of backup contains only the objects, tables, and data (including transaction log portions) that have changed since the last Full backup. It is very important to understand that you cannot create a Differential backup until you have created at least one Full backup.

On a personal observation note: you'll notice that a Differential is quite a bit faster than a Full backup because it records much less information; however, as with everything else, there is a trade-off. A Differential backup is cumulative: it contains everything that has changed since the last Full backup, so a restore needs the Full backup plus only the most recent Differential (not every Differential ever made). The catch is that the longer you go without a new Full backup, the larger each Differential grows and the longer it takes to create and to restore. Imagine making a daily Differential every single day for two years after a single Full backup; by the first day of the third year, each "daily" Differential is carrying two years' worth of changes.

You can choose to create a Full backup at any time; it then becomes the new base for subsequent Differential backups, and the original Full backup and all Differentials made against it are no longer needed for future restores (keep them only as long as your retention policy requires).
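A minimal sketch of the two backup types discussed so far (the database name and paths are hypothetical):

BACKUP DATABASE MyDatabase
TO DISK = 'Z:\SQLServerBackups\MyDatabase_full.bak';
GO
BACKUP DATABASE MyDatabase
TO DISK = 'Z:\SQLServerBackups\MyDatabase_diff.bak'
WITH DIFFERENTIAL;
GO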

Transaction Log
Transaction Logs are a special type that complements the Full and Differential backup types. These are serialized records of all database modifications made since the last transaction log backup, and they are used during the recovery process to roll back or commit transactions. Unlike Full and Differential backups, a Transaction Log backup records the state of the transaction log at the START of the backup operation. Think of Transaction Log backups as checkpoints you can back up to between Full or Differential backups. A common pattern for OLTP databases with daily Differential and weekly Full backups is to take Transaction Log backups hourly, to help minimize the loss of transactions in case of a failure in the middle of the day. This is very common for live websites with many transactions per hour; instead of losing a full day's worth of transactions, you can narrow the loss to an hour, or in highly volatile situations even down to seconds (though a per-second transaction log backup is very uncommon).
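And the corresponding hourly log backup might look like this (again, names and paths are hypothetical):

BACKUP LOG MyDatabase
TO DISK = 'Z:\SQLServerBackups\MyDatabase_log.bak';
GO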

How do I make a backup?

In SSMS you can easily create a backup of your database by right-clicking the database and selecting "Tasks", then "Back Up...". This opens the "Back Up Database" dialog box, which lets you fill in the appropriate information and then either have SSMS perform the backup or create a script using the scripting wizard from within the dialog.

From the "Back Up Database" dialog box you can choose the backup type, the name of the backup file, and the location for the file (including devices that contain the file location and other information). By clicking "Options" in the left pane, you are brought to another screen with even more specific options for your backups, including creating a new backup set, appending to an existing set, verifying backup reliability, and, for transaction logs, specifying how to handle the log after it is backed up.

Conclusion:

As you can tell, the backup options and wizard are each powerful in their own right. I can't stress enough the importance of having a backup plan and a recovery plan in place now; it can save you time and give you the confidence of knowing that when a disaster occurs you are ready, with reliable backups that can be restored in an instant.

You have now seen that backups can cover almost any situation you may come across and, depending on the need, can back up a database down to the second in the most volatile environments.

This is just the tip of the iceberg; there are many vendors out there that specialize in backup/recovery solutions based on software, hardware, or hardware/software combinations.

With some careful planning, execution, and understanding, you can handle any situation that crosses your path, knowing that in the worst of conditions you are prepared to get your business back up and running faster than they can grasp what just happened!

Until next time....Happy Coding!

Monday, March 10, 2008

BACKUP (Transact-SQL)

This is a simple reference to the "BACKUP" T-SQL syntax. This is probably one of the simplest, most complicated, and yet most important commands to know.

It's simple because you have a few different options. First, you can use SQL Server Management Studio to do your complete backups (full, differential, and transaction log). You can also create a backup using the T-SQL syntax directly, and it is quite simple, even though the full syntax below can make it look scarier.

Here's an example of a very simple T-SQL backup (you can find more at the link to the MSDN Books Online reference below the syntax):

BACKUP DATABASE AdventureWorks
TO DISK = 'Z:\SQLServerBackups\AdvWorksData.bak'
WITH FORMAT;
GO

It's complex because...well, just look at the full syntax! It can get very complex if you want to enable a lot of features, and that can be a very, very good thing when you need to leverage power that the SSMS wizards/built-in tools just can't give you.

It's important because...well, I hope you already know this. If you don't have a backup and something happens to your data, then you are up a creek without a paddle and just about to go over Niagara Falls! If you ever hear someone say that you are backing up too frequently or that your backups are overkill, I'd simply walk away from that person as quickly as possible. If you value your position, and don't want to be the one explaining to the owner (or your boss) why the system went down at 4 AM and $100,000 in transactions weren't saved because you were only backing up daily instead of after EVERY transaction, then make sure you understand backups, how to restore, and what different options are available to you.

OK, enough of the CYA talk. Now let's get to the syntax. As mentioned, it looks scary and complicated; hopefully, with the single usage example above, you see that it is simple to use. Keep in mind that Microsoft has rolled all of your backup needs into one command, so there are many options you won't use unless you are doing, say, a differential backup; likewise with full and transaction log backups.

Keep your backups fresh...treat them like milk: don't let one sit around for too long without checking it, and...

Replace your backups on a regular basis!

Until next time...happy coding!

SQL Server 2005 Books Online (September 2007)

BACKUP (Transact-SQL)

Updated: 1 February 2007

Backs up a complete database, or one or more files or filegroups (BACKUP DATABASE). Also, under the full recovery model or bulk-logged recovery model, backs up the transaction log (BACKUP LOG).

Transact-SQL Syntax Conventions

Syntax

Backing Up a Whole Database
BACKUP DATABASE { database_name | @database_name_var }
  TO <backup_device> [ ,...n ]
  [ <MIRROR TO clause> ] [ next-mirror-to ]
  [ WITH { DIFFERENTIAL | <general_WITH_options> [ ,...n ] } ]
[;]

Backing Up Specific Files or Filegroups
BACKUP DATABASE { database_name | @database_name_var }
  <file_or_filegroup> [ ,...n ]
  TO <backup_device> [ ,...n ]
  [ <MIRROR TO clause> ] [ next-mirror-to ]
  [ WITH { DIFFERENTIAL | <general_WITH_options> [ ,...n ] } ]
[;]

Creating a Partial Backup
BACKUP DATABASE { database_name | @database_name_var }
  READ_WRITE_FILEGROUPS [ , <read_only_filegroup> [ ,...n ] ]
  TO <backup_device> [ ,...n ]
  [ <MIRROR TO clause> ] [ next-mirror-to ]
  [ WITH { DIFFERENTIAL | <general_WITH_options> [ ,...n ] } ]
[;]

Backing Up the Transaction Log (full and bulk-logged recovery models)
BACKUP LOG { database_name | @database_name_var }
  TO <backup_device> [ ,...n ]
  [ <MIRROR TO clause> ] [ next-mirror-to ]
  [ WITH { <general_WITH_options> | <log-specific_optionspec> } [ ,...n ] ]
[;]

Truncating the Transaction Log (breaks the log chain)
BACKUP LOG { database_name | @database_name_var }
  WITH { NO_LOG | TRUNCATE_ONLY }
[;]

<backup_device>::=
{
   { logical_device_name | @logical_device_name_var }
 | { DISK | TAPE } = { 'physical_device_name' | @physical_device_name_var }
}

<MIRROR TO clause>::=
MIRROR TO <backup_device> [ ,...n ]

<file_or_filegroup>::=
{
   FILE = { logical_file_name | @logical_file_name_var }
 | FILEGROUP = { logical_filegroup_name | @logical_filegroup_name_var }
}

<read_only_filegroup>::=
FILEGROUP = { logical_filegroup_name | @logical_filegroup_name_var }

<general_WITH_options> [ ,...n ]::=
--Backup Set Options
   COPY_ONLY
 | DESCRIPTION = { 'text' | @text_variable }
 | NAME = { backup_set_name | @backup_set_name_var }
 | PASSWORD = { password | @password_variable }
 | [ EXPIREDATE = { date | @date_var } | RETAINDAYS = { days | @days_var } ]
 | NO_LOG

--Media Set Options
   { NOINIT | INIT }
 | { NOSKIP | SKIP }
 | { NOFORMAT | FORMAT }
 | MEDIADESCRIPTION = { 'text' | @text_variable }
 | MEDIANAME = { media_name | @media_name_variable }
 | MEDIAPASSWORD = { mediapassword | @mediapassword_variable }
 | BLOCKSIZE = { blocksize | @blocksize_variable }

--Data Transfer Options
   BUFFERCOUNT = { buffercount | @buffercount_variable }
 | MAXTRANSFERSIZE = { maxtransfersize | @maxtransfersize_variable }

--Error Management Options
   { NO_CHECKSUM | CHECKSUM }
 | { STOP_ON_ERROR | CONTINUE_AFTER_ERROR }

--Compatibility Options
   RESTART

--Monitoring Options
   STATS [ = percentage ]

--Tape Options
   { REWIND | NOREWIND }
 | { UNLOAD | NOUNLOAD }

--Log-specific Options
   { NORECOVERY | STANDBY = undo_file_name }
 | NO_TRUNCATE



