General discussion of SQL topics, aimed at the novice-to-intermediate Microsoft SQL Server user. Currently focuses on SQL Server 2005.

Friday, December 12, 2008

Books Online could be better...and we, the users, can already make it better!

Yesterday, Kalen Delaney wrote a short article discussing whether Microsoft should build more warning messages into SQL Server, or otherwise do a better job of warning users about the potential problems of performing certain tasks/commands. You can read her article at: http://www.sqlmag.com/Articles/ArticleID/101033/101033.html

I agree that, to a certain extent, the DBA is responsible for the database and should know how to use the software thoroughly...so ideally the database would never be at an unknown risk of data loss. Unfortunately, SQL Server has jumped from the realm of an application to the realm of a data platform! This changes things quite a bit; there is no realistic expectation that a single DBA (or even a small team of DBAs) can possibly know the database software and all of its commands and internal workings 100%. So Microsoft must provide some measure of relief...whether that relief should come as messages, errors, or warnings, I won't speculate on a specific method. I do agree that Books Online (BOL) is considered the SQL Server documentation, and it is Microsoft's responsibility to ensure that the documentation provides all of the information (including inner workings where necessary) about commands and features that have potentially unsafe effects.

The question is: where does this responsibility end? And where does it fall onto the DBA's (or end user's) shoulders to piece together all of this information and understand how they are utilizing the data platform, and what potential for data loss that usage creates?

The first thing that comes to my mind is that Microsoft has already taken a step by adding a way for users of BOL to contribute content that can point this type of thing out. Why aren't we DBAs doing this? I hope that after reading this, our community will take a moment or two to place a useful comment in BOL (especially warnings about potential data loss) for other, less knowledgeable DBAs.

Until next time, Happy Coding!

Thursday, October 23, 2008

TSQL Code: Compare Tables of Databases on Separate Servers

NOTE: This script was originally developed to compare table row counts; but, with the modifications mentioned below, it can also be helpful if you have a database that is shipped to another server and often needs to be ETL'd into another database, and you are worried there may be records that aren't getting transferred properly (or at all).

The other day I came across a question on how to compare the tables in one database to the tables in another database; this was in regard to a migration project. This person had a database on an older server running SQL Server 2000, and they chose to migrate it to a new server running SQL Server 2005. They wanted a way to visually display a comparison of the two databases, table by table, to prove the conversion was successful. Results meeting these requirements are shown in Figure 1.

AW_Compare_All 
Figure 1 (results of running sp_CompareTables_All)

In response, I started developing a TSQL script that creates a stored procedure you can run from the current database, specifying the location of the original database. The script collects the tables and counts the records within each table for both databases. It displays the table names on the outside (left and right) of the results, with the record counts next to each other on the inside (next to their table names); this allows for a very simple and pleasing visual comparison. I wanted to avoid (and did avoid) cursors and other techniques that can bog down system resources (such as the stored procedures 'sp_MSForEachDB' and 'sp_MSForEachTable').

Now, this is a rough draft that I threw together and tested over a lunch period; so, there are some issues that can still be cleaned up in the script, and it lacks the ability to detect when a table exists in one database but not the other.

At the end of the script I'll also provide a quick comment that will allow you to change the results from displaying all tables and their row counts to displaying only the tables with mismatched row counts (which may be useful if you want to use this script to troubleshoot databases that tend to not migrate all records).

The first problem faced is how to best access the server with the original database; while there are many options, I chose to use the built-in stored procedure "sp_addlinkedserver". This procedure is simple to implement and allows for seamless integration into TSQL code. Testing for this script was performed on the AdventureWorks database (developed for SQL 2005) on both test systems. The 'originating' database was on a SQL 2005 instance (although testing was briefly performed on SQL 2000 and SQL 2008 to validate compatibility). The 'originating' server is called TestServer35, the database is on an instance called Dev05, and the database on both instances is called AdventureWorks2005. This information becomes important when using the sp_addlinkedserver command. I used the following TSQL code:

EXEC sp_addlinkedserver
   @server = 'TestServer35-5',
   @srvproduct = '',
   @catalog = 'AdventureWorks2005',
   @provider = 'SQLNCLI',
   @datasrc = 'Server35\Dev05';

As you can see, the linked server is referenced as TestServer35-5. We will use this reference in a four-part identifier (Server.Catalog.Schema.Table). The next obstacle is to obtain a listing of tables and their row counts. I used a script I modified last year to perform this, since it runs on both SQL 2005 and SQL 2000 (you can view my script in SQLServerCentral.com's Script section at: http://www.sqlservercentral.com/scripts/Administration/61766/). I then store the results in a temporary table; I do the same for the new database (on the local server where this stored procedure runs).
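Before building the stored procedure, it's worth sanity-checking the linked server with a quick four-part query. This is just an illustrative test; Person.Contact is simply an example table from AdventureWorks:

--Quick sanity check of the linked server using a four-part identifier
SELECT TOP 5 *
FROM [TestServer35-5].[AdventureWorks2005].[Person].[Contact];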

Then comes the simple part of joining the two temp tables into a final temp table. I chose this route because I wanted the two databases in separate temp tables in case I want to work with that data later; in fact, I'll be working with that data in my update to determine whether a table is missing from one of the databases.

Here is the TSQL code I used (remember, if you want to use this you will need to change the linked server information to the correct information, as well as create this stored procedure in the appropriate database):

--Change the database name to the appropriate database
USE [AdventureWorks2005];
GO

CREATE PROCEDURE sp_CompareTables_All
AS

SET NOCOUNT ON;

--Collect table names and row counts from the new (local) database
--(sysname avoids truncating table names longer than 50 characters)
CREATE TABLE #tblNew
( tblName sysname, CountRows int );

INSERT INTO #tblNew
( tblName, CountRows )
SELECT o.name, i.rowcnt
FROM sysobjects o
JOIN sysindexes i
ON i.id = o.id
WHERE i.indid IN (0,1)
AND o.xtype = 'U'
AND o.name <> 'sysdiagrams';

--Collect table names and row counts from the old database (via the linked server)
CREATE TABLE #tblOld
( tblName sysname, CountRows int );

INSERT INTO #tblOld
( tblName, CountRows )
SELECT lo.name, li.rowcnt
--********
--Replace TestServer35-5 and AdventureWorks2005 below with your appropriate values
--********
FROM [TestServer35-5].[AdventureWorks2005].[dbo].[sysobjects] lo
JOIN [TestServer35-5].[AdventureWorks2005].[dbo].[sysindexes] li
ON li.id = lo.id
WHERE li.indid IN (0,1)
AND lo.xtype = 'U'
AND lo.name <> 'sysdiagrams';

--Join the two result sets side by side; note that because this is an inner
--join on matching row counts, tables with mismatched counts (or missing from
--one side) won't appear; see the _Diff variant later in this post
CREATE TABLE #tblDiff
( OldTable sysname, OldRowCount int, NewRowCount int, NewTableName sysname );

INSERT INTO #tblDiff
( OldTable, OldRowCount, NewRowCount, NewTableName )
SELECT ol.tblName, ol.CountRows, nw.CountRows, nw.tblName
FROM #tblNew nw
JOIN #tblOld ol
ON (ol.tblName = nw.tblName AND ol.CountRows = nw.CountRows);

SELECT * FROM #tblDiff ORDER BY OldTable;

DROP TABLE #tblNew;
DROP TABLE #tblOld;
DROP TABLE #tblDiff;

 

You simply execute the code with the following TSQL:

EXECUTE sp_CompareTables_All
 

The results of this script are shown in Figure 1 (above).

Now, this is great if you want a list you can go through yourself to verify that each table's row counts match. But what if that database has 1,000 or more tables? What if you are, simply put, lazy? Why not utilize SQL Server to process this information for you?

Well, I did just that. With a very small modification to this script, you can easily have it display only the tables whose record counts don't match.

All you have to do is change the "ON" clause of the INSERT INTO #tblDiff block to join where the CountRows are NOT equal. The following is the modified line of code; the rest of the stored procedure remains the same:

ON (ol.tblName = nw.tblName AND ol.CountRows <> nw.CountRows)
 

I also renamed the stored procedure from "sp_CompareTables_All" to "sp_CompareTables_Diff"; this is optional, but it helps clarify which stored procedure is being used.

To get some results, I made a few modifications to the AdventureWorks2005 database: I added a couple of rows to one table and removed some rows from two others. The results of the stored procedure showing only the differing tables are shown in Figure 2.

AW_Compare_Diff
Figure 2 (results of running sp_CompareTables_Diff)

As you can see, changing this script to show all tables or only differing tables is very simple. Even setting up the script is simple; the hardest part of the whole thing is adding a linked server (which is fairly simple too).

In a future post I'll revisit this script and add the ability to display tables that exist in one database but not the other. Be sure to check back for that update.
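In the meantime, if you need that missing-table check right away, here's the rough idea of where I'm headed: once #tblNew and #tblOld are populated, a FULL OUTER JOIN surfaces tables that exist on only one side. This is only a quick sketch of the concept, not the finished update:

--Sketch: list tables that exist in only one of the two databases
--(assumes #tblNew and #tblOld are populated as in the stored proc above)
SELECT ol.tblName AS OldTable, nw.tblName AS NewTable
FROM #tblOld ol
FULL OUTER JOIN #tblNew nw
ON ol.tblName = nw.tblName
WHERE ol.tblName IS NULL --table exists only in the new database
OR nw.tblName IS NULL; --table exists only in the old database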

Until next time...Happy Coding!!

Friday, October 17, 2008

Searching text of programmable objects...

This is a follow-up posting to my article "Finding and Listing ALL Procedures, Functions, and Views in SQL Server 2005" I wrote for SQLServerCentral.com (http://www.sqlservercentral.com/articles/T-SQL/63471/). This will work with both SQL Server 2005 and SQL Server 2008.

I've found a few instances where I would've liked to search through a Stored Procedure or a View for specific text in its definition; specifically, to check whether it references another Stored Procedure or View. Originally, I solved this by querying the syscomments legacy view for the search string, joining with sysobjects. While this was effective, it has its limitations...for one, it only searches objects created after SQL Server itself was released (i.e., user-created objects, not the system objects shipped with the product). So what happens if you want to search for a specific string in ALL of the programmable objects? Well, then we fall back on our trusty view sys.all_objects! We can join it with sys.all_sql_modules and get almost all of what we need...almost.
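For reference, the legacy approach I mention looked roughly like this (a sketch of the old technique, shown for comparison only; note that syscomments also splits definitions longer than 4,000 characters across multiple rows, so a match straddling a row boundary can be missed):

--Legacy approach: search object text via syscomments (SQL 2000 style)
SELECT o.name, c.text
FROM sysobjects o
JOIN syscomments c
ON o.id = c.id
WHERE c.text LIKE '%ENTER_SEARCH_TEXT_HERE%';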

Now, we can do a clever query that will join sys.all_objects and sys.all_sql_modules. Something like:

DECLARE @SearchString NVARCHAR(MAX)
SET @SearchString = 'ENTER_SEARCH_TEXT_HERE'
SELECT [ao].[name], [asm].[definition]
FROM sys.all_objects ao
JOIN sys.all_sql_modules asm
ON [ao].[object_id] = [asm].[object_id]
WHERE [asm].[definition] LIKE '%' + @SearchString + '%'
--Optional filters to limit search results....
--AND [ao].[is_ms_shipped] = 0 --Only search user & post-SQL-release objects
--(change the value to 1 to search only objects shipped by Microsoft with SQL Server)
--AND [ao].[type] = 'P' --Search only Stored Procedures
--AND [ao].[type] <> 'V' --Exclude Views
--See referenced article for listing of additional object types

This would do the job for the most part; the only missing items would be constraints and rules...and how many times do you really need to search a constraint for a specific string of text?
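That said, if you ever do hit that rare case, the catalog views expose constraint text directly; something along these lines should cover check and default constraints (a quick sketch, untested beyond the basics):

--Rare case: search check constraint and default definitions directly
SELECT [name], [definition]
FROM sys.check_constraints
WHERE [definition] LIKE '%ENTER_SEARCH_TEXT_HERE%'
UNION ALL
SELECT [name], [definition]
FROM sys.default_constraints
WHERE [definition] LIKE '%ENTER_SEARCH_TEXT_HERE%';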

Now, the execution plan shows that this can be a bit of a resource hog. Here's the estimated execution plan I get for this:

Plan_For_Defintion_Search_by_Joins

This Execution Plan is OK; there are a lot of Clustered Index Scans and Compute Scalars. You might be able to improve upon this with some ingenious indexing, hints, and such...but, again...why bother? You'd have to maintain and monitor yet another query plan. So, I'm not too keen on this plan myself, and I don't like that this query still doesn't do everything...I want everything or nothing (cue my pouting and tantrum).

As I said, I want it all. The problem for me here is that there is that 1-in-1,000,000 chance I'll need to search a constraint, and that means I'll have to either modify my code or discover a new method...why do it again later if we can just make it do what we want now? Plus, and this may be more important to me, I want something that is easier to read and understand at a later time and uses fewer resources. Time and resources are the name of the game we play as DBAs!

How do we solve this? Can it be done? It sure can! Enter the sacred "OBJECT_DEFINITION" function! This little function helps us out a lot by eliminating the need to manually join tables, and what it returns is exactly the column (the definition column) that we want! Let's take a look at the same search using this function instead of joins:

DECLARE @SearchString NVARCHAR(MAX)
SET @SearchString = 'ENTER_SEARCH_TEXT_HERE'
SELECT [name], OBJECT_DEFINITION([object_id])
FROM sys.all_objects
WHERE OBJECT_DEFINITION([object_id]) LIKE '%' + @SearchString + '%'
--Using the [object_id] column directly avoids the name-resolution problems
--OBJECT_ID([name]) can have with objects outside your default schema
--Optional filters to limit search results....
--AND [is_ms_shipped] = 0 --Only search user & post-SQL-release objects
--(change the value to 1 to search only objects shipped by Microsoft with SQL Server)
--AND [type] = 'P' --Search only Stored Procedures
--AND [type] <> 'V' --Exclude Views
--See referenced article for listing of additional object types

As you can see, this is a lot cleaner to read. We can easily tell what is being queried and how the results should be returned to us. Not much to look at here; we query the sys.all_objects view to find the proper object and then tell SQL Server that we want to view the object's definition...simple enough.
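As a quick usage example, you can also point OBJECT_DEFINITION at a single known object to spot-check its text (uspGetBillOfMaterials is just an example procedure from AdventureWorks):

--Spot-check one object's definition
SELECT OBJECT_DEFINITION(OBJECT_ID('dbo.uspGetBillOfMaterials')) AS [definition];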

Now, let's look at the execution plan for this query. We know the query is straightforward; I'm guessing the execution plan will be also! Let's see:

Plan_For_Defintion_Search_by_Object_Definition

Well, it definitely is a smaller execution plan. Almost half the plan of the original query! Not bad at all. I'd venture a guess that this will be much less resource-intensive on our machine, and that means less time! I like less time!

Now we can see that using SQL Server's "OBJECT_DEFINITION" function improves our ability to search definitions. The lesson: when possible, use a SQL built-in function; they are here to help us. As they say, 'why reinvent the wheel?' I'd say this function is already optimized to run inside SQL Server, and it's a fairly safe bet that we DBAs wouldn't be able to optimize our JOIN statement to the same level...and if you do happen to come close, I'd be willing to wager you spent more time than it was worth getting there. Which brings me back to my original question...why bother?

Keep looking for those built-in functions; you'll find there are many of them inside SQL Server that will not only make your DBA life easier and quicker, but also more fun!

Until next time, Happy Coding!!!

Friday, August 22, 2008

VS/SQL 2008 Developers need more space!

I'm not just talking about needing a larger office (although that couldn't hurt). I'm talking about hard drive space!

With the release of SQL Server 2008; I’ve officially moved over my trusty development machine to the latest and greatest MS technology, Visual Studio 2008 Professional Edition and SQL Server 2008 Developer Edition.

Before you do the same, make sure you have plenty of hard drive space[1]. If you already have Visual Studio 2008 installed and then try to install SQL Server 2008, you'll get a friendly little message stating that you MUST have VS 2008 Service Pack 1 installed. I'm not quite sure why you MUST have VS 2008 SP1 installed to run SQL Server 2008 and VS 2008 on the same machine; if I do find a reason (beyond guesses about enabling some feature or letting the two applications see each other), I'll post an update.

Here is a simple breakdown of the hard drive space required for each product I installed:

Product: Visual Studio 2008 (Professional Edition)
Hard Drive Space Required: 2.2 GB
Requirement Specifications: http://msdn.microsoft.com/en-us/vs2008/products/bb894726.aspx

Product: Visual Studio 2008 Service Pack 1
Hard Drive Space Required: 3.8 GB (plus 600 MB on the hard drive where VS 2008 is installed)
Requirement Specifications: http://download.microsoft.com/download/A/2/8/A2807F78-C861-4B66-9B31-9205C3F22252/VS2008SP1Readme.htm

Product: SQL Server 2008 (Developer Edition)
Hard Drive Space Required: 2.0 GB (required even if installing components to other hard drives)
Requirement Specifications: http://msdn.microsoft.com/en-us/library/ms143506.aspx

As you can see, VS 2008 SP1 is no small update. Also note that these hard drive space requirements are based on installing all features/options of each application; the final installed footprint will typically be smaller than the requirement specifications, since the requirements usually include space for temporary files used during installation.

This comes to a total of around 8 GB of space during installation, and around 6.5 GB upon completion. This, of course, does not include any project files, database files, or add-ons.

What if hard drive space is at a premium and you just want to test out these new applications? How do you use less space? The simplest way is to use the Express editions of each product and only install the languages/features you absolutely need or want to use/learn. Installing a language such as C++ for a developer who only knows and uses VB would seem pointless, unless that developer intends to learn and develop with C++.

Is there a way to bypass the requirement to install VS 2008 SP1? Well, I haven't tried it myself; but, conceptually, if you install SQL 2008 first and then install VS 2008, you won't get any messages during either product's installation requiring SP1. Now, that doesn't mean that VS and SQL will play nicely with each other. It also doesn't mean you won't get some message or error later stating that you need to install SP1. As mentioned earlier, I don't know why SQL 2008 installation requires VS 2008 SP1 on a machine that has VS 2008; I'd imagine there is a good reason, though. Try at your own risk.

Until next time, Happy Coding!


[1] The installation hard drive size requirement is based on my personal experience and may vary per installation.

Tuesday, August 12, 2008

SQL Server 2008 Officially RTM!

On August 11, 2008, Microsoft announced through their MSDN Flash newsletter that SQL Server 2008 has been RTM'd (Released To Manufacturing). This has been a long-awaited release for many of us in the SQL community! Congratulations to Buck Woody and the many, many others on the SQL development team on this release!

While August 11, 2008 is probably the official RTM date, those of you with an MSDN subscription may have been enjoying the RTM a week earlier (August 6, 2008, through the MSDN Subscriptions website), as announced on Buck Woody's blog (Carpe Datum). You can view the post at: http://blogs.msdn.com/buckwoody/archive/2008/08/06/news-flass-sql-server-2008-is-here.aspx.

Undoubtedly, you can find many blogs and posts on the new features of SQL Server 2008. It looks to be a great improvement over SQL Server 2005 in just about every area: security, development, and administration, to name a few. There seems to be a little something for everyone to get excited about, ranging from Business Intelligence for enterprise users, to IntelliSense in TSQL syntax for the programming DBA, to the Resource Governor for IT administrators.

I look forward to using SQL Server 2008, and to enjoying a more easily administered database with tons of new features, flexibility, and control!

Until next time...Happy Coding!

Wednesday, July 2, 2008

Publication: Introduction to the Transaction Log...

Today SQLServerCentral.com (SSC) published my article introducing the Transaction Log. The primary purpose of this article is to help newer DBAs understand how the transaction log works, and to clarify some commonly misunderstood parts of maintaining it.

The article is a broad, general introduction; it covers how SQL Server fills the log, easy-to-follow analogies to help grasp how transaction logs work, and some good practices to use when transaction logs are in use by SQL Server.

You can read the entire article at: http://www.sqlservercentral.com/articles/Design+and+Theory/63350/

Please feel free to join the discussion for this article; there are some great posts that further clarify the topic and provide additional information that was not included in the article. You can join the discussion at: http://www.sqlservercentral.com/Forums/FindPost527448.aspx.

Tuesday, July 1, 2008

Concept: Finding the Right Order...

The other day I was helping out a friend of mine with a space issue on their server, and one of the topics I discussed was the importance of finding the best order in which to perform operations so as to use the least amount of extra hard drive space. While discussing this, it occurred to me that while plugging away with SQL we often get so intent on getting the job done (often in the first way that works, which is not necessarily the best way) that we take for granted all of that hard drive space and server power available to us!

With just a little forethought, planning, and understanding of what we are trying to accomplish, we can use less disk space...and we could also potentially improve the performance we get out of SQL Server! I don't know about you, but getting better performance always ranks high on my list of desired accomplishments. I think this is best demonstrated with an example....

For simplicity's sake, I will use a database with a single table in which each record inserted equals 1 MB of hard drive space. Thus, 1,000,000 records in our "simple database" results in a database size of 1,000,000 MB. Now, let's say that on average this database grows by about 100,000 records per year; with 1,000,000 records, this database has been in use for about 10 years. Assume there is a decent DR plan in place and the transaction log is kept at a fairly decent size, maybe 1,000 MB, to accommodate a month's worth of data between FULL backup operations. Simple enough, right?

So, 10 years have passed and the boss walks in and says they've just merged with another company that has an identically structured database with the same stats, only different data, and the two databases will be merged (since this is hypothetical, we'll pretend the merge can be done without any issues and will simply double our database size). Also, the boss says they are archiving 5 years' worth of data from both databases to match the business rules for on-site hard copy storage and archiving. And lastly, the boss says "...because you don't look busy..." they only want enough space in the database to allow for 2 years of data to be added, and they need the extra hard drive space available for other needs.

At first glance you might say: not a problem. We'll just merge the two databases, take out 5 years' worth of data, perform a FULL backup, shrink the transaction log (leaving enough free space for a month's worth of data), and then shrink the database (leaving enough room for 2 years' worth of data to be added later). I'm going to ignore the transaction log size in this analysis, because the resulting log size will typically be the same regardless of the steps taken; it will differ based on the database recovery model and the method used to perform these steps, and analyzing each possibility is beyond the scope and purpose of this blog. Now, let's look at how this might look on our system after each step...

The first step in the above scenario would be to merge the two databases (remember, since the databases come from different companies, they are physically stored on separate hard drives).

HD Space Used: 2,000,000 MB
SQL Record Count: 2,000,000 records
SQL Space Used: 2,000,000 MB

The second step would be to take out 5 years' worth of data.

HD Space Used: 2,000,000 MB
SQL Record Count: 1,000,000 records
SQL Space Used: 1,000,000 MB

The third step would be to perform a FULL backup (this is based on a FULL recovery model).

HD Space Used: 2,000,000 MB
SQL Record Count: 1,000,000 records
SQL Space Used: 1,000,000 MB

The fourth and fifth steps would be to shrink the log and database.

HD Space Used: 1,200,000 MB
SQL Record Count: 1,000,000 records
SQL Space Used: 1,200,000 MB

 

From the above we can see that the maximum HD space used would be 2,000,000 MB, which, depending on our available HD space, may not be much of a problem. The real issue is the amount of resources needed; since this is hypothetical, we can only surmise from the step-by-step that the resources used would be fairly moderate. There are too many variables to be accurate when it comes to actual performance measuring, so I won't even attempt to measure it.

Now, what if we put a little thought toward these requirements from the "boss", especially if the HD space available to us is only 1,800,000 MB? Now we have a problem that can easily be resolved just by finding the right order. In this case, simply moving Step #1 to the very end and performing an extra (final) FULL backup and a (final) transaction log shrink allows us to remain under 1,800,000 MB, and it can also save our server a lot of processing time, because it will have to process fewer records. We can also utilize the processing power of the secondary server (the server holding the data from the company being merged in) to do its own archiving, cutting the total time required and decreasing the merge time because of the lower number of records being merged. Here is how the steps would look (a rough T-SQL sketch of the backup-and-shrink commands follows the steps)....

Step # 1: Remove data from each database at its original location

HD Space Used (per Server): 1,000,000 MB
SQL Record Count (per Server): 500,000 records
SQL Space Used (per Server): 1,000,000 MB

Step # 2: Perform FULL backups for each database

HD Space Used (per Server): 1,000,000 MB
SQL Record Count (per Server): 500,000 records
SQL Space Used (per Server): 1,000,000 MB

Step # 3: Shrink the transaction log and databases

HD Space Used (per Server): 500,000 MB
SQL Record Count (per Server): 500,000 records
SQL Space Used (per Server): 500,000 MB

Step # 4: Merge the two databases

HD Space Used: 1,000,000 MB
SQL Record Count: 1,000,000 records
SQL Space Used: 1,000,000 MB

Step # 5: Perform FULL backups for database and shrink transaction log

HD Space Used: 1,200,000 MB
SQL Record Count: 1,000,000 records
SQL Space Used: 1,200,000 MB
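For reference, the backup and shrink operations in Steps #2, #3, and #5 would look something like the following. This is only a rough sketch; the database, file, and path names are made up for illustration, and the target sizes (in MB) would need to match your own retention requirements:

--Step #2 (sketch): FULL backup of the database (name and path are illustrative)
BACKUP DATABASE [SimpleDB]
TO DISK = 'D:\Backups\SimpleDB_Full.bak'
WITH INIT;

--Step #3 (sketch): shrink the log and data files on each server
USE [SimpleDB];
DBCC SHRINKFILE ('SimpleDB_Log', 1000);    --keep about a month's worth of log (MB)
DBCC SHRINKFILE ('SimpleDB_Data', 500000); --500,000 remaining records x 1 MB each

--Step #5 (sketch): after the merge, back up again and shrink,
--leaving room for 2 years of growth (1,000,000 MB of data + 200,000 MB free)
BACKUP DATABASE [SimpleDB]
TO DISK = 'D:\Backups\SimpleDB_Full_PostMerge.bak'
WITH INIT;
DBCC SHRINKFILE ('SimpleDB_Data', 1200000);
DBCC SHRINKFILE ('SimpleDB_Log', 1000);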

 

Conclusion:

Now, you can easily see in the above scenario that we use a maximum of only 1,200,000 MB of space, and that is because we are reserving that free space. Logically, by performing the data archiving before merging the two databases, we save our server processing power, because it processes ONLY the data that will remain in the database. Sometimes the order of operations will already be pre-determined by outside factors, and sometimes certain tasks must be performed in a particular sequence; but when the choice is yours, it pays to think it through.

I hope this illustrates that it's not always good enough to just find a way to make things work; with a little planning in the beginning, you can save yourself time, lower the requirements for the tasks at hand, and improve the performance of your SQL Server!