Data modeling gone wrong

For the coming T-SQL Tuesday I want to share some of my experiences with a database where the data modeling went wrong, back when I was the DBA. To understand where all the bad data modeling came from, you have to know the history of the software. About 10 years ago there was a hype around saving data in XML files. XML was supposed to be a good way to store data in a structured format without the hassle of using databases or formatted text files.

Should I support this database?

As a DBA, part of the job is making sure all the databases and database servers work properly and that you have a controlled environment. Once in a while a skeleton falls out of the closet: a database server suddenly turns up and you're scratching your head wondering where it came from. Your manager (and maybe your colleagues) expect you to support it because you're the DBA, but you don't know the server, and your colleague doesn't (really) know it either.

SQL Cruise 2015 Mediterranean

I know I should have posted this a long time ago, but I have the same excuses as everybody else: work, being too busy, kids, wife, etc., etc., etc. ;) Last year I entered Simple-Talk's contest for the worst day of a DBA's life. The prize was a ticket to the SQL Cruise in the Mediterranean. That sounded good, so I started thinking about whether I had ever had a really bad day in the past.

Find columns with special characters

Problem: Imagine a database where you want to know how many times a certain column contains a special character. The reason I had to do this was that I needed to know the impact of an export interface that wasn't handling special characters properly. But as lazy as I am when it comes to repetitive tasks, I had to find a way to easily get all the columns containing special characters in a data set without a lot of hassle.
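
One way to do this, sketched below, is to generate a counting query for every character column from INFORMATION_SCHEMA.COLUMNS and let PATINDEX flag values containing anything outside an expected character set. The pattern and the list of data types are assumptions; widen or narrow them to whatever counts as "special" in your situation.

    -- Build one SELECT per character column; the pattern [^a-zA-Z0-9 ]
    -- flags every value holding a character outside letters, digits and spaces.
    DECLARE @sql NVARCHAR(MAX) = N'';

    SELECT @sql += N' UNION ALL SELECT '''
        + TABLE_SCHEMA + N'.' + TABLE_NAME + N''' AS TableName, '''
        + COLUMN_NAME + N''' AS ColumnName, COUNT(*) AS SpecialCount FROM '
        + QUOTENAME(TABLE_SCHEMA) + N'.' + QUOTENAME(TABLE_NAME)
        + N' WHERE PATINDEX(''%[^a-zA-Z0-9 ]%'', ' + QUOTENAME(COLUMN_NAME) + N') > 0'
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE DATA_TYPE IN ('char', 'varchar', 'nchar', 'nvarchar');

    -- Strip the leading ' UNION ALL ' (11 characters) and run the result
    SET @sql = STUFF(@sql, 1, 11, N'');
    EXEC sp_executesql @sql;

Running this against a database returns one row per character column with the number of values that contain an unexpected character, so the clean columns (count 0) are easy to filter out.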

Get ASCII characters from string

I had a situation where I got an error converting a value to a decimal; the value contained a valid number plus some white space. During the conversion I used the LTRIM and RTRIM functions to remove any white space at the beginning and end of the value. Unfortunately, these functions didn't remove the characters from the string. This led me to create a little script that returns the individual characters of a string together with their character codes.
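
A minimal sketch of such a script, using a made-up sample value: it walks the string position by position and returns each character with its code, which makes invisible characters like CHAR(9) (tab) or CHAR(160) (non-breaking space, which LTRIM/RTRIM do not strip) stand out immediately.

    -- List every character of a string with its character code.
    -- The sample value is made up; in practice you'd take the value
    -- from the column that fails to convert.
    DECLARE @value NVARCHAR(100) = NCHAR(160) + N'123.45' + NCHAR(9);
    DECLARE @pos INT = 1;
    DECLARE @result TABLE (Position INT, Character NCHAR(1), CharCode INT);

    WHILE @pos <= LEN(@value)
    BEGIN
        INSERT INTO @result (Position, Character, CharCode)
        VALUES (@pos, SUBSTRING(@value, @pos, 1),
                UNICODE(SUBSTRING(@value, @pos, 1))); -- UNICODE() covers characters ASCII() can't map

        SET @pos += 1;
    END;

    SELECT Position, Character, CharCode FROM @result;

For the sample value this shows code 160 in front of the number, which explains why LTRIM and RTRIM left it alone: they only remove the regular space, CHAR(32).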

Split large data file into smaller data files

Problem: The business uses a database that is 350 GB in size and has only one data file and one transaction log file. The previous DBA didn't increase the number of data files, which led to one large data file. Due to the size of that file I ran into performance trouble, and restoring the database wasn't a walk in the park either. So how can you split a large data file into smaller data files?
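
One approach, sketched below with a placeholder database name, file names, paths, and sizes: add extra data files to the filegroup, push the pages out of the original file with DBCC SHRINKFILE and the EMPTYFILE option, and finally shrink the original file down to its new target size.

    -- Add extra data files to the filegroup (names, paths and sizes
    -- are placeholders for illustration).
    ALTER DATABASE BigDb
    ADD FILE
        (NAME = N'BigDb_Data2', FILENAME = N'D:\Data\BigDb_Data2.ndf', SIZE = 90GB),
        (NAME = N'BigDb_Data3', FILENAME = N'D:\Data\BigDb_Data3.ndf', SIZE = 90GB),
        (NAME = N'BigDb_Data4', FILENAME = N'D:\Data\BigDb_Data4.ndf', SIZE = 90GB);
    GO

    USE BigDb;
    GO

    -- EMPTYFILE moves the pages into the other files of the same
    -- filegroup. On the primary data file it eventually stops with an
    -- error because the system pages cannot be moved; by then the user
    -- data has been redistributed.
    DBCC SHRINKFILE (N'BigDb_Data', EMPTYFILE);
    GO

    -- Reclaim the freed space in the original file (target size in MB).
    DBCC SHRINKFILE (N'BigDb_Data', 92160);
    GO

Shrinking moves pages around and can leave the indexes heavily fragmented, so rebuilding the affected indexes afterwards is advisable.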