CI your database with Redgate tools

How we got here

We all know there are all kinds of people with all kinds of interests, and that inadvertently makes for all kinds of developers. Some like working with views, some with business algorithms and some with databases directly. Per the good old “hammer and nail” argument, in that last case the database becomes more than just a data store: it also holds a lot of business logic, and we generally end up with a complicated schema. Cue my reaction:

At least we can all agree that everything that we deploy needs to be source controlled, right? Ok, let’s do this.

We had a manual and rigorous process for synchronising everything between our dev, stage and live environments. Even though it was meticulous, small differences crept into the different environments over time, and they did pile up. You could not trust your local or dev database schema when investigating a problem in stage or live; you had to connect to those environments directly and debug there (eek!). The process for getting a change to live also went through multiple approvals to make sure the copying was done properly, which stretched the lead times of multiple teams to uncomfortable lengths. Not very agile, as you might have noticed. It was obviously time for a change: the database would be source controlled and managed like the rest of our code. That is, when you want to make a change you pull the latest version from the repo, make your changes, put a PR in, and after that one and only approval the build pipeline propagates the changes through all the environments (dev → stage → live).

We decided to use the Redgate database source control solution. We also purchased the SQL Compare and SQL Compare SDK products.

Adding everything to the repo

We assumed that the “truest” version of the database is the one in live. We took snapshots of it using SQL Compare and applied them to an empty local DB instance. From there we used SQL Source Control to commit everything to the repo.

Deploying everything

We developed a custom tool that used the SQL Compare SDK and effectively synchronised everything one way, just as if we had opened the SQL Compare app, selected everything and hit “deploy”.

Or as if we had used the SQL Source Control plugin in SQL Server Management Studio and done a “Get Latest” for the database.

We had to build in some preconfigured filters to exclude certain security-related objects from these migrations. The extra convenience this tool provides is that it reads all of our databases from a file and processes every one of them. That helps with local migrations (no need to apply the latest changes to each database manually) and with the automatic migrations in our deployment pipeline. It is worth noting that the tool does a differential comparison, so it deploys only the objects that are out of sync, significantly reducing deploy times after the first run. As an added bonus it can also spit out all the SQL that it runs, so we can do dry runs without committing anything and preview any major changes to an environment if needed.
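The driving loop of such a tool can be sketched roughly like this. This is a minimal Python sketch rather than the SQL Compare SDK's .NET API; `generate_sync_sql` and `execute_sql` are hypothetical stand-ins for the SDK comparison and the script execution step:

```python
from typing import Callable, Dict, List

def run_migrations(database_names: List[str],
                   generate_sync_sql: Callable[[str], str],
                   execute_sql: Callable[[str, str], None],
                   dry_run: bool = False) -> Dict[str, str]:
    """Compare every database on the list and deploy only the ones that are
    out of sync. In dry-run mode the generated SQL is collected but never
    executed, so major changes can be previewed first."""
    results = {}
    for name in database_names:
        sql = generate_sync_sql(name)  # differential: empty when already in sync
        if not sql:
            results[name] = "in sync"
        elif dry_run:
            results[name] = "dry run:\n" + sql
        else:
            execute_sql(name, sql)
            results[name] = "deployed"
    return results
```

The list of database names would come from the configuration file mentioned above, and the same loop serves both local migrations and the pipeline runs.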

Automating the deployment

We use TeamCity for our CI, so we created a dedicated project. The first step is triggered by any commit to the master branch of the repo. It pulls the latest schema (SQL Source Control saves everything in *.sql files, one per object, so one for every table, user, view, role etc.) and the latest build of the custom tool. It then runs the tool, giving it all the files and the database list, applying any needed changes to whichever SQL Server we pass as a parameter. We have replicated this step for each of our environments, passing in a different connection string every time.

Putting in environment specific changes

SQL Source Control also has an option for static data. If a table holds data that does not change over time in volume or content (like configuration data), you can add that data to the repo too, along with the schema that supports it. Getting latest from SQL Source Control, or running our tool, will apply the data as migrations to any database missing it and will delete any rows that are not included in the repo.

That is a problem if you have multiple environments, though. Some static data might need to differ between them because it is environment specific. That is where we hit a brick wall for a bit, as there are not many solutions for environment-specific static data using the Redgate tools. There are a few articles out there about deploying slightly different schemas using custom SQL Compare filters and the SQL Compare command line app, but nothing about static data specifically.

It took a while, but the eureka moment finally came. It all comes down to a very simple realisation: all the Redgate tools work with standard *.sql files! That means that if we messed with those files before our custom tool took over, we could manipulate the static data it would use. It was as simple as this: we created an extra folder that holds the custom static data file for each environment and committed it to the repo along with the original data. Since the tool gets called from a PowerShell script in the same repo (remember your proper CI rules, people: source control all the things!), we amended that script to overwrite the static data files with the custom ones according to the environment name we passed in, and voila! The correct data ends up in each environment :)
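The overwrite step itself is only a few lines. Here is a minimal sketch of the idea in Python (our real implementation is a PowerShell script; the folder names `Data` and `CustomData` are made up for illustration):

```python
import shutil
from pathlib import Path

def apply_environment_overrides(repo_dir: str, environment: str) -> list:
    """Before the deployment tool reads the static data *.sql files, overwrite
    each default file with its environment-specific variant, if one exists.
    Returns the names of the files that were overridden."""
    repo = Path(repo_dir)
    override_dir = repo / "CustomData" / environment
    overridden = []
    if not override_dir.is_dir():
        return overridden  # no custom data for this environment
    for custom_file in sorted(override_dir.glob("*.sql")):
        target = repo / "Data" / custom_file.name
        shutil.copyfile(custom_file, target)
        overridden.append(custom_file.name)
    return overridden
```

Because the swap happens before the comparison tool ever reads the files, the tool needs no knowledge of environments at all; it just deploys whatever static data it finds.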

Holding back changes

Another situation that often comes up in continuous integration is when we need some new features or changes to be propagated not to all environments but only to some (say local and dev) in order to test them. The CI pipeline will push the same changes everywhere, though. In run-of-the-mill server/client code we solve this problem with feature toggles that we can enable conditionally per environment: the code is there on dev, stage and live, but it is not enabled, so even if it is incomplete there is no harm done. In our case we decided that implementing feature toggles for the database schema might end up being too complicated, partly because SQL is not strongly typed and partly because we would have to write some pretty complicated SQL migration scripts to keep the database consistent at all times. Now that the database is source controlled, we needed to think of the developers who will have to deal with it in the future; it is not just the problem of our brilliant DBAs any more!

We acknowledged that any changes we push to the databases will always move forward through our environments and never backwards, i.e. we will never need a feature to be active on stage but not on dev. So it would make sense (and make everyone’s lives easier) if we could just hold back changes per environment. And that was extremely easy: just apply a filter like the ones in SQL Compare, but driven by a simple .txt file. Our custom tool builds one in memory while running and then applies it. So each database folder in the repository has an “Exclude.txt” file with one object name per line, followed by the environment names to exclude it from, like so: [dbo].[TableName] stage live. That table will be deployed to local and dev when the tool runs, but not to stage and live until we remove the environment names from the exclude file and commit it. Job done!
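Parsing that exclude file is trivial. A minimal sketch in Python for illustration (the real tool turns this into a SQL Compare filter in memory):

```python
def load_exclusions(exclude_file_text: str, environment: str) -> set:
    """Parse an Exclude.txt whose lines look like:

        [dbo].[TableName] stage live

    i.e. an object name followed by the environments it should be held back
    from. Returns the set of object names to exclude for the given environment."""
    excluded = set()
    for line in exclude_file_text.splitlines():
        parts = line.split()
        if len(parts) < 2:
            continue  # blank line, or an object with no environments listed
        obj, envs = parts[0], parts[1:]
        if environment in envs:
            excluded.add(obj)
    return excluded
```

Releasing a held-back change is then just a one-line commit: delete the environment names from that object's line and the next pipeline run deploys it everywhere.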

Last thoughts and where to go from here

This setup has been working pretty well for us. We have a few databases to go through in each run of the CI pipeline, but it is pretty efficient: it takes seconds to pull the latest changes from the repo and about three minutes to compare the databases and apply the changes to each environment. (We have a long list of databases to check and potentially update, so even when there are no changes it still takes a few seconds per database.)

With our old procedure, a database change meant opening an infrastructure/DB ticket, a few approvals and, at minimum, a few hours (at most a few days) to get the change to live, assuming there was no feedback requiring rework. It now takes about 12 minutes after the code is committed. It has changed our process for the better, and our attitude towards database management at all levels of the company.

We are not done yet, though. One of the major milestones ahead is educating all developers to take more responsibility for the changes they make, because the whole schema is now in their hands; their little fingers are not being held by the DBAs any more, who instead just help with and mentor good database design.

The Redgate suite also has a tool for running automated tests against a database, and our pipeline currently misses that. All the testing we do is through the end-to-end tests of the website. One could argue that these are enough, since that is where all the user value is, but as a TDD fanatic I cannot help thinking that I could apply its benefits to database design as well. I have not tried it! Just dreaming about it :) it might turn out to be a pipe dream.

The last point I will make is about the custom migrations utility in SQL Source Control. It gives you the ability to commit custom migration scripts that will run alongside your commits. They are tricky, though: you do not have control over exactly when they will run, and if a schema does not match what the script expects, it will fail. Editing and removing them is also a pain, making the SQL Source Control plugin fail on many occasions. Redgate’s documentation on them is not great either; it seems to assume they work 100% of the time and does not provide much in the way of troubleshooting.


I hope you enjoyed this article, that it got some of you out of a rut and inspired a few more to face this sometimes daunting issue. It has been an amazing experience source controlling a legacy database and we have certainly learned a lot. The Redgate tools have definitely made a difference as well. And after the 100th cup of coffee working on this, I can safely say the reaction is more like this:

(I know, I know, you hardcore Futurama fans, this gif is a tiny bit out of context but you get the point :)
