SQL Server – Fastest way to shrink a data file

migration, shrink, sql-server, sql-server-2008-r2

I have a 4 TB SQL Server 2008 R2 database. I'm going to move this DB to a new server running SQL Server 2012. (The application does not support SQL Server versions newer than 2012, but that is not the subject of my question.)

Before that, I'm going to move almost all objects from the 4 TB main data file to multiple data files in the same filegroup, so I can reclaim the free space in the 4 TB file.

After moving the objects, reclaiming the free space with DBCC SHRINKFILE in 256 MB increments takes forever.
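An incremental shrink of the kind being described might look like the following sketch, where `MainData` is a placeholder for the logical file name (check `sys.database_files` for the real one) and the target size is an assumption:

```sql
-- Sketch of an incremental DBCC SHRINKFILE loop; 'MainData' and the
-- target size are placeholders, not values from the question.
DECLARE @TargetMB int = 3500000;   -- desired final size in MB (assumption)
DECLARE @StepMB   int = 256;       -- shrink in 256 MB increments
DECLARE @CurrentMB int;

SELECT @CurrentMB = size / 128     -- size is reported in 8 KB pages
FROM sys.database_files
WHERE name = N'MainData';

WHILE @CurrentMB - @StepMB > @TargetMB
BEGIN
    SET @CurrentMB = @CurrentMB - @StepMB;
    DBCC SHRINKFILE (N'MainData', @CurrentMB);
END;
```

Shrinking in small steps like this keeps each operation interruptible, but every step still has to move pages toward the front of the file, which is why it is so slow on a 4 TB file.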

What's the fastest way to get that space back to the OS?
Is there a way to use a backup restore to shrink the file?

Best Answer

Is there a way to use a backup restore to shrink the file?

If this means you are fine with it being an offline operation, you could combine the shrink with the move, and skip shuffling the data into other files in the filegroup (unless you have other reasons to want to do that), by migrating the data to a new database during a maintenance window:

  1. Script out the schema of the current DB.
  2. Create the new DB in the 2012 instance using that script, but just the tables and their clustered indexes. Leave out foreign key constraints (this makes the data copy less faff), non-clustered indexes (this makes the bulk data import faster), triggers (you don't want these firing during the data copy), views (these may have index hints that reference indexes you have skipped creating) and procedures & functions (as with views).
  3. If you want a new file/filegroup structure in the new DB, arrange this now.
  4. Copy the data in bulk to the new DB. Use SSIS, or script this with a linked-server setup, rather than doing it ad hoc; that way you can tweak and repeat the process easily if something goes wrong, or run it as a test before the real event.
  5. Create all the non-clustered indexes, then the foreign key constraints, then the triggers, then the views, procedures & functions that you skipped in step 2. If you created a multi-file arrangement in step 3, take care that the indexes go where you want them.
  6. Test the new DB.
  7. Switch the app over to the new DB and put the old one out of reach (rename it and/or make it read-only, but don't drop it, in case you need to roll back if problems are found early).
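The bulk copy in step 4 might look like this, assuming a linked server named `OLD2008` pointing at the source instance and hypothetical table/column names:

```sql
-- Hypothetical names: OLD2008 linked server, NewDb target, dbo.BigTable
-- existing on both sides with only its clustered index (per step 2).
-- WITH (TABLOCK) lets the insert qualify for minimal logging when the
-- target DB is in the SIMPLE or BULK_LOGGED recovery model and the
-- target table is empty.
INSERT INTO NewDb.dbo.BigTable WITH (TABLOCK)
    (Id, Payload, CreatedAt)
SELECT Id, Payload, CreatedAt
FROM OLD2008.OldDb.dbo.BigTable;
```

Repeat per table (a script generated from `sys.tables` makes the process easy to rerun, which matters for the tweak-and-repeat point in step 4).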

You should now have a complete copy of the DB in the SQL 2012 instance without having modified the 2008 instance's copy at all (no juggling data between files and such), so the process should be both faster and safer. Because you are not modifying the source DB in any way, you have an automatic rollback plan (just don't use the new DB if something fails; try again later). And because the data is imported fresh into the new DB, there is nothing to shrink: it all went into the target files at the size it should be.

I would strongly recommend doing this with a copy of the source DB in a test environment first, so you can verify the process and run your applications against the new DB to make sure there are no unexpected regressions before you do this in production.
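Restoring that test copy is a standard backup/restore; all paths and logical file names below are placeholders:

```sql
-- Restore a copy of the source DB into the test environment.
-- Paths and logical file names are placeholders; check the real ones
-- with RESTORE FILELISTONLY against the backup first.
RESTORE DATABASE SourceDbTest
FROM DISK = N'D:\Backup\SourceDb.bak'
WITH MOVE N'SourceDb_Data' TO N'D:\Data\SourceDbTest.mdf',
     MOVE N'SourceDb_Log'  TO N'L:\Log\SourceDbTest.ldf',
     STATS = 5;
```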

For example, performance may drop due to differences in query planner estimates that require index tweaks and so on. As the application is officially supported on SQL 2012 one would hope your supplier has dealt with anything that might crop up, but it is still worth being paranoid and checking.

Another option that may be much less faff: if creating a .bacpac using a recent enough version of SSMS supports SQL instances as old as 2008, then you may be able to just do this against the source database and import it into the SQL 2012 instance. This effectively performs the process described above.
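The .bacpac route can also be driven from the command line with SqlPackage (which ships with SSMS/DacFx); server and database names here are hypothetical:

```
REM Hypothetical names: OLD2008 source instance, NEW2012 target instance.
REM Export a .bacpac from the source database...
SqlPackage /Action:Export /SourceServerName:OLD2008 /SourceDatabaseName:AppDb /TargetFile:AppDb.bacpac

REM ...then import it into the SQL 2012 instance.
SqlPackage /Action:Import /TargetServerName:NEW2012 /TargetDatabaseName:AppDb /SourceFile:AppDb.bacpac
```

Like the manual migration, the import writes the data fresh into new files, so there is no free space to shrink afterwards.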