I am attempting to run DBCC SHRINKFILE in 1GB chunks against a database where 95% of the data has been archived and deleted. I'm left with a 235GB file, of which only 9GB is data/indexes, and I want to shrink it down to 50GB. I know that shrinking database files is bad practice, it causes fragmentation etc. As part of the data purge/shrink we also run an index rebuild script.
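For context, the chunked shrink looks roughly like this, a minimal sketch assuming a hypothetical logical file name `MyDB_Data` (the loop steps the file down 1GB at a time toward the 50GB target):

```sql
-- Sketch only: shrink in 1GB chunks toward a 50GB (51200MB) target.
DECLARE @TargetSizeMB int = 51200;
DECLARE @CurrentSizeMB int;

SELECT @CurrentSizeMB = size / 128          -- size is in 8KB pages
FROM sys.database_files
WHERE name = N'MyDB_Data';                  -- hypothetical file name

WHILE @CurrentSizeMB > @TargetSizeMB
BEGIN
    SET @CurrentSizeMB = CASE
        WHEN @CurrentSizeMB - 1024 < @TargetSizeMB THEN @TargetSizeMB
        ELSE @CurrentSizeMB - 1024          -- step down 1GB per iteration
    END;
    DBCC SHRINKFILE (N'MyDB_Data', @CurrentSizeMB);
END;
```

Shrinking in small chunks keeps each DBCC operation short and interruptible rather than issuing one long-running shrink to the final size.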
When I run my dbcc shrinkfile script against the database on my own workstation (quad core, 12GB RAM, 2 x SATA drives), the shrink takes around 8-10 minutes.
When running the identical code against an identical copy of the database (post data purge) in our testing environment (80+ cores, 128GB RAM, SSD SAN), it takes 70 minutes. Note that there is little activity on this server while the shrink is running. It has been run 4 times with identical results.
I then took a different approach: moving the remaining 9GB to a different filegroup and physical file. Running DBCC SHRINKFILE on the now-empty 230GB file to shrink it down to 50GB takes under 1 minute on my own workstation.
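A sketch of this second approach, using hypothetical database, filegroup, table, and path names throughout (the data move is done by rebuilding each clustered index onto the new filegroup; one example table shown):

```sql
-- Sketch only: create a new filegroup/file, move data into it,
-- then shrink the original, now nearly empty, data file.
ALTER DATABASE MyDB ADD FILEGROUP FG_New;
ALTER DATABASE MyDB ADD FILE (
    NAME = N'MyDB_Data2',
    FILENAME = N'D:\Data\MyDB_Data2.ndf',   -- hypothetical path
    SIZE = 10GB
) TO FILEGROUP FG_New;

-- Rebuilding a clustered index ON the new filegroup physically
-- moves that table's data; repeat per table.
CREATE UNIQUE CLUSTERED INDEX PK_SomeTable
ON dbo.SomeTable (Id)
WITH (DROP_EXISTING = ON)
ON FG_New;

-- Finally, shrink the original file down to 50GB (51200MB):
DBCC SHRINKFILE (N'MyDB_Data', 51200);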
Using this same approach in the testing environment, it again takes 70+ minutes.
I took a snapshot of the wait stats before and after, per Brent Ozar's script, during the 70-minute run in the test environment, and the wait types returned show nothing to be concerned about. Top 3 rows below:
Second Sample Time       Sample Duration (s)  wait_type     Wait Time (s)  Number of Waits  Avg ms Per Wait
2013-05-28 11:24:22.893  3600                 WRITELOG      160.8          143066           1.1
2013-05-28 11:24:22.893  3600                 CXPACKET      20.9           13915            1.5
2013-05-28 11:24:22.893  3600                 PAGELATCH_EX  11.1           443038           0.0
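For reference, this is not Brent Ozar's actual script, but a minimal sketch of the same before/after technique it uses: snapshot `sys.dm_os_wait_stats` into a temp table, run the workload, then diff:

```sql
-- Sketch only: before/after wait-stats diff around a workload.
SELECT wait_type, wait_time_ms, waiting_tasks_count
INTO #before
FROM sys.dm_os_wait_stats;

-- ... run the DBCC SHRINKFILE workload here ...

SELECT a.wait_type,
       (a.wait_time_ms - b.wait_time_ms) / 1000.0       AS wait_time_s,
       a.waiting_tasks_count - b.waiting_tasks_count    AS number_of_waits,
       CASE WHEN a.waiting_tasks_count = b.waiting_tasks_count THEN 0
            ELSE (a.wait_time_ms - b.wait_time_ms) * 1.0
                 / (a.waiting_tasks_count - b.waiting_tasks_count)
       END                                              AS avg_ms_per_wait
FROM sys.dm_os_wait_stats AS a
JOIN #before AS b ON b.wait_type = a.wait_type
WHERE a.wait_time_ms > b.wait_time_ms
ORDER BY wait_time_s DESC;
```

The diff matters because `sys.dm_os_wait_stats` is cumulative since instance startup, so the raw numbers alone don't isolate the shrink window.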
The Windows event log shows nothing unusual. I am head-scratching at this point as to why it takes so long on the ninja hardware compared to my standalone workstation.