MySQL – High Disk I/O after batch deleting from InnoDB table


I needed to delete many records from a 500GB InnoDB MySQL table. I thought I made the right choice by running many DELETE statements to remove the data in batches.

I deleted 10,000 rows at a time, and it took around 28,000 iterations (280 million rows deleted) to complete. I paused between each delete so the read slave would not fall too far behind.
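Each iteration looked roughly like this (a sketch only; the actual WHERE condition is an assumption, since the original statements are not shown):

```sql
-- Hypothetical sketch of one batch-delete iteration.
-- The table has no primary key, so each batch is bounded with LIMIT.
DELETE FROM tablename
WHERE column4 < '2016-01-01 00:00:00'
LIMIT 10000;
```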

Now that the deletions are complete, two MySQL threads are hammering the disks: one is reading, the other is writing. 85% of the I/O is writes.

I've read that there is a reordering operation that happens after a delete, but I'm not sure if that's what this is. The processlist doesn't show anything out of the ordinary. Monitoring tools that have been watching the DB for some time don't show much of anything different either, except that queries are taking longer, disk I/O is maxed out, and there is an increase in dirty buffer pool pages.

Is there any way to find out what is going on and causing the high disk I/O? I'm also looking for a way to make it stop, or to decrease its priority, since the database is working fine otherwise.
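The dirty-page and flushing activity mentioned above can be watched directly with standard status commands (a starting point for observation, not a definitive diagnosis):

```sql
-- Counters for dirty pages and background flushing
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_dirty';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_flushed';

-- Detailed internal state, including pending reads/writes and purge activity
SHOW ENGINE INNODB STATUS\G
```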

MySQL info: mysql Ver 14.14 Distrib 5.6.30-76.3, for Linux (x86_64) using 6.2

The table in question does not have a primary key or any unique identifier. It has ~925 million rows in it after the delete.


Server has 32GB of ram.

The rest of the MySQL data on this server is 150GB.

CREATE TABLE tablename (
column1 bigint(11) NOT NULL DEFAULT '0',
column2 bigint(11) NOT NULL DEFAULT '0',
column3 bigint(11) NOT NULL DEFAULT '0',
column4 datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
column5 varchar(64) DEFAULT NULL,
column6 int(11) NOT NULL DEFAULT '32',
column7 varchar(64) DEFAULT NULL,
column8 bigint(20) DEFAULT '0',
column9 bigint(20) DEFAULT '0',
column10 bigint(20) DEFAULT '5',
column11 date DEFAULT NULL,
column12 enum('src','dst','unknown') NOT NULL DEFAULT 'unknown',
UNIQUE KEY uniquename (column1,column2,column5,column7,column4,column12),
KEY column1 (column1,column2,column5,column7,column4),
KEY column11_index (column11)
);

Column names have been changed to generic names.

The goal of doing small batch deletes was to avoid interfering with normal daily operations on the database.

There is a read slave; it replayed all the delete operations, and it is not suffering from the same high disk I/O that the master is experiencing.


The server has been rebooted. Once the MySQL service started (even with no clients connected), it continued to use all available disk resources to write to disk.

Is there any way to tell why it's writing to disk constantly?

Best Answer

First and foremost, if your data had some sort of identifier (i.e. a PRIMARY KEY or TIMESTAMP column), then you could have done:

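The original code snippets are missing from the answer; a hypothetical sketch of what such keyed batch deletes might look like (the `id` and `created_at` column names are assumptions, not columns from the question's table):

```sql
-- Hypothetical: delete in primary-key ranges
DELETE FROM table_name WHERE id BETWEEN 1 AND 10000;

-- or, hypothetical: delete by timestamp in limited batches
DELETE FROM table_name WHERE created_at < '2016-01-01' LIMIT 10000;
```

Ranging over an indexed column keeps each batch a cheap index scan instead of a full-table scan.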

As for what to do now, I would run the following checks:

  1. In a loop (e.g. WHILE TRUE), make sure that MySQL isn't hanging:

mysql -h $HOST -P $port -uroot -e "show processlist;"

  2. Check the row count to see if it is increasing or decreasing:

SELECT COUNT(*) FROM table_name;

  3. If the row count is increasing, then you should check what other processes are running: ps fux | grep mysql

However, if the row count is decreasing or staying constant, then it may mean that your deletes have not fully finished.

I wish I could be more help, but your question felt incomplete and thus wasn't very clear.