# PostgreSQL – a newer identical index is used, but the older one no longer is

postgresql

Postgres version 11.2

I've got a table with about 30 million rows and a particular index that, up until recently, was actively being used.

Then it stopped being used, and the query that the index was made for is now doing sequential scans.

I discovered this while investigating the query: I forgot to check which indexes the table already had, so I made a new one, which turned out to be identical in construction to an existing one.

`EXPLAIN` showed that the new one was being used just fine. After removing it again, running `VACUUM ANALYZE` on the table, and issuing the query once more, the old index was still not being used; a sequential scan was chosen instead.

Dropping and re-creating the old index worked: it was suddenly being used again.
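For completeness, the rebuild amounted to something like the following sketch. The index name and definition here are placeholders, not the real ones; note that in version 11 a plain `REINDEX` takes an exclusive lock, which is why the drop-and-recreate route with `CONCURRENTLY` can be preferable on a busy table:

```sql
DROP INDEX my_suspect_index;                  -- placeholder name

-- CONCURRENTLY avoids blocking writes while the index is rebuilt
CREATE INDEX CONCURRENTLY my_suspect_index
    ON atable (acolumn);                      -- placeholder definition
```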

It stayed like this for a few weeks and now I'm at this scenario again.

What's going on here?

Googling for this just turns up the usual "why Postgres may decide not to use your index" articles, which don't quite match the scenario at hand, I think.

Also, I used the fairly new amcheck module (https://www.postgresql.org/docs/11/amcheck.html), and it comes back fine for this index.
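For reference, the amcheck verification looks roughly like this; the index name below is a placeholder for the real one:

```sql
-- amcheck ships with PostgreSQL 11 as a contrib extension
CREATE EXTENSION IF NOT EXISTS amcheck;

-- Verify the B-tree invariants of the suspect index
SELECT bt_index_check('my_suspect_index'::regclass);   -- placeholder name

-- Stricter variant that also checks parent/child page relationships;
-- it takes stronger locks, so run it with care on a busy table
SELECT bt_index_parent_check('my_suspect_index'::regclass);
```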

EDIT:

1. Size of the index not being used is 101MB.
2. Size of the index being used is 16MB.

Query used for determining this: https://wiki.postgresql.org/wiki/Index_Maintenance#Index_size.2Fusage_statistics
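For the record, the sizes and scan counts can also be pulled directly from the statistics views with something like this (the index names are placeholders):

```sql
-- pg_relation_size gives the on-disk size of each index;
-- idx_scan counts how often the planner has actually used it
SELECT indexrelname                                 AS index,
       pg_size_pretty(pg_relation_size(indexrelid)) AS size,
       idx_scan                                     AS scans
FROM   pg_stat_user_indexes
WHERE  indexrelname IN ('old_index', 'new_index');  -- placeholder names
```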

While both are small, that seems like a significant difference, and I wonder where it came from.

There are other indexes on that table, over 10 GB in size, that show as being used…

1. I wasn't sure how to find that, so I just ran `select * from pg_index where indisvalid = false;`, which returned an empty list.

2. Here is the plan output when the index is used: https://explain.depesz.com/s/H5X9y

3. Here is the plan output when the index is not used: https://explain.depesz.com/s/n6bP

PostgreSQL grossly overestimates how many rows will be returned by the index scan. It estimates the cost of the index scan and the sequential scan to be about the same, so the slightly larger size of the index, once it gets bloated, probably tips the scales.
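If you want to verify the bloat theory, the pgstattuple extension (shipped with PostgreSQL) can report how densely packed the index's leaf pages are; this is a sketch with a placeholder index name:

```sql
CREATE EXTENSION IF NOT EXISTS pgstattuple;

-- A freshly built B-tree index has avg_leaf_density around 90%
-- (the default fillfactor); a much lower value, or high
-- leaf_fragmentation, points at bloat
SELECT avg_leaf_density, leaf_fragmentation
FROM   pgstatindex('my_suspect_index');   -- placeholder name
```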

You should improve the estimates. Since it is a simple inequality condition on a timestamp, PostgreSQL should be able to come up with a decent estimate.

First, run an `ANALYZE` on the table and see if that does the trick. If it does, make sure that autoanalyze runs more often:

```sql
ALTER TABLE atable SET (autovacuum_analyze_scale_factor = 0.02);
```
If not, run

```sql
SET default_statistics_target = 1000;
```
Then ANALYZE again and see if that improves the estimate.
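One way to judge the estimate is to compare the planner's predicted row count against reality; the table, column, and condition below are placeholders standing in for the real query:

```sql
-- Compare "rows=" in the plan with "actual ... rows=" in the output;
-- a large gap means the statistics are still off
EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM   atable
WHERE  acolumn > now() - interval '1 day';   -- placeholder condition
```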

If yes, use a statement like the following to persist the setting:

```sql
ALTER TABLE atable ALTER acolumn SET STATISTICS 1000;
```
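After the next `ANALYZE`, you can sanity-check what the planner now sees in `pg_stats` (using the same placeholder table and column names as above):

```sql
-- With a statistics target of 1000, histogram_bounds can hold
-- up to 1000 buckets, giving the planner a finer-grained picture
-- of the column's value distribution
SELECT n_distinct, histogram_bounds
FROM   pg_stats
WHERE  tablename = 'atable'
  AND  attname   = 'acolumn';
```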