Improve SQLite mapping query


I have an SQLite DB with a couple of tables. One of them looks like this:

(id integer primary key, location integer, name text, desc text, version integer)

select id, otherId from table where location in (?,?,?) and name=? and desc=? and version=?

The order is not important, and there are more columns that are not relevant.

I have a query that maps data from an external system to an inner id. The mapping is done on all the above fields. I'm using the equality operator on every field except location, for which I have a list of possible values (so I use the IN operator). The list is not constant in values or length.
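For illustration, here is a minimal sketch of that lookup in Python with the standard sqlite3 module (the table and column names follow the schema shown further down; the variable-length IN list is handled by generating one `?` placeholder per value):

```python
import sqlite3

def map_to_inner_id(conn, locations, name, desc, version):
    """Look up the inner id for one external record.

    The location list varies in length between calls, so build one
    '?' placeholder per value and bind everything positionally.
    """
    placeholders = ",".join("?" * len(locations))
    sql = ("SELECT id, otherId FROM t "
           f"WHERE location IN ({placeholders}) "
           "AND name=? AND desc=? AND version=?")
    return conn.execute(sql, (*locations, name, desc, version)).fetchall()

# Tiny demo with an in-memory copy of the schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(id INTEGER PRIMARY KEY, otherId INTEGER, "
             "location INTEGER, name TEXT, desc TEXT, version INTEGER, "
             "createDate INTEGER, removeDate INTEGER)")
conn.execute("INSERT INTO t(id, otherId, location, name, desc, version) "
             "VALUES (1, 42, 7, 'a', 'd', 3)")
rows = map_to_inner_id(conn, [5, 6, 7], "a", "d", 3)
print(rows)  # [(1, 42)]
```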

I receive data from the external system once a minute, so latency is important.
This query runs 550 times on average for each batch (the count ranges between 500 and 600 depending on the hour).

I started without any index and the query took 400 ms on average, so I added an index on location alone and it dropped to 30 ms.

I created the index using the following statement:

create index mapping_index on table(location)

I tried adding more columns to the index to improve performance, but nothing helped.

The data is mostly read-only, with few inserts (about 20 per day) and few updates (about 50 per day, including soft deletes). The updates never touch the columns used for mapping.

There are almost 25,000 rows in the table. There should always be exactly one row that matches the mapping, but I need to be able to identify errors such as no matching row or more than one.
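The "exactly one row" check can be done cheaply without fetching every match: a LIMIT 2 is enough to distinguish the three cases (no match, exactly one, more than one). A sketch, again assuming Python's sqlite3 and the schema below:

```python
import sqlite3

def map_one(conn, locations, name, desc, version):
    """Return the single matching inner id, or raise on 0 or >1 matches.

    LIMIT 2 fetches just enough rows to tell the three cases apart,
    so the database never scans past the second match.
    """
    placeholders = ",".join("?" * len(locations))
    rows = conn.execute(
        f"SELECT id FROM t WHERE location IN ({placeholders}) "
        "AND name=? AND desc=? AND version=? LIMIT 2",
        (*locations, name, desc, version)).fetchall()
    if not rows:
        raise LookupError("mapping error: no matching row")
    if len(rows) > 1:
        raise LookupError("mapping error: more than one matching row")
    return rows[0][0]

# Demo against an in-memory copy of the schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(id INTEGER PRIMARY KEY, otherId INTEGER, "
             "location INTEGER, name TEXT, desc TEXT, version INTEGER, "
             "createDate INTEGER, removeDate INTEGER)")
conn.execute("INSERT INTO t(id, otherId, location, name, desc, version) "
             "VALUES (1, 42, 7, 'a', 'd', 3)")
inner_id = map_one(conn, [5, 6, 7], "a", "d", 3)
print(inner_id)  # 1
```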

This is the heaviest part of my system; almost every operation goes through it, and I need to improve it.

I'm open to new technologies, but I'd prefer to stay with SQLite for simplicity.


The complete relevant DB schema:

CREATE TABLE t(id INTEGER PRIMARY KEY, otherId INTEGER, location INTEGER, name TEXT, desc TEXT, version INTEGER, createDate INTEGER, removeDate INTEGER);
CREATE INDEX mapping_index ON t(location,name,desc,version);


0|0|0|SEARCH TABLE t USING INDEX mapping_index (location=? AND name=? AND desc=? AND version=?)

I just realized I wrote something confusing. The whole transaction takes about 30 ms; the query itself takes between 1 and 2 ms (with strange spikes of 100 ms). The query runs on average 15 times per transaction (500 in total each time).

Best Answer

To find matching rows, the database must first search the index. If the WHERE clause also uses columns not in the index, the database additionally has to read each candidate row from the actual table to check those other columns.

Therefore, you would get the largest further speedup by adding all columns from the WHERE clause to the index.

You can also create a covering index by additionally appending all columns from the SELECT clause to the index. Then no rows need to be read from the table at all. This might halve the query time.
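With the schema above, that could look like the following sketch (shown via Python's sqlite3 so the query plan is visible; note that `id` is the rowid and is already stored in every index entry, so only `otherId` needs to be appended):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(id INTEGER PRIMARY KEY, otherId INTEGER, "
             "location INTEGER, name TEXT, desc TEXT, version INTEGER, "
             "createDate INTEGER, removeDate INTEGER)")
# All WHERE columns first, then the SELECT-only column, so the
# query can be answered from the index alone.
conn.execute("CREATE INDEX mapping_index ON "
             "t(location, name, desc, version, otherId)")
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT id, otherId FROM t "
    "WHERE location IN (?,?,?) AND name=? AND desc=? AND version=?",
    (1, 2, 3, "n", "d", 1)).fetchall()
# The plan's detail column should report a COVERING INDEX search.
print(plan[0][3])
```

If SQLite reports `SEARCH t USING COVERING INDEX mapping_index (...)`, the table itself is never touched during the lookup.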

Adding columns makes the index larger, which might slow down inserts and updates, but with your write volume that is probably not a problem.