PostgreSQL – use hypertables in TimescaleDB merely to get better insert rates


I have a PostgreSQL database into which I am running a large bulk load. I want this load to be as fast as possible, and I'm already using the COPY command, etc.
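For context, the load looks roughly like this (the table, columns, and file path here are made up for illustration):

```sql
-- Hypothetical target table and CSV file; a minimal COPY-based bulk load.
COPY readings (time, device_id, value)
FROM '/path/to/readings.csv'
WITH (FORMAT csv, HEADER true);
```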

I have been reading about timescaledb and how it offers improved insert performance. However, I wonder if there is any downside to using hypertables instead of regular tables, if I only care about insert performance?

Best Answer

(Timescale person here.)

Yes, you should be able to get much higher insert rates into a TimescaleDB hypertable than into a normal table.
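For reference, converting a plain table into a hypertable is a single function call; inserts then route automatically into time-partitioned chunks. The table and column names below are hypothetical:

```sql
-- A plain table, created as usual.
CREATE TABLE readings (
    time      TIMESTAMPTZ      NOT NULL,
    device_id INTEGER          NOT NULL,
    value     DOUBLE PRECISION
);

-- Convert it into a hypertable partitioned on the time column.
-- Existing COPY/INSERT statements work unchanged afterwards.
SELECT create_hypertable('readings', 'time');
```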

The primary downside of hypertables is that they expose a couple of limitations related to the way we do internal scaling. In particular:

  • We only allow a key to be marked as UNIQUE if it includes all partitioning keys (in its prefix). So if you partition on a time column, the time column itself could be unique, or you could build a unique composite key on (time, device_id). But this means you can't use a standard auto-increment id as the primary key (note that primary keys are by definition UNIQUE). We find that this typically doesn't make sense for time-series data anyway.

  • You can define a foreign-key constraint from your hypertable to a regular table, but we don't currently allow the opposite: a FK from a regular table to a hypertable. (But, as with the UNIQUE constraint limitation above, this rarely makes sense or can be designed around.)
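To make both limitations concrete, here is a sketch (all table and column names are hypothetical, assuming a regular `devices` table with an integer primary key):

```sql
-- A regular (non-hypertable) metadata table.
CREATE TABLE devices (
    id   INTEGER PRIMARY KEY,
    name TEXT
);

-- Allowed: the primary key includes the partitioning column (time),
-- and the FK points FROM the hypertable TO the regular table.
CREATE TABLE readings (
    time      TIMESTAMPTZ NOT NULL,
    device_id INTEGER     NOT NULL REFERENCES devices (id),
    value     DOUBLE PRECISION,
    PRIMARY KEY (time, device_id)
);
SELECT create_hypertable('readings', 'time');

-- Not allowed: a surrogate key that omits the partitioning column.
-- A table like this would be rejected when converted to a hypertable:
--   CREATE TABLE readings (id SERIAL PRIMARY KEY, time TIMESTAMPTZ, ...);
-- Also not allowed: a regular table declaring
--   REFERENCES readings (...)   -- FK from regular table to hypertable
```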

If you have other questions, the Docs or community Slack are great resources.