I use PostgreSQL. I have many database tables whose data is regularly "replaced entirely": I first clear (delete) all existing records, then insert all-new ones.
Previously, I did this with:
DELETE FROM the_table;
However, this kept the serial (id) column's sequence increasing all the time, so it will eventually reach its maximum value. I'm still unsure what happens at that point; I hope it just resets automatically, but I fear it probably makes the table stop working and start spitting out errors whenever new records are INSERTed.
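For anyone wanting to check how close a given table is to that point: the sequence backing a serial column can be inspected directly. A minimal sketch, assuming the default sequence name `the_table_id_seq` (PostgreSQL names it `<table>_<column>_seq` by default):

```sql
-- Current position of the sequence backing the id column
-- (assumes the default sequence name the_table_id_seq).
SELECT last_value FROM the_table_id_seq;

-- Compare against the column type's ceiling:
--   serial    -> 2147483647            (max int4)
--   bigserial -> 9223372036854775807   (max int8)
```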
To solve this for these tables, I have started doing this instead:
TRUNCATE TABLE the_table RESTART IDENTITY [CASCADE];
This removes all rows like DELETE FROM does, but also resets the serial column. Great. Exactly what I wanted.
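A minimal sketch of that reset behavior, using a hypothetical demo table:

```sql
-- Hypothetical demo table
CREATE TABLE the_table (id serial PRIMARY KEY, payload text);

INSERT INTO the_table (payload) VALUES ('a'), ('b'), ('c');  -- ids 1, 2, 3

-- Clear everything and reset the id sequence
TRUNCATE TABLE the_table RESTART IDENTITY;

INSERT INTO the_table (payload) VALUES ('d');  -- id starts over at 1
```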
However, I also have many tables where the data needs to be preserved and cannot be cleared like the ones described above. I obviously cannot run TRUNCATE TABLE on those, but they too will eventually hit the serial or bigserial maximum.
It doesn't matter if this "probably will take years". Just knowing that this will/might one day happen is enough to give me constant stress. Is there something I can do to avoid that scenario? Does PG handle this automatically? Can it be told somehow how to behave when this inevitably happens?
Since many old records will likely have been deleted by that point, I would like it to start reusing those unused ids automatically. For example, if ids 0-1000 are unused when the maximum serial value is reached, then instead of failing it would start using 0, 1, 2, 3, 4… up to 1000, and then look ahead for the next unused id after that.
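To make the idea concrete, here is a rough sketch (in Python, with the set of used ids held in memory) of the "scan for the first unused id" behavior I have in mind. This is purely an illustration of the idea; `next_free_id` is my own name, not anything PostgreSQL provides:

```python
def next_free_id(used_ids: set[int], start: int = 0) -> int:
    """Return the smallest id >= start that is not currently in use.

    Illustrates the wrap-around idea: once the sequence maximum is
    hit, start scanning from the bottom of the id range for gaps.
    """
    candidate = start
    while candidate in used_ids:
        candidate += 1
    return candidate

# Suppose ids 0, 1, 3, 4, 5 are taken and 2 was deleted long ago:
used = {0, 1, 3, 4, 5}
print(next_free_id(used))  # → 2
```

In a real database this scan would of course have to be a query against the table rather than an in-memory set, and it would need to handle concurrent inserts.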
If the above idea is stupid, I want to hear what you all do about this problem.