# SQL Server – How to handle too many inserts

Tags: insert, sql-server

I have a database which is accessed from remote locations around the world, over links with very long delays and poor transfer speeds. I currently transfer the data inside a transaction, since either all or none of the data should be inserted. The values are inserted one at a time; 200 inserts take a total of about 37 seconds before the commit.
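For reference, the client-side pattern is roughly the following (a simplified sketch; the table and column names are invented). Each `ExecuteNonQuery` call is a separate network round trip, which is why the latency multiplies out to ~37 seconds for 200 rows:

```csharp
using System.Data.SqlClient;

// Sketch of the current approach: one round trip per row,
// all inside a single transaction so it is all-or-nothing.
using (var conn = new SqlConnection(connectionString))
{
    conn.Open();
    using (var tx = conn.BeginTransaction())
    {
        foreach (var row in rows)   // ~200 rows per batch
        {
            using (var cmd = new SqlCommand(
                "INSERT INTO table_A (created_at, payload) VALUES (@t, @p)",
                conn, tx))
            {
                cmd.Parameters.AddWithValue("@t", row.Timestamp);
                cmd.Parameters.AddWithValue("@p", row.Payload);
                cmd.ExecuteNonQuery();   // one round trip per row
            }
        }
        tx.Commit();   // nothing becomes visible until here
    }
}
```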

The problem is that as many as 10 sites may be continuously trying to insert data within a 2-minute interval. If they fail (due to concurrency errors) they retry until they succeed. This adds up to an impossible situation, as they start building a backlog.

I understand I can use the snapshot isolation level to allow reads in parallel, but how should I handle the parallel inserts? How do banks and other high-volume systems handle this, since they would also build a backlog if they simply queued everything?

I use SQL Server, and the remote clients use C# to talk to it.

The inserts are of 10 different types, and they are linked in pairs. The first insert of each pair is a generic one, the same for all types, and goes into a single table – let's call it table_A.
The second insert is specific to the type of data and goes into 1 of 10 tables. The data in those tables varies greatly, but each of them also stores the row index of the first insert in table_A. That is how the pair is linked.
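To make the linking concrete, the pair looks roughly like this (a simplified sketch with invented names; it assumes table_A has a BIGINT IDENTITY column called `id`). The generic insert returns its new row id, and the type-specific insert stores that id:

```csharp
using System.Data.SqlClient;

// Sketch of one linked pair; table, column, and variable names
// are hypothetical. Assumes conn/tx are an open connection and
// transaction, as in the batch loop described above.
long parentId;

// First insert: the generic row, common to all 10 types.
// OUTPUT INSERTED.id returns the new identity value in the
// same round trip as the insert.
using (var cmd = new SqlCommand(
    "INSERT INTO table_A (site_id, created_at) " +
    "OUTPUT INSERTED.id VALUES (@site, @t)", conn, tx))
{
    cmd.Parameters.AddWithValue("@site", siteId);
    cmd.Parameters.AddWithValue("@t", timestamp);
    parentId = Convert.ToInt64(cmd.ExecuteScalar());
}

// Second insert: the type-specific row, into 1 of 10 tables,
// carrying the parent row's id as the link back to table_A.
using (var cmd = new SqlCommand(
    "INSERT INTO table_B_type3 (table_A_id, value) VALUES (@parent, @v)",
    conn, tx))
{
    cmd.Parameters.AddWithValue("@parent", parentId);
    cmd.Parameters.AddWithValue("@v", value);
    cmd.ExecuteNonQuery();
}
```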

From a good connection the whole transfer takes about 1 second, which is no problem. The problem is the remote connections, where it takes about 40 seconds. Table_A contains about 10 million rows and is well indexed. All the other tables are somewhat smaller and also well indexed.