Nobody wants slow SQL queries, and database administrators and SQL developers know all sorts of techniques to make their databases perform as fast as possible. Yet there's one easy tactic that some DBAs overlook or may not even know about: deleting and updating in small batches.
How Deleting and Updating in Batches Work
A seasoned SQL Server consultant notes that updating or deleting huge chunks of data from massive tables can be a challenge when done improperly. The problem is that each statement runs as a single transaction. If something goes wrong with your system, or you have to kill the operation while it's still running, the system must roll back the entire transaction, which can take a long time.
That rollback can also block other transactions, which in turn bottlenecks your system. For example, if you run a delete routine a couple of times a day that removes data from 16 tables in a single transaction, the system locks all 16 tables until every delete operation has finished.
Deleting and updating in smaller, more manageable batches solves this problem in several ways. For one, if a transaction fails or is stopped for some reason, only a few rows have to be rolled back, bringing your database back online much faster. And while one small batch is writing to disk, other transactions can still carry on, which significantly improves concurrency.
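As a rough sketch of the batching idea (using Python's built-in sqlite3 module rather than SQL Server, with a hypothetical events table and archived flag; the batch size of 500 is an arbitrary illustration):

```python
import sqlite3

# Hypothetical table: 10,000 rows, half of them flagged for deletion.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, archived INTEGER)")
conn.executemany("INSERT INTO events (archived) VALUES (?)",
                 [(i % 2,) for i in range(10_000)])
conn.commit()

BATCH = 500
total = 0
while True:
    # Delete at most BATCH matching rows per pass instead of all at once.
    cur = conn.execute(
        "DELETE FROM events WHERE id IN "
        "(SELECT id FROM events WHERE archived = 1 LIMIT ?)",
        (BATCH,),
    )
    conn.commit()  # each batch commits as its own short transaction
    total += cur.rowcount
    if cur.rowcount < BATCH:
        break  # fewer rows than a full batch means we're done

print(total)  # prints 5000
```

If any single pass fails, only that batch of 500 rows is rolled back; everything committed in earlier passes stays deleted, and other sessions can read and write between batches.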
You can also break each table's deletes and updates into individual transactions, so that each transaction locks only one table. That frees up the other tables, reduces blocking, and lets other transactions keep running. Nor do you have to do all your deletes and updates in a single day, especially if you're just archiving data. You can spread these operations over as long a window as you can afford, which gives you more room to keep your system operating at peak efficiency.
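The one-transaction-per-table idea can be sketched the same way (again with sqlite3 and hypothetical table names, not the consultant's actual schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
tables = ["orders", "invoices", "shipments"]  # hypothetical archive tables
for t in tables:
    conn.execute(f"CREATE TABLE {t} (id INTEGER PRIMARY KEY, expired INTEGER)")
    conn.executemany(f"INSERT INTO {t} (expired) VALUES (?)", [(1,)] * 100)
conn.commit()

# One transaction per table: a failure midway rolls back only the current
# table's delete, and the locks on each table are released before the
# next table is touched, so other sessions are blocked far less.
for t in tables:
    conn.execute(f"DELETE FROM {t} WHERE expired = 1")
    conn.commit()
```

Contrast this with wrapping all three deletes in one transaction, which would hold locks on every table until the final commit.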