Throttling

Applies to ✅ Open Source Edition   ✅ Express Edition   ✅ Professional Edition   ✅ Enterprise Edition

Not all RDBMS offer the same optimisation capabilities. Please refer to your database manual to learn how these tuning capabilities may affect your data import performance. Actual measurements on your own data and hardware are more reliable than any rule of thumb. Do not optimise prematurely, or based on assumptions: always measure whether your optimisation has the desired effect!

The commit size

Committing a transaction can be a costly operation if done too often, or not often enough. If there are too many commits, this can lead to a lot of logging overhead on the server. If too many changes are left uncommitted for too long, there may be too much locking in 2PL transaction models, or log contention in MVCC transaction models. An empirically discovered, optimal commit size, committing e.g. every 1000 rows (or 10000, or 100; please measure what works best for you), may produce the best results.
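The following is a minimal measurement sketch, assuming a hypothetical importBooks() helper that wraps the Loader call shown below with a given commit size (the candidate values are arbitrary; substitute your own):

// Time the same import with a few candidate commit sizes.
// importBooks() is a hypothetical helper, not part of the jOOQ API.
for (int commitSize : new int[] { 100, 1000, 10000 }) {
    long start = System.nanoTime();
    importBooks(commitSize);
    long millis = (System.nanoTime() - start) / 1_000_000;
    System.out.println("commitAfter(" + commitSize + "): " + millis + " ms");
}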

There are 4 possible, mutually exclusive configurations for specifying the commit size:

create.loadInto(BOOK)

      // Specify exactly one of the following, mutually exclusive commit modes:

      // Commit all statements (batch, bulk, or not) in a single large transaction.
      .commitAll()

      // Put up to 32 statements (batch, bulk, or not) in a transaction.
      .commitAfter(32)

      // Commit each statement (batch, bulk, or not) in its own transaction, just like commitAfter(1).
      .commitEach()

      // Do not commit any statement; leave committing to client code.
      .commitNone()

      .loadCSV(inputstream)
      .fields(BOOK.ID, BOOK.AUTHOR_ID, BOOK.TITLE)
      .execute();
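
For example, a minimal sketch applying a single commit mode and inspecting the result (the commitAfter(1000) value is an arbitrary starting point, and BookRecord is the generated record type assumed for the BOOK table):

// Import with a single, measured commit size, then inspect the outcome.
Loader<BookRecord> loader = create
      .loadInto(BOOK)
      .commitAfter(1000)
      .loadCSV(inputstream)
      .fields(BOOK.ID, BOOK.AUTHOR_ID, BOOK.TITLE)
      .execute();

System.out.println("Rows stored : " + loader.stored());
System.out.println("Load errors : " + loader.errors().size());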
