7.5.4. Bulk Data Loading for InnoDB Tables

These performance tips supplement the general guidelines for fast inserts in Section 7.2.2.1, “Speed of INSERT Statements”.

  • When importing data into InnoDB, turn off autocommit mode, because it performs a log flush to disk for every insert. To disable autocommit during your import operation, surround it with SET autocommit and COMMIT statements:

    SET autocommit=0;
    ... SQL import statements ...
    COMMIT;
    

    The mysqldump option --opt creates dump files that are fast to import into an InnoDB table, even without wrapping them with the SET autocommit and COMMIT statements.

  • If you have UNIQUE constraints on secondary keys, you can speed up table imports by temporarily turning off the uniqueness checks during the import session:

    SET unique_checks=0;
    ... SQL import statements ...
    SET unique_checks=1;
    

    For big tables, this saves a lot of disk I/O because InnoDB can use its insert buffer to write secondary index records in a batch. Be certain that the data contains no duplicate keys.

  • If you have FOREIGN KEY constraints in your tables, you can speed up table imports by turning off the foreign key checks for the duration of the import session:

    SET foreign_key_checks=0;
    ... SQL import statements ...
    SET foreign_key_checks=1;
    

    For big tables, this can save a lot of disk I/O.

  • Use the multiple-row INSERT syntax to reduce communication overhead between the client and the server if you need to insert many rows:

    INSERT INTO yourtable VALUES (1,2), (5,5), ...;
    

    This tip is valid for inserts into any table, not just InnoDB tables. A sketch that combines this with the session-level settings above into one import script appears after this list.

  • When doing bulk inserts into tables with auto-increment columns, set innodb_autoinc_lock_mode to 2 (interleaved lock mode) instead of the default value 1 (consecutive lock mode). See Section 13.6.5.3.2, “Configurable InnoDB Auto-Increment Locking”, for details. A minimal configuration sketch follows this list.
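
    innodb_autoinc_lock_mode cannot be changed at runtime, so it must be set before the server starts, for example in the [mysqld] section of the option file. The following is a minimal sketch; the option-file location depends on your installation:

        [mysqld]
        innodb_autoinc_lock_mode=2

    After restarting the server, you can verify the value from a client session:

        SHOW VARIABLES LIKE 'innodb_autoinc_lock_mode';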

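Putting the session-level tips together, a bulk import might be structured as in the sketch below. The table name yourtable and the row values are placeholders for illustration only; adjust them to match your own schema and data:

    SET autocommit=0;
    SET unique_checks=0;
    SET foreign_key_checks=0;

    -- multiple-row INSERT statements keep client/server round trips low
    INSERT INTO yourtable VALUES (1,2), (5,5), (8,3);
    INSERT INTO yourtable VALUES (9,1), (12,7), (15,4);

    COMMIT;

    SET unique_checks=1;
    SET foreign_key_checks=1;

These SET statements affect only the current session, so other connections are unaffected while the import runs.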