Backup large databases with mysqldump by splitting up the backup files

One of the primary responsibilities of a MySQL professional is to establish and run a proper backup and recovery plan.

The most commonly used method to back up a MySQL database is the mysqldump utility. mysqldump creates a backup file for one or more MySQL databases that consists of the DDL/DML statements needed to recreate the databases with their data. To restore a database, the backup file is simply read back into the mysql client as an input file.
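For example, a plain single-file backup and restore looks like this (the credentials and database name are placeholders):

$ mysqldump -uroot -pMyPassword -h localhost MyDb1 > MyDb1.sql
$ mysql -uroot -pMyPassword -h localhost MyDb1 < MyDb1.sql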

This is fine for smaller databases, but for a large database the resulting SQL file will be very, very large. If the file is too big to handle, it will be impossible to open it in an editor and fix problems in the script when you need to, which makes recovering your database very difficult. For example, my trusted vim editor would not load a 2GB file at all, and handling slightly smaller files is too slow to do any real work within the file.

One way to solve this is to split the restore script into multiple parts. The most logical way is to split it up by table name.
You will end up with a large number of SQL scripts that you need to restore.
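For instance, naming a single table after the database dumps just that table (the Customers table here is made up):

$ mysqldump -uroot -pMyPassword MyDb1 Customers > Customers.sql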

Since this is a cumbersome job to do by hand, we will write a script that does the following:

  • Back up multiple databases. For each database:
    • Back up the DDL of all tables in a separate SQL file.
    • Make a backup per table name.
    • Write a restore script that restores the DDL and all tables in one command.
    • Compress the generated scripts.

As added value, you can edit the restore script to comment out the generation of the DDL, exclude some tables from restoration, and so on, without opening one very large file.
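For example, a generated __restoreData.sql will look roughly like this (the table names are illustrative); commenting a source line out with -- skips that table on restore:

SET FOREIGN_KEY_CHECKS=0;
source __createTables.sql;
source Customers.sql;
-- source AuditLog.sql;
SET FOREIGN_KEY_CHECKS=1;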

I will not cover sending the compressed file by FTP, sending emails, or scheduling cron jobs. The script below is stripped of all but the essential code needed to back up the databases in small parts.

#!/bin/bash
### change the values below where needed.....
DBNAMES="MyDb1 MyDb2 MyDb3"
HOST="--host=localhost"
USER="--user=root"
PASSWORD="--password=MyPassword"
BACKUP_DIR="/MyBackupDirectory"

#### you can change these values but they are optional....
OPTIONS="--default-character-set=latin1 --complete-insert --no-create-info --compact -q"
RESTORESCRIPT="$BACKUP_DIR/__restoreData.sql"
DATE=`/bin/date '+%y%m%d_%H%M%S'`

#### make no changes after this....
#### start script ####
echo removing old temporary files if they exist...
rm -f ${BACKUP_DIR}/*.sql > /dev/null 2>&1
rm -f ${BACKUP_DIR}/*.tar > /dev/null 2>&1
cd ${BACKUP_DIR}

for DB in $DBNAMES
do
    echo "=========================================="
    echo ${DB}
    echo "=========================================="
    echo 'SET FOREIGN_KEY_CHECKS=0;' > $RESTORESCRIPT

    mysqldump --no-data $HOST $USER $PASSWORD $DB > ${BACKUP_DIR}/__createTables.sql
    echo 'source __createTables.sql;' >> $RESTORESCRIPT

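    # dump each table to its own .sql file and add a matching 'source' line to the restore script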
    for TABLE in `mysql $HOST $USER $PASSWORD $DB -e 'show tables' | egrep -v 'Tables_in_' `; do
        TABLENAME=$(echo $TABLE|awk '{ printf "%s", $0 }')
        FILENAME="${TABLENAME}.sql"
        echo Dumping $TABLENAME
        echo 'source' $FILENAME';' >> $RESTORESCRIPT
        mysqldump $OPTIONS $HOST $USER $PASSWORD $DB $TABLENAME > ${BACKUP_DIR}/${FILENAME}
    done

    echo 'SET FOREIGN_KEY_CHECKS=1;' >> $RESTORESCRIPT

    echo making tar...
    tar -cf ${DB}_${DATE}.tar *.sql  > /dev/null 2>&1

    echo compressing...
    gzip -9 ${DB}_${DATE}.tar > /dev/null 2>&1

    echo removing temporary files...
    rm -f ${BACKUP_DIR}/*.sql > /dev/null 2>&1
    rm -f ${BACKUP_DIR}/*.tar > /dev/null 2>&1

    echo "done with " $DB
done

echo "=========================================="
echo "            done with all database!       "
echo "=========================================="

Before you run this script:

  • Make sure the backup directory exists and that you have permission to write and delete files in it; otherwise the script will hang (a minimal pre-flight check is sketched after this list).
  • Have a list of the database names and credentials that can make the backup; usually this is the root user.
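A minimal pre-flight check, a sketch you could paste into the script right after BACKUP_DIR is set, might look like this:

# abort early if the backup directory is missing or not writable
if [ ! -d "$BACKUP_DIR" ] || [ ! -w "$BACKUP_DIR" ]; then
    echo "backup directory $BACKUP_DIR is missing or not writable" >&2
    exit 1
fi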

You can uncompress the files with:

$ gunzip MyDatabaseName.tar.gz
$ tar -xf MyDatabaseName.tar

And you can restore the database with:

$ mysql -uroot -pYourPassword -h YourHost YourDatabaseName < __restoreData.sql
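One caveat: __restoreData.sql loads the per-table files with relative source statements, so start the mysql client from the directory where you extracted the archive, for example:

$ cd /MyBackupDirectory
$ mysql -uroot -pYourPassword -h YourHost YourDatabaseName < __restoreData.sql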

And finally, the most important thing in the whole process: test your backups now!
If you make daily backups but do not take the time to test your recovery plan regularly, you will end up with a big problem when you have to restore under time pressure.
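One minimal way to rehearse a restore is to load the backup into a throwaway database and spot-check a table; the MyDb1_test and Customers names below are illustrative:

$ mysql -uroot -pMyPassword -e 'CREATE DATABASE MyDb1_test'
$ mysql -uroot -pMyPassword MyDb1_test < __restoreData.sql
$ mysql -uroot -pMyPassword -e 'SELECT COUNT(*) FROM MyDb1_test.Customers'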


21 Responses to “Backup large databases with mysqldump by splitting up the backup files”

  1. Hi Jason,

    You can always run the script in Cygwin or convert it to a PowerShell script.

    Cheers, Ronald

  2. jovenjoao says:

    Thanks, very useful.

  3. Pablo says:

    That’s a great job.

    I will modify it to include Stored Procedures and Triggers also.

  4. jason says:

    Can this script be used in a Windows environment? Windows 2008, to be exact. What changes need to be made, if any? Love the idea, hoping it might help with my backup requirement on a 300 GB DB! Thanks.

  5. Super helpful, thanks!
    Copied this into a gist for posterity: https://gist.github.com/dergachev/6827356

    Also consider checking for ‘gzip --rsyncable’ support and using that.

  6. Adam says:

    Great script. However, instead of echo “done with all databases” I’d put echo “all your base are belong to us” :D

  7. Emannuel says:

    Guys, the restore is taking too long.
    I think it has something to do with “closing tables”.
    90% of the time that I have monitored there is a table in this state. What can be done?

    Thanks!

  8. Emannuel says:

    Thanks!
    You just saved me… Your script solved the slow restore caused by the key repairs… Many, many thanks…

  9. Files backup says:

    Thanks for sharing this post, now I can easily restore my data.

  10. Espoire says:

    Thanks! This was unbelievably useful today. I might skip the compression though; it took thousands of times longer than the rest of the script, and storage space is far less of a premium than computation time for us.

  11. jordiv says:

    SET FOREIGN_KEY_CHECKS=0; <- the script already adds this
    SET UNIQUE_CHECKS=0;
    SET AUTOCOMMIT=0;

    You could add these; they are helpful before importing the database back. And then re-enable them again after the import:

    SET FOREIGN_KEY_CHECKS=1; <- the script already adds this
    SET UNIQUE_CHECKS=1;
    SET AUTOCOMMIT=1;

  12. engine9 says:

    Great script,

    this
    gunzip MyDatabaseName.tar.gz
    tar -xf MyDatabaseName.tar

    could be done in one command:
    tar -zxvf MyDatabaseName.tar.gz

  13. Pat J says:

    My __createTables.sql file didn’t get created properly; apparently my version of mysqldump was having issues with locking the tables. I’ve successfully created a version of __createTables.sql by adding ‘--lock-tables=false’ to the arguments passed to the ‘--no-data’ mysqldump. I suspect this isn’t the Right Way to do it, but it seems to be working for me. Just thought I’d pass this little hack along in case it helps anyone.

    Warning: I am not a MySQL expert, and I provide no guarantee that this will work for you. Also, if there’s a better way to do what I’m trying to do, I’d love to hear it.

  14. Syam.s says:

    It’s good. Thanks for the info.

  15. System backups aren’t really a hassle. A half-decent backup plan will work for months!

  16. recepti says:

    Thank you for your help!

  17. kabelky says:

    Hi thank you for the script. Good work, this really did help me.

  18. Shawnda Samiec says:

    That is a very good idea to consider for backing up. It’s really something that’s usually on our minds, as we deal with tons of files on a daily basis.

  19. Ben says:

    Thanks so much, this is exactly what I was looking for and is easy to setup :)

  20. Howard Banks says:

    Thanks for this script, this is a lifesaver for me! I have a huge backup file with a lot of errors when I try to restore them. I’m now able to restore my backup table by table.

    Many thanks, Howard.

  21. Hockenbury says:

    Great info, thanks for the useful article. I am waiting for more.
