The primary responsibility of MySQL professionals is to establish and run proper backup and recovery plans.
The most common way to back up a MySQL database is the mysqldump utility. It creates a backup file, for one or more databases, consisting of the DDL/DML statements needed to recreate the databases with their data. To restore a database, the backup file is simply fed back to the mysql command-line client as input.
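In concrete terms the round trip looks like this; a minimal sketch using the same placeholder credentials and database name as the script below, written as a dry run (each command is echoed so nothing touches a server; drop the echo and quotes to actually execute it):

```shell
#!/bin/sh
# Placeholder credentials and database name, matching the article's examples.
DB="MyDb1"
AUTH="--user=root --password=MyPassword"

# Dump schema and data into one file, then restore by replaying that file.
# Echoed as a dry run; remove the echo and the quotes to execute for real.
echo "mysqldump $AUTH $DB > $DB.sql"
echo "mysql $AUTH $DB < $DB.sql"
```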
This is fine for smaller databases, but for large databases the resulting SQL file becomes very large. Once the file is too big to handle, you can no longer open it in an editor to fix problems in the script, which makes the recovery of your database very difficult. For example, my trusted vim editor would not load a 2 GB file at all, and even slightly smaller files are too slow to do any work in.
One way to solve this is to split up the restore script into multiple parts. The most logical way is to split it up by table name.
You will end up with a large number of SQL scripts that you need to restore.
Since this is a cumbersome job to do, we will write a script that does the following:
- Back up multiple databases. For each database:
- Backup the DDL of all tables in a separate sql file.
- Make a backup per table name.
- Write a script that we can run to restore the DDL and all tables in one command.
- Compress the generated scripts.
As an added benefit, you can edit the restore script, for instance to skip the DDL generation or to exclude some tables from restoration, without opening one very large file.
I will not cover sending the compressed file by FTP, sending emails, or scheduling cron jobs. The script below is stripped of everything but the essential code for backing up the databases in small parts.
#!/bin/bash

### change the values below where needed.....
DBNAMES="MyDb1 MyDb2 MyDb3"
HOST="--host=localhost"
USER="--user=root"
PASSWORD="--password=MyPassword"
BACKUP_DIR="/MyBackupDirectory"

#### you can change these values but they are optional....
OPTIONS="--default-character-set=latin1 --complete-insert --no-create-info --compact -q"
RESTORESCRIPT="$BACKUP_DIR/__restoreData.sql"
DATE=`/bin/date '+%y%m%d_%H%M%S'`

#### make no changes after this....
#### start script ####
echo removing old temporary files if they exist...
rm -f ${BACKUP_DIR}/*.sql > /dev/null 2>&1
rm -f ${BACKUP_DIR}/*.tar > /dev/null 2>&1

cd ${BACKUP_DIR}

for DB in $DBNAMES
do
    echo "=========================================="
    echo ${DB}
    echo "=========================================="
    echo 'SET FOREIGN_KEY_CHECKS=0;' > $RESTORESCRIPT

    mysqldump --no-data $HOST $USER $PASSWORD $DB > ${BACKUP_DIR}/__createTables.sql
    echo 'source __createTables.sql;' >> $RESTORESCRIPT

    for TABLE in `mysql $HOST $USER $PASSWORD $DB -e 'show tables' | egrep -v 'Tables_in_'`; do
        TABLENAME=$(echo $TABLE | awk '{ printf "%s", $0 }')
        FILENAME="${TABLENAME}.sql"
        echo Dumping $TABLENAME
        echo 'source' $FILENAME';' >> $RESTORESCRIPT
        mysqldump $OPTIONS $HOST $USER $PASSWORD $DB $TABLENAME > ${BACKUP_DIR}/${FILENAME}
    done

    echo 'SET FOREIGN_KEY_CHECKS=1;' >> $RESTORESCRIPT

    echo making tar...
    tar -cf ${DB}_${DATE}.tar *.sql > /dev/null 2>&1

    echo compressing...
    gzip -9 ${DB}_${DATE}.tar > /dev/null 2>&1

    echo removing temporary files...
    rm -f ${BACKUP_DIR}/*.sql > /dev/null 2>&1
    rm -f ${BACKUP_DIR}/*.tar > /dev/null 2>&1

    echo "done with " $DB
done

echo "=========================================="
echo "  done with all databases!  "
echo "=========================================="
Before you run this script:
- Make sure the backup directory exists and that you have permission to write and delete files in it; otherwise the script will hang.
- Have a list of the database names and credentials with sufficient privileges to make the backup; usually this is the root user.
You can uncompress the files with:
$ gunzip MyDatabaseName.tar.gz
$ tar -xf MyDatabaseName.tar
And you can restore the database with:
$ mysql --user=root --password=MyPassword MyDatabaseName < __restoreData.sql
Run this from the directory containing the extracted .sql files, since the restore script loads the per-table dumps with source statements.
And finally, the most important thing in the whole process: test your backups now!
If you make daily backups but do not take the time to test your recovery plan regularly, you will end up with a big problem when you have to restore under time pressure.
Great script! I created a faster version using zip instead of tar, which was taking forever…. My version simply creates a folder for each database, exports the dumps into it, then zips the folder.
The script works great. HOWEVER, if your table name has a space in it (e.g. DB FORMAT), it will be backed up as two tables: DB and FORMAT. Both files will contain the same data, though.
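One hedged fix for this: iterate over the table list line by line with `read` instead of a whitespace-splitting `for` loop, so names containing spaces stay intact. The `list_tables` function below is a stand-in for the `mysql … 'show tables'` call in the script, with made-up table names:

```shell
#!/bin/sh
# Stand-in for: mysql $HOST $USER $PASSWORD $DB -e 'show tables' | egrep -v 'Tables_in_'
list_tables() {
  printf '%s\n' 'users' 'DB FORMAT' 'orders'
}

# IFS= read -r keeps each full line, including embedded spaces.
list_tables | while IFS= read -r TABLENAME; do
  echo "Dumping: ${TABLENAME}"
  # mysqldump $OPTIONS $HOST $USER $PASSWORD "$DB" "$TABLENAME" > "${TABLENAME}.sql"
done
```

Note the quoting around `"$TABLENAME"` in the commented-out mysqldump call: without it the original word-splitting problem would simply resurface there.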
Does anyone know how to make the backup without locking the tables?
I don't have a problem losing one or two records, but I can't lock the tables while the dump runs, and my backup is very extensive. This per-table split helped me, but some tables still take too long because of the number of records.
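If the tables are InnoDB, one hedged option for the question above is to dump with --single-transaction, which reads from a consistent snapshot instead of taking read locks; --quick streams rows rather than buffering whole tables in memory. A dry-run sketch with the article's placeholder credentials (MyISAM tables would not get a consistent snapshot this way):

```shell
#!/bin/sh
# --single-transaction: consistent snapshot, no read locks (InnoDB only).
# --quick: stream rows instead of buffering each table in memory.
# --skip-lock-tables: do not take the default LOCK TABLES.
DUMP_OPTS="--single-transaction --quick --skip-lock-tables"

# Dry run: echo the command instead of executing it.
echo "mysqldump $DUMP_OPTS --user=root --password=MyPassword MyDb1 > MyDb1.sql"
```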
If you don't want to split up the restore, and restoring a large MySQL database is taking too long, try this technique.
After connecting to the remote MySQL database…
- Generate a query to create the table schema, procedures and functions of the source DB
- Generate a query to find all indexes of all tables of the source DB, except foreign key constraints
- Generate a query to drop all the indexes found in step 2
- Generate a query to insert all data of the source DB
- Generate a query to recreate all the indexes found in step 2
- Write all queries in the above order into one .sql file, which is your new MySQL backup. Zip it using LZ4 compression.
- Now just restore the DB from this file using the normal MySQL restore utility.
More on
http://axiomnext.com/blog/how-to-restore-large-mysql-database-faster/
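Steps 3 and 5 above amount to emitting an ALTER TABLE statement for each (table, index) pair; a small sketch with hypothetical table, index, and column names (finding the real pairs would mean querying information_schema.statistics):

```shell
#!/bin/sh
# Build the drop/recreate statements for one secondary index.
# Table, index, and column names below are hypothetical examples.
make_drop()   { echo "ALTER TABLE \`$1\` DROP INDEX \`$2\`;"; }
make_create() { echo "ALTER TABLE \`$1\` ADD INDEX \`$2\` ($3);"; }

make_drop   orders idx_customer                # run before the bulk insert
make_create orders idx_customer 'customer_id'  # run after the bulk insert
```

Running it prints:

ALTER TABLE `orders` DROP INDEX `idx_customer`;
ALTER TABLE `orders` ADD INDEX `idx_customer` (customer_id);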
Great script, thanks
Hi Jason,
You can always run the script in Cygwin or convert it to a PowerShell script.
Cheers, Ronald
Thanks, very useful.
That's a great job.
I will modify it to include stored procedures and triggers as well.
Can this script be used in a Windows environment? Windows 2008, to be exact. What changes need to be made, if any? Love the idea; hoping it might help with my backup requirement on a 300 GB DB! Thanks.
Super helpful, thanks!
Copied this into a gist for posterity: https://gist.github.com/dergachev/6827356
Also consider checking for ‘gzip –rsyncable’ support and using that.
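A sketch of that check: not every gzip build supports --rsyncable, and probing the help output for the flag (an assumption; some builds may document it differently) lets the script fall back gracefully:

```shell
#!/bin/sh
# Use --rsyncable when this gzip advertises it, otherwise plain -9.
if gzip --help 2>&1 | grep -q -- '--rsyncable'; then
  GZIP_OPTS="-9 --rsyncable"
else
  GZIP_OPTS="-9"
fi
echo "would run: gzip $GZIP_OPTS MyDb1_backup.tar"
```

With --rsyncable, small changes in the dump produce mostly unchanged compressed output, so rsync-based offsite copies transfer far less data.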
Great script. However, instead of echo "done with all database" I'd put echo "all your base are belong to us".
Guys, the restore is taking too long.
I think it has something to do with “closing tables”.
90% of the time that I have monitored there is a table in this state. What can be done?
Thanks!
Thanks!
You just saved me… Your script solved the slow restore caused by the key repairs… Many, many thanks…
Thanks for sharing this post; now I can easily restore my data.
Thanks! This was unbelievably useful today. I might skip the compression though; it took thousands of times longer than the rest of the script, and storage space is far less of a premium than computation time for us.
SET FOREIGN_KEY_CHECKS=0; <-- already added by the script
SET UNIQUE_CHECKS=0;
SET AUTOCOMMIT=0;
You could add these; they are helpful before importing the database back. Then re-enable them again after the import:
SET FOREIGN_KEY_CHECKS=1; <-- already added by the script
SET UNIQUE_CHECKS=1;
SET AUTOCOMMIT=1;
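Folded into the article's script, that suggestion could look like this; a sketch where RESTORESCRIPT mirrors the script's variable and the per-table source lines would be appended in between (the final COMMIT is needed because autocommit was switched off):

```shell
#!/bin/sh
# Sketch: wrap the generated restore script with the speed-up SETs.
RESTORESCRIPT="__restoreData.sql"

{
  echo 'SET FOREIGN_KEY_CHECKS=0;'
  echo 'SET UNIQUE_CHECKS=0;'
  echo 'SET AUTOCOMMIT=0;'
} > "$RESTORESCRIPT"

# ... the per-table "source <table>.sql;" lines are appended here ...

{
  echo 'SET FOREIGN_KEY_CHECKS=1;'
  echo 'SET UNIQUE_CHECKS=1;'
  echo 'SET AUTOCOMMIT=1;'
  echo 'COMMIT;'   # commit the work done while autocommit was off
} >> "$RESTORESCRIPT"
```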
Great script,
this
gunzip MyDatabaseName.tar.gz
tar -xf MyDatabaseName.tar
could be done in one command:
tar -zxvf MyDatabaseName.tar.gz
My __createTables.sql file didn't get created properly; apparently my version of mysqldump was having issues with locking the tables. I've successfully created a version of __createTables.sql by adding '--lock-tables=false' to the arguments passed to the '--no-data' mysqldump. I suspect this isn't the Right Way to do it, but it seems to be working for me. Just thought I'd pass this little hack along in case it helps anyone.
Warning: I am not a MySQL expert, and I provide no guarantee that this will work for you. Also, if there’s a better way to do what I’m trying to do, I’d love to hear it.
It's good. Thanks for the info.
System backups aren’t really a hassle. A half-decent backup plan will work for months!
Thank you for your help!
Hi thank you for the script. Good work, this really did help me.
That is a very good idea to consider for backing up. It is really something that's usually on our minds, as we deal with tons of files on a daily basis.
Thanks so much, this is exactly what I was looking for, and it is easy to set up.
Thanks for this script, this is a lifesaver for me! I have a huge backup file with a lot of errors when I try to restore them. I’m now able to restore my backup table by table.
Many thanks, Howard.
Great info, thanks for the useful article. I am waiting for more.