Posts Tagged ‘backup’

Rename a MySQL database schema safely

Wednesday, April 18th, 2012

This example uses a backup to rename a database in a MySQL instance.

It takes only 5 easy steps:

First copy the schema (db1) with mysqldump:

% mysqldump db1 > db1.sql
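Note that stored routines and events are not included in the dump by default (triggers are); if db1 uses them, add the corresponding mysqldump flags:

% mysqldump --routines --events db1 > db1.sql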

Create a new schema (db2):

% mysqladmin create db2

Insert the backup into the new schema:

% mysql db2 < db1.sql
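As a side note, for smaller schemas the dump and restore steps can also be combined into a single pipe; this is just a sketch assuming the same local instance and default credentials for both commands:

% mysqldump db1 | mysql db2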

Drop the old schema:

% mysqladmin drop db1

Now update the user rights in the grant tables of the mysql system schema and reload the privileges:

mysql> USE mysql;
mysql> UPDATE db SET Db = 'db2' WHERE Db = 'db1';
mysql> UPDATE tables_priv SET Db = 'db2' WHERE Db = 'db1';
mysql> UPDATE columns_priv SET Db = 'db2' WHERE Db = 'db1';
mysql> UPDATE host SET Db = 'db2' WHERE Db = 'db1';
mysql> FLUSH PRIVILEGES;
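
To verify that the privileges moved along with the rename, check the grants of an affected account; the user and host below are placeholders:

mysql> SHOW GRANTS FOR 'someuser'@'localhost';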

Done! That was easy, wasn’t it?

Back up large databases with mysqldump by splitting up the backup files

Sunday, May 30th, 2010

One of the primary responsibilities of MySQL professionals is to establish and run proper backup and recovery plans.

The most commonly used method to back up a MySQL database is the mysqldump utility. It creates a backup file for one or more MySQL databases consisting of the DDL/DML statements needed to recreate the databases with their data. To restore a database, the backup file is simply read back into the mysql command-line client as an input file.
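
For reference, a plain full dump and restore of a single database looks like this; the database name and credentials are placeholders:

$ mysqldump -u root -p MyDb > MyDb.sql
$ mysql -u root -p MyDb < MyDb.sql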

This is fine for smaller databases, but for large databases the resulting SQL file will be very, very large. If the file is too large to handle, it becomes impossible to open it in an editor and fix problems in the script when you need to, which makes recovering your database very difficult. For example, my trusted vim editor would not load a 2GB file at all, and even slightly smaller files are too slow to work with.

One way to solve this is to split the restore script into multiple parts, and the most logical way is to split it by table name.
You will end up with a large number of SQL scripts that you need to restore.
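
Splitting by table is straightforward because mysqldump accepts one or more table names after the database name; the names below are just placeholders:

$ mysqldump MyDb MyTable > MyTable.sql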

Since this is a cumbersome job to do, we will write a script that does the following:

  • Back up multiple databases. For each database:
    • Backup the DDL of all tables in a separate sql file.
    • Make a backup per table name.
    • Write a script that we can run to restore the DDL and all tables in one command.
    • Compress the generated scripts.

As an added benefit, you can edit the restore script to comment out the DDL step, exclude some tables from the restore, and so on, without having to open one very large file.

I will not cover sending the compressed file by FTP, sending emails, or scheduling cron jobs. The script below is stripped of all but the essential code needed to back up the databases in small parts.

#!/bin/bash
### change the values below where needed.....
DBNAMES="MyDb1 MyDb2 MyDb3"
HOST="--host=localhost"
USER="--user=root"
PASSWORD="--password=MyPassword"
BACKUP_DIR="/MyBackupDirectory"

#### you can change these values but they are optional....
OPTIONS="--default-character-set=latin1 --complete-insert --no-create-info --compact -q"
RESTORESCRIPT="$BACKUP_DIR/__restoreData.sql"
DATE=`/bin/date '+%y%m%d_%H%M%S'`

#### make no changes after this....
#### start script ####
echo removing old temporary files if they exist...
rm -f ${BACKUP_DIR}/*.sql > /dev/null 2>&1
rm -f ${BACKUP_DIR}/*.tar > /dev/null 2>&1
cd ${BACKUP_DIR}

for DB in $DBNAMES
do
    echo "=========================================="
    echo ${DB}
    echo "=========================================="
    echo 'SET FOREIGN_KEY_CHECKS=0;' > $RESTORESCRIPT

    mysqldump --no-data $HOST $USER $PASSWORD $DB > ${BACKUP_DIR}/__createTables.sql
    echo 'source __createTables.sql;' >> $RESTORESCRIPT

    for TABLE in `mysql $HOST $USER $PASSWORD $DB -e 'show tables' | egrep -v 'Tables_in_' `; do
        TABLENAME=$(echo $TABLE|awk '{ printf "%s", $0 }')
        FILENAME="${TABLENAME}.sql"
        echo Dumping $TABLENAME
        echo 'source' $FILENAME';' >> $RESTORESCRIPT
        mysqldump $OPTIONS $HOST $USER $PASSWORD $DB $TABLENAME > ${BACKUP_DIR}/${FILENAME}
    done

    echo 'SET FOREIGN_KEY_CHECKS=1;' >> $RESTORESCRIPT

    echo making tar...
    tar -cf ${DB}_${DATE}.tar *.sql  > /dev/null 2>&1

    echo compressing...
    gzip -9 ${DB}_${DATE}.tar > /dev/null 2>&1

    echo removing temporary files...
    rm -f ${BACKUP_DIR}/*.sql > /dev/null 2>&1
    rm -f ${BACKUP_DIR}/*.tar > /dev/null 2>&1

    echo "done with " $DB
done

echo "=========================================="
echo "            done with all databases!      "
echo "=========================================="

Before you run this script:

  • Make sure the backup directory exists and that you have permission to write and delete files in it; otherwise the script will fail. A quick check is shown after this list.
  • Have the list of database names and the credentials needed to make the backup; usually this is the root user.
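
A quick check for the first point, using the backup directory from the script above (adjust it to your own BACKUP_DIR):

$ mkdir -p /MyBackupDirectory
$ [ -w /MyBackupDirectory ] && echo writable || echo not writable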

You can uncompress the files with:

$ gunzip MyDatabaseName.tar.gz
$ tar -xf MyDatabaseName.tar
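
With GNU tar the two steps can also be combined:

$ tar -xzf MyDatabaseName.tar.gz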

And you can restore the database with:

$ mysql -uroot -pYourPassword -h YourHost YourDatabaseName < __restoreData.sql
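
If you want to skip some tables during the restore, the generated __restoreData.sql is just a list of source statements, so you can comment out the lines you do not need; the table names below are made up for illustration:

SET FOREIGN_KEY_CHECKS=0;
source __createTables.sql;
source customers.sql;
-- source order_history.sql;
source products.sql;
SET FOREIGN_KEY_CHECKS=1;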

And finally, the most important thing in the whole process: test your backups now!
If you make daily backups but do not take the time to test your recovery plan regularly, you will end up with a big problem when you have to restore under time pressure.