
Backing up WordPress to S3

This blog is hosted on a virtual machine at Linode, who provide a backup facility of their own. I’m using that, but I think it’s worthwhile to push a snapshot somewhere else at least once a week for additional redundancy.

I already have a personal account at Amazon Web Services, so each week, I’m sending a database dump and a tarball of the WordPress filesystem to an S3 bucket.

The high level steps:

  1. Create an S3 bucket, taking care not to allow public access.
  2. Configure the bucket’s Lifecycle Policy to expire old backups after 95 days.
  3. Create an Amazon IAM user and grant API access to the S3 bucket, but not to any other resources on the AWS account.
  4. Install s3cmd on the Linode box where WordPress is hosted. It’s a command line interface for S3, written in Python.
  5. Deploy a shell script to create the filesystem tarball and dump the MySQL database before pushing both files to S3. Guy Rutenberg published a simple but effective WordPress backup script in 2008. I added a new command at the end to clean up the local copies of the backup files once they’ve been pushed successfully to S3.
  6. Download phusion-server-tools to the Linode box. These include a script called silence-unless-failed. Used as a wrapper around cron jobs, it suppresses output unless the wrapped command exits with an error code. Combined with cron’s own MAILTO setting, this generates an email alert when something goes wrong, and only then.
  7. Create the cron job. I think weekly is enough to balance risk, storage cost and my blogging frequency. I’m relying on Linode’s backups in the first instance, so the S3 backups will only become important if something really bad happens.
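For step 3, the IAM user only needs API access to the backup bucket and nothing else. A minimal policy along these lines would do the job (the bucket name here is a placeholder; substitute your own):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListBackupBucket",
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::my-wordpress-backups"
    },
    {
      "Sid": "ReadWriteBackupObjects",
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject"],
      "Resource": "arn:aws:s3:::my-wordpress-backups/*"
    }
  ]
}
```

Scoping the `Resource` entries to the one bucket means that even if the key pair on the Linode box leaks, nothing else on the AWS account is exposed.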
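For step 4, on a Debian-based box like a typical Linode image, installing and configuring s3cmd is a package install plus an interactive setup run (a sketch; the package name is the same on Debian and Ubuntu):

```shell
# Install s3cmd from the distribution's repositories (Debian/Ubuntu)
sudo apt-get install s3cmd

# Interactive setup: prompts for the IAM user's access key and secret key,
# then writes the configuration to ~/.s3cfg
s3cmd --configure
```

The keys it asks for are the ones belonging to the locked-down IAM user from step 3, not the AWS account’s root credentials.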

Here’s Guy’s shell script, with the new cleanup command at the end:

#!/bin/bash

# (C) 2008 Guy Rutenberg
# This is a script that creates backups of my blog.

DB_NAME=database_name
DB_USER=database_user
DB_PASS=****
DB_HOST=db_host

#no trailing slash
BLOG_DIR=/path/to/blog
BACKUP_DIR=/path/to/backups
S3BUCKET=s3://bucket-name/

# end of configuration - you probably don't need to touch anything below
DUMP_NAME=${DB_NAME}-$(date +%Y%m%d).sql.bz2

echo -n "Dumping database… "
mysqldump --user=${DB_USER} --password=${DB_PASS} --host=${DB_HOST} ${DB_NAME} \
| bzip2 -c > ${BACKUP_DIR}/${DUMP_NAME}
if [ "$?" -ne "0" ]; then
    echo "failed!"
    exit 1
fi
echo "done"

TAR_NAME=${BLOG_DIR##*/}-$(date +%Y%m%d).tar.bz2
echo -n "Creating tarball… "
tar -cjf ${BACKUP_DIR}/${TAR_NAME} ${BLOG_DIR}
if [ "$?" -ne "0" ]; then
    echo "failed!"
    exit 1
fi
echo "done"

echo -n "Uploading SQL dump to Amazon S3… "
s3cmd put ${BACKUP_DIR}/${DUMP_NAME} ${S3BUCKET}${DUMP_NAME}
if [ "$?" -ne "0" ]; then
    echo "failed!"
    exit 1
fi
echo "done"

echo -n "Uploading tarball to Amazon S3… "
s3cmd put ${BACKUP_DIR}/${TAR_NAME} ${S3BUCKET}${TAR_NAME}
if [ "$?" -ne "0" ]; then
    echo "failed!"
    exit 1
fi
echo "done"

# additional command to delete local copies
echo -n "Deleting local copies of SQL dump and tarball… "
rm ${BACKUP_DIR}/${TAR_NAME} ${BACKUP_DIR}/${DUMP_NAME}
if [ "$?" -ne "0" ]; then
    echo "failed!"
    exit 1
fi
echo "done"
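The weekly job from steps 6 and 7 might then look like this in the crontab (the paths, schedule and address are placeholders; silence-unless-failed swallows output on success, so MAILTO only triggers a message when the backup fails):

```
MAILTO=admin@example.com
# Weekly, Sunday at 03:30: run the backup, staying quiet unless it fails
30 3 * * 0 /opt/phusion-server-tools/silence-unless-failed /usr/local/bin/wordpress-backup.sh
```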
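The TAR_NAME line relies on shell parameter expansion: `${BLOG_DIR##*/}` deletes the longest prefix matching `*/`, leaving just the final path component of the blog directory. A quick illustration (the path and date are placeholders, not values from the script):

```shell
#!/bin/sh
BLOG_DIR=/var/www/blog   # example path only

# ${BLOG_DIR##*/} strips everything up to and including the last slash
BASE=${BLOG_DIR##*/}
echo "$BASE"                      # prints: blog
echo "${BASE}-20240101.tar.bz2"   # prints: blog-20240101.tar.bz2
```

With a single `#` instead of `##` the expansion would remove only the shortest matching prefix, so `##*/` is what gives a clean basename for the tarball.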
