Drupal backup - My simple strategy to backup database and files

You might ask why I don't just use the Backup and Migrate module to back up my Drupal database and files. My response: why use a module with around 200 open issues and who knows how many lines of code when I can do the same thing with a simple shell script in fewer than 15 lines?

Judging by how many Drupal websites use the Backup and Migrate module, it might be a good module. I don't know; I have never used it and never will. I hate adding too many contributed modules to my projects, especially ones with a long list of unresolved issues.

And Backup and Migrate is a Drupal module with a lot of issues. Also, as far as I can tell, the latest version doesn't even support uploading to Amazon S3, and that is a feature I really need.

So, let's see how I back up databases and files for all my Drupal 8 and 9 projects.

Database backup shell script

#!/usr/bin/env bash

# Abort if the backup name or the S3 path argument is missing.
if [ -z "$1" ]; then
  echo "Please provide backup name."
  exit 1
fi

if [ -z "$2" ]; then
  echo "Please provide S3 path."
  exit 1
fi

# Timestamp suffix keeps the dumps sorted chronologically.
suffix=$(date '+%Y-%m-%d_%H-%M-%S')

# Dump and gzip the database, upload it to S3, then remove the local copy.
bin/drush sql-dump --gzip --structure-tables-key=common > "$1-$suffix.sql.gz"
aws s3 cp "$1-$suffix.sql.gz" "s3://$2"
rm "$1-$suffix.sql.gz"

The script depends on Drush and the AWS CLI, so if you want to use it, make sure both tools are installed. In my case, Drush is installed on a per-project basis using Composer, which is why I use the bin/drush path. Depending on your server setup, you might have to specify the full Drush path, e.g. /var/www/example.com/bin/drush.
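
If your project doesn't have Drush yet, adding it with Composer is a single command. This is a sketch for a Composer-managed Drupal project; by default the launcher lands in vendor/bin/, and it ends up in bin/ (as used above) only if your composer.json sets that as the bin-dir:

# Install Drush as a per-project dependency.
composer require drush/drush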

The AWS CLI provides the aws s3 command, which lets you upload files to S3 from your terminal.
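
If you haven't set up the AWS CLI before, a one-time interactive configuration stores credentials that need write access to your backup bucket:

# Prompts for the access key, secret key, and default region,
# and stores them under ~/.aws/.
aws configure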

The database dump is gzipped to keep the file small, and the filename is suffixed with the date and time (year-month-day, hour-minute-second), which makes sorting easier.
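
For instance, with this naming scheme a dump for example.com created on 15 March 2021 at 04:30 would be uploaded under a name like the following (hypothetical timestamp):

example.com-2021-03-15_04-30-00.sql.gz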

Files backup shell script

#!/usr/bin/env bash

# Abort if the backup name or the S3 path argument is missing.
if [ -z "$1" ]; then
  echo "Please provide backup name."
  exit 1
fi

if [ -z "$2" ]; then
  echo "Please provide S3 path."
  exit 1
fi

# Timestamp suffix keeps the archives sorted chronologically.
suffix=$(date '+%Y-%m-%d_%H-%M-%S')

# Archive the public files directory, skipping directories Drupal can regenerate,
# upload the archive to S3, then remove the local copy.
tar -zcf "$1-$suffix.tar.gz" --exclude='css' --exclude='js' --exclude='php' --exclude='styles' -C "web/sites/default/files/" .
aws s3 cp "$1-$suffix.tar.gz" "s3://$2"
rm "$1-$suffix.tar.gz"

This script doesn't depend on Drush, but it does depend on the tar command-line tool. I don't keep anything in my private files directory, so only public files are backed up. In my case the path to the Drupal files directory is web/sites/default/files, but it might be different in yours.
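
If you're not sure where your public files directory lives, Drush can tell you; assuming Drush 9 or later:

# Prints the public files path, e.g. sites/default/files
bin/drush status --field=files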

I excluded the css, js, php, and styles directories because Drupal can always regenerate them.

Usage examples:

bash backup-database.sh example.com my-s3-bucket/databases/
bash backup-files.sh example.com my-s3-bucket/files/

Both shell scripts are placed in the root directory of the project.


You can run the scripts manually or, better yet, configure a cron job on your web server. That way the backups are automated and you don't have to remember to run them.
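
For example, crontab entries along these lines would run both backups nightly; the project path, schedule, and bucket names here are placeholders, so adjust them to your setup:

# Run from the project root so the relative bin/drush path resolves.
0 3 * * * cd /var/www/example.com && bash backup-database.sh example.com my-s3-bucket/databases/
30 3 * * * cd /var/www/example.com && bash backup-files.sh example.com my-s3-bucket/files/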

To import a gzipped database dump with Drush, use the following command:

gunzip < database-dump.sql.gz | drush sql-cli
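
Restoring the files backup is just the reverse of creating it: download the archive and unpack it into the files directory. A minimal sketch, with a hypothetical archive name:

# Fetch the archive from S3 and extract it into the public files directory.
aws s3 cp s3://my-s3-bucket/files/example.com-2021-03-15_04-30-00.tar.gz .
tar -zxf example.com-2021-03-15_04-30-00.tar.gz -C web/sites/default/files/

# Drupal recreates the excluded css, js, php, and styles directories
# the next time caches are rebuilt.
bin/drush cache:rebuild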

From time to time, make sure the scripts are still working, and try restoring your website from a backup to confirm everything is fine. Without actually testing the backup thoroughly, you are making an unwarranted assumption that backup and recovery will work when you need them.

About the Author

Goran Nikolovski is an experienced web and AI developer skilled in Drupal, React, and React Native. He founded this website and enjoys sharing his knowledge.