
My Backup Script Deleted My Files: A Bash Horror Story
r5yn1r4143
12h ago
You know those moments? The ones where your stomach drops faster than a poorly optimized query on a Friday afternoon? Yeah, I had one of those. It involved a perfectly innocent-looking Bash script, a file backup job, and… well, let's just say my rm -rf command got a little too enthusiastic. It was supposed to be a simple automation, a little helper to keep my precious project files safe. Instead, it became a cautionary tale, a digital ghost story whispered in the server room. So, gather 'round, grab your coffee, and let's talk about the time my first automated backup script decided to delete the source.
TL;DR: The Great File Purge
My very first attempt at automating a file backup with a Bash script went spectacularly wrong. I intended to copy files from a "source" directory to a "backup" directory and then clean up old backup files. Instead, a typo in my cleanup command's path sent rm -rf after the entire source directory, deleting all my original project files. Thankfully, an older backup and some quick thinking saved the day, but the lesson was brutal and immediate: always double-check your commands, especially those involving deletion, and test thoroughly in a safe environment.
The Grand Plan: Automating Peace of Mind
I was relatively new to the world of server administration, but I was eager. I'd heard all about the magic of Bash scripting for automating tedious tasks. My goal was simple: automatically back up a specific project directory (~/projects/my_awesome_app) to a dedicated backup location (/mnt/backups/my_awesome_app/). I envisioned a script that would:
1. Copy everything in ~/projects/my_awesome_app to a new timestamped directory within /mnt/backups/my_awesome_app/.
2. Clean up backup directories older than seven days so the disk wouldn't fill up.

Sounds reasonable, right? I spent a good few hours crafting this masterpiece. I used rsync for the copying part, which I thought was pretty clever. It's efficient, copies only changed files, and has a nice --delete option (which I wisely decided not to use on the source, or so I thought).
Here's a simplified version of what I was aiming for (don't judge the early drafts!):
```bash
#!/bin/bash

SOURCE_DIR="/home/myuser/projects/my_awesome_app"
BACKUP_BASE_DIR="/mnt/backups/my_awesome_app"
DATE=$(date +"%Y-%m-%d_%H-%M-%S")
BACKUP_DIR="${BACKUP_BASE_DIR}/${DATE}"

# Create the base backup directory if it doesn't exist
mkdir -p "${BACKUP_BASE_DIR}"

echo "Starting backup of ${SOURCE_DIR} to ${BACKUP_DIR}..."

# Use rsync to copy files
rsync -avzh --progress "${SOURCE_DIR}/" "${BACKUP_DIR}/"

echo "Backup complete. Cleaning up old backups..."

# This is where things went south...
# I wanted to delete directories older than 7 days in the backup location
find "${BACKUP_BASE_DIR}" -type d -mtime +7 -exec rm -rf {} \;

echo "Cleanup complete."
```
I tested the rsync part locally, and it seemed to work like a charm. The files appeared in the backup directory. I felt like a scripting wizard. Then came the cleanup. My logic was: find directories (-type d) in the backup base (${BACKUP_BASE_DIR}) that were modified more than 7 days ago (-mtime +7) and remove them (-exec rm -rf {} \;). Simple, elegant, space-saving.
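In hindsight, I could have rehearsed the whole flow more safely. A sketch (the paths here are throwaway stand-ins, not my real directories): rsync's -n (--dry-run) flag reports what it would copy without writing a single byte.

```shell
# Stand-in directories for the demo (hypothetical, created on the fly).
SOURCE_DIR=$(mktemp -d)          # stands in for ~/projects/my_awesome_app
BACKUP_DIR=$(mktemp -d)/backup   # stands in for the timestamped target
echo "demo" > "${SOURCE_DIR}/app.txt"

# -n (--dry-run) makes rsync list planned transfers while touching nothing:
# the backup directory stays empty after this command.
rsync -avhn "${SOURCE_DIR}/" "${BACKUP_DIR}/"
```

Running the script once with every rsync swapped for its dry-run twin would have cost me thirty seconds and saved me an evening.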
The "Oops" Moment: When rm -rf Gets Personal
I decided to run the script manually on the server to test the full flow. I executed ./backup_script.sh. The rsync part ran fine. The "Backup complete" message appeared. Then came the "Cleaning up old backups..." message. I was watching the terminal, feeling a sense of accomplishment.
Suddenly, my terminal flickered. Things started disappearing. Not from the backup directory, but from my ~/projects/my_awesome_app directory. My files, my code, my beautiful app… gone. Poof.
I frantically checked the find command output. It wasn't showing anything being deleted within /mnt/backups/my_awesome_app. But my source directory was empty. Completely empty.
What happened?
My find command was supposed to target directories inside ${BACKUP_BASE_DIR}. With -exec rm -rf {} \;, find runs rm -rf once for each item it matches, substituting the matched path for {}. The crucial mistake was in how find and rm interacted, and more importantly, how I thought they interacted.
Let's dissect the problematic find line:
```bash
find "${BACKUP_BASE_DIR}" -type d -mtime +7 -exec rm -rf {} \;
```
My intention was to find directories within the backup base that were old. But the {} placeholder refers to each item found by find. And what did find "${BACKUP_BASE_DIR}" -type d find? It found ${BACKUP_BASE_DIR} itself, and any subdirectories within it.
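You can see this for yourself with a throwaway directory (the paths below are invented for the demo):

```shell
# find's starting point is itself part of the result set.
demo=$(mktemp -d)
mkdir "${demo}/sub"
find "${demo}" -type d
# Lists both ${demo} itself and ${demo}/sub.
```

That top-level entry is easy to forget about when you're picturing find as something that only looks "inside" a directory.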
The issue wasn't that find was misbehaving. The issue was how I was using rm -rf and my lack of understanding about the context. In my mind, I was telling find to look inside the backup directory and delete old backup folders. But the rm -rf {} was being applied to each directory found, starting with the base directory if it met the criteria (which it wouldn't have, since it was just created).
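Knowing that, a safer cleanup (a sketch of what I'd write today, not my original script) pins find to exactly one level below the base, so the base directory itself can never match:

```shell
# Stand-in for the real backup tree (hypothetical paths built on the fly).
BACKUP_BASE_DIR=$(mktemp -d)
mkdir "${BACKUP_BASE_DIR}/2024-01-01_00-00-00" "${BACKUP_BASE_DIR}/2024-01-10_00-00-00"
touch -d '10 days ago' "${BACKUP_BASE_DIR}/2024-01-01_00-00-00"   # age one backup

# -mindepth 1 excludes the starting directory itself, so the base can
# never be deleted; -maxdepth 1 restricts matches to the timestamped
# folders directly under it; "{} +" batches the paths into one rm call.
find "${BACKUP_BASE_DIR}" -mindepth 1 -maxdepth 1 -type d -mtime +7 -exec rm -rf {} +

ls "${BACKUP_BASE_DIR}"   # only the fresh backup remains
```

The -mindepth/-maxdepth pair (GNU find options) turns "anything old, anywhere under this path" into "old entries directly inside this path", which is what the cleanup was always meant to say.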
The real horror: What if I had accidentally run this script before the rsync command? Or what if the rsync command failed midway? My find command, if pointed incorrectly, could have wiped out anything.
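One guardrail I've adopted since (my own habit, not something that was in the original script) is Bash's ${var:?} expansion, which aborts the script when a variable is unset or empty, so rm -rf can never receive a blank path and fall back to deleting far more than intended:

```shell
# ${BACKUP_DIR:?message} aborts the script with the message if BACKUP_DIR
# is unset or empty, so the command below can never expand to a bare path.
BACKUP_DIR=$(mktemp -d)   # hypothetical stand-in for a real backup dir
rm -rf "${BACKUP_DIR:?BACKUP_DIR is empty, refusing to delete}"
```

It's one extra character pair per variable, and it turns a silent catastrophe into a loud, harmless error.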
The actual rm -rf command that probably ran (or would have run, had conditions lined up differently) was rm -rf /home/myuser/projects/my_awesome_app/, because a typo in my actual script pointed find at the wrong base directory, or the SOURCE_DIR was somehow symlinked or incorrectly referenced in the find path. It's a blur of panic and Ctrl+C attempts.
The error message I saw wasn't a dramatic "Permission Denied" or even a "No such file or directory".