It’s been a while since I’ve publicly updated this script, but I have made some tweaks over the years that I’d like to share.
Here are the changes since the last version:
- Delete old backups via FTP
- Backup to multiple FTP servers
- More efficient backups
- Add time in filename (allows for multiple backups/day)
- More verbose/better error detection
I have moved the development of this to the GitHub repository below:
You can view the previous version here.
If there’s a feature that’s missing that you’d like to see, leave a comment below.
Ameir, I’m very glad to read that you’re keeping this AMAZING script updated. Here are some stats for you, and others that might be helpful and interesting:
1) I’ve been using your MySQL backup script for about 5 years!
2) I use it to back up about 14 of my most important DBs
3) My largest DB is 3 GB in size and compresses down to a 1 GB file. NOTE: the special version you created allows me to ignore the search index tables… with those tables, my DB is 6.7 GB!
4) I back up all the DBs every day around 4:00 am EST without any problems
5) These backups have saved my butt about 30 times… I mean REALLY REALLY saved me, my businesses, not to mention all my tens of thousands of users!
I’m interested in updating my script for the regular backups that I do, but is there a chance you could create a skip-tables version of this update for me?
Also, I’ve learned the hard way, “If it ain’t broke, don’t fix it!” That being the case, since your earlier version is working so amazingly well, do you think the pros of updating outweigh any potential cons?
Keep up the great work, and please shoot me an email when you have a second.
Ameir, a question: I see in the new version you have “Delete old backups via FTP”. With the old version, I keep 3 days locally and then I’ve been manually deleting backups over 30 days off the FTP site.
I only see one spot for “delete old files” and “DAYS=”. Does this apply to local, FTP or both? If it applies to both, would it be possible to break it out so we could set different options for local and remote? Example:
Hey Rob! Wow, I’m glad to see that the script has gone so far for you. It’s saved me plenty as well, but I’m definitely not pushing the amount of data you are; I’m happy to hear that this scales well.
There aren’t many significant changes between this version and the previous one; if you don’t need to back up to multiple FTP servers or delete FTP backups automatically, this version doesn’t bring in anything too enticing.
Regardless, if you’d like for me to merge this version with the one I modified for you a while back, it shouldn’t take too much effort. There aren’t any cons with upgrading. Just let me know what you’d like to do and I can work at it in my free time.
It’d be great if I could do that, but I unfortunately can’t think of a straightforward way to do it. I cheat a little bit here with the FTP deletions because I check for the “old” files on the local server, and blindly attempt to delete them from the FTP server. Those files might not even exist on the FTP server, but we try to delete them anyway.
In short, if you delete files off of the local server every 3 days, we’d only be able to delete them from the FTP server every 3 days. Does that make sense?
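The “cheat” described above can be sketched roughly like this (directory layout and file names here are illustrative; the real script drives an actual FTP session with its own variable names):

```shell
# Find "old" backups on the LOCAL disk, then blindly build FTP "delete"
# commands for files with the same names, whether or not they still
# exist on the remote server.
BACKDIR=$(mktemp -d)            # stand-in for the real backup directory
DAYS=3

touch -d "5 days ago" "$BACKDIR/db-2013-old.sql.gz"   # an "old" backup
touch "$BACKDIR/db-2013-new.sql.gz"                   # a fresh backup

# Build the list of FTP delete commands from the locally expired files
CMDS=$(find "$BACKDIR" -name '*.sql.gz' -mtime +"$DAYS" \
        -exec basename {} \; | sed 's/^/delete /')
echo "$CMDS"
```

The consequence is exactly what’s noted above: the remote retention window can only mirror the local one, because the delete list is derived from what’s aging out locally.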
A bit over my head, but I think I understand. Maybe I should stick with your previous version since I really like having many more day backups on my remote server, but only keeping a few days locally.
The script doesn’t delete the uncompressed mysqldump output (the .sql file) after it gzips it. Can you work this into the script?
Excellent script BTW!
What operating system are you running on? I’ve tested this script on Debian and RedHat-based OSes, and the gzip included doesn’t leave behind the uncompressed file. On one of my Ubuntu 10.04 servers, gzip -V gives version 1.3.12.
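For reference, stock gzip compresses in place: it replaces the input file with the .gz file rather than leaving an uncompressed copy behind, which you can verify directly:

```shell
# Demonstration: gzip removes the original file after compressing it
f=$(mktemp)
echo "some data" > "$f"
gzip "$f"

[ ! -e "$f" ] && echo "original removed"
[ -e "$f.gz" ] && echo "compressed file exists"
```

If the .sql file is being left behind, something in the environment (an aliased gzip, a wrapper, or a failed compression) is the more likely culprit than the script itself.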
Dear Ameir, I have used your old and new versions on our live servers. It’s great for taking a complete backup of MySQL. But I have a problem only with the FTP server: while extracting the compressed .gz file, I get the error “invalid compressed data–crc error & invalid compressed data–length error”. FYI: on the local machine it gunzips properly. Please help us get rid of it. Thank you.
My guess is that the files are being transferred in ASCII mode and not binary, which is the default on my machines. Try this:
In the section of the script that starts with “ftp -nv”, add the “binary” command to the FTP session so files are transferred in binary mode.
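The fix amounts to issuing `binary` inside the FTP session before the `put`. A sketch, assuming the script drives `ftp` with a heredoc (the variable names and session details here are illustrative, and running it requires a live FTP server):

```shell
# Fragment of a scripted FTP upload; "binary" forces binary-mode transfer
# so gzip files aren't corrupted by ASCII-mode newline translation.
ftp -nv "$FTPHOST" <<EOF
user $FTPUSER $FTPPASS
binary
cd $FTPDIR
put $BACKUPFILE
quit
EOF
```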
Any chance you could make a tandem script, or modifications to this one, so that it can also make backups of files located on the server? I’m about to use this script to routinely backup my mysql database of my phpBB forums… however, those backups are worthless if I do not have a copy of the phpBB directory structure, with all of the included modifications to the phpBB source files!
Any help or guidance would be greatly appreciated!
I wrote what you’re looking for a while back and it still serves me well. Check out:
Let me know how it works out for you.
Your script is great. I would like to back up some selected tables from one DB; how could I do this?
Any chance of exporting it as csv?
There’s no native support, but you can do this:
Just add those options to the line in the script that does the dump. Let me know how it works out.
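For example, `mysqldump` accepts table names after the database name, and `--tab` writes one tab-delimited .txt file per table, which can be made CSV-like with `--fields-terminated-by`. Note that `--tab` requires the FILE privilege and writes the files on the server host. The credentials, database, and table names below are placeholders:

```shell
# Dump only selected tables from one database
mysqldump -u "$USER" -p"$PASS" mydb table1 table2 > selected_tables.sql

# Export tables as delimiter-separated text (one .txt per table, server-side)
mysqldump -u "$USER" -p"$PASS" --tab=/tmp/export \
  --fields-terminated-by=',' mydb table1
```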
Please see this post and those surrounding it; it should resolve what you’re looking for.
Thanks very much Ameir, this is just what I was looking for!
Great script, but I think I have found a small bug in 2.2…
The OLDDBS variable is only set if you have FTP=y. So if you have FTP=n and DELETE=y, the delete functionality will not work, as OLDDBS is unset.
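A sketch of the fix, using the script’s variable names but an illustrative directory layout: compute OLDDBS unconditionally, before the FTP block, so local deletion still works when FTP=n.

```shell
BACKDIR=$(mktemp -d)     # stand-in for the real backup directory
DAYS=3
DELETE=y
FTP=n

# one backup older than $DAYS, one fresh
touch -d "5 days ago" "$BACKDIR/db-old-mysqlbackup.sql.gz"
touch "$BACKDIR/db-new-mysqlbackup.sql.gz"

# the fix: set OLDDBS here, not inside the "if FTP=y" branch
OLDDBS=$(find "$BACKDIR" -name '*mysqlbackup.sql.gz' -mtime +"$DAYS")

if [ "$DELETE" = "y" ] && [ -n "$OLDDBS" ]; then
    rm -f $OLDDBS
fi
```

With FTP=n, the old backup is still removed locally; only the fresh one remains.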
I found your script on another site where a guy had made a few improvements to it. I’ve added it to GitHub because I never want to lose it – I use it a ton! I still have your name on the project, of course. Any chance you’d be willing to apply your latest updates to the copy of the script he modified?
Thanks for contacting me about this. I’m glad to hear that the script has been useful to you.
I’ll devote some time this week to comparing my script to the one in Github and merge my changes wherever I can.
I’ll ping you here once that is complete.
Hello! I’ve been using the 2.1 version of this script for a few years, and have been very happy with it. I just recently updated to the 2.3 version – thank you for all your hard work!
The new script does a count of bytes transferred, but I don’t see a setting / option to turn this on or off… it produces a rather verbose email on some of my larger databases. Is there a setting to turn it off?
I should probably remove this by default, as it’s really only helpful when running the script interactively.
To turn off the byte count, you simply have to remove the line that says ‘tick’.
Let me know if you run into any issues.
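For reference, `tick` is an ftp-client command that toggles the byte-counter output, so deleting that one line from the scripted session silences it. A rough sketch of what the relevant section looks like (session details are illustrative and require a live FTP server):

```shell
# To silence the byte-count output, delete the "tick" line below
ftp -nv "$FTPHOST" <<EOF
user $FTPUSER $FTPPASS
tick
put $BACKUPFILE
quit
EOF
```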
I’m seeing the same issue – I do not want it to delete files off my FTP server, but it’s not deleting them from the local server, either.
I’m running the 2.3 version at the moment.
Thank you for all your work on this – it’s been a lifesaver!
Works fine – thank you! 🙂
Or… never mind me, it’s working fine. For some reason, it seemed to take an extra day to kick in and clean up after itself.
Question – is it possible to set the script to clean up the local files, but not the remote ones? It’s good that it’s freeing up space on the server, but with 9 TB on my remote FTP server, I don’t need to delete as often, if at all. (Maybe different times for local and remote?)
I’m glad to hear it’s working out well for you.
Not deleting files on the remote FTP server is a piece of cake; simply remove the line “$REMOVE”. You can also remove the line starting with “REMOVE” (without the ‘$’), but it doesn’t help or hurt to leave it.
I hope that helps!
Been using the script for a while now, thanks!
However, I started using the email function, but it spits out an error:
[code]Error sending message, child exited 75 (Deferred.).
/backup/mysqlbackup-2.3.sh: line 111: 13033 Done echo “$BODY”
13034 Segmentation fault | mutt -s “$SUBJECT” $EMAILS $ATTACH
ERROR: Your backup could not be emailed to you! [/code]
I had a similar problem in the past. It turned out that my backups were larger than Postfix allowed for a message.
An easy way to check is to telnet to your server on port 25 and type ‘ehlo localhost’. You should get a line with something like ‘250-SIZE 10240000’.
If the size given is smaller than your combined backups, you’ll need to change your MTA’s setting; just make sure the recipient address can handle the filesize too.
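A quick way to run that check non-interactively, plus the Postfix knob for raising the cap (the values are illustrative, and both commands assume a running local MTA):

```shell
# Ask the local MTA what maximum message size it advertises
printf 'EHLO localhost\r\nQUIT\r\n' | nc -w 5 localhost 25 | grep -i SIZE
# a response like "250-SIZE 10240000" means roughly a 10 MB cap

# For Postfix, raise the limit (here to 50 MB) and reload
postconf -e 'message_size_limit = 52428800'
postfix reload
```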
We have been using the script for some time, and it works great for us. Recently I’ve had some trouble configuring it due to a change in our servers. My question is: can I use an IP address instead of a hostname at
“# your MySQL server’s name”
and, if I am using an IP, should I also provide that IP at the HOST (MySQL server location) setting?
“# your MySQL server’s location (IP address is best)”
One of the best backup scripts I’ve ever used.
Helped me so much, thank you Ameir!
The SERVER variable is just for naming purposes (filename). The actual HOST variable needs to be the IP or hostname where your MySQL server is. I hope that helps!
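In config terms (the values below are purely illustrative):

```shell
SERVER=dbserver01      # cosmetic label; only used to build the backup filename
HOST=192.168.1.50      # where the script actually connects; IP or hostname both work
```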
first of all thank you for this script, it’s really great!
Maybe it’s just a mistake I don’t see, but I’m facing an unpleasant issue: the script works perfectly when launched via SSH but doesn’t work from crontab.
The system is Ubuntu LTS server, 64-bit edition; it seems that in “interactive” mode the database gets backed up, while in “cron” mode it doesn’t, but I get no specific error or warning.
What are the minimum user permissions required for this script? I have looked online and found various permissions suggested. I want to grant as few as possible for the MySQL dumps.
I have an EC2 instance connected to an RDS MySQL instance. I run a cron job from a local server to perform this MySQL dump, and it has started spiking the CPU and taking down the EC2 server that is connected to the RDS instance. I see a high volume of DB connections on RDS, but I’m not sure how that causes the EC2 instance, which is only connected to the RDS MySQL database, to spike its CPU and go down.
Thanks for any ideas related to permissions.
Hello! Firstly, thanks for the script; it’s really cool. The script sends email reports with the backup .sql.gz file attached. I don’t want this; I just want to email the report, not the .sql.gz file. How can I do this? Thanks.
This is a great question. I found the following that should be of use:
It would also be helpful to monitor processes as the backup is being taken to get some insight on what processes are consuming the most resources. For example, gzipping a large database dump can be quite intensive, and if you don’t have enough CPUs on your server, it can grind to a halt.
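On the least-privilege question, the typical minimal grant for mysqldump looks something like the following. Treat it as a starting point rather than this script’s official requirement; the exact privileges vary with the dump options you use (e.g. --events needs EVENT), and the user/password names here are placeholders:

```shell
# Create a backup-only MySQL user with the usual minimal dump privileges
mysql -u root -p <<'SQL'
CREATE USER 'backup'@'localhost' IDENTIFIED BY 'change-me';
-- SELECT to read data, LOCK TABLES for the default locking behavior,
-- SHOW VIEW and TRIGGER if your dumps include views and triggers
GRANT SELECT, LOCK TABLES, SHOW VIEW, TRIGGER ON *.* TO 'backup'@'localhost';
SQL
```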
I’m glad the script is working out well for you.
To not send the dumps as an attachment, you can simply put a # in front of the line that starts with “ATTACH=”. You will then get the same email, but without the attachments.
Hope that helps!
I haven’t experienced this issue before; are you sure the script is being executed? It will help to set MAILTO to your email address in your crontab so you can see what’s going on.
I’d recommend using FULL paths in your crontab just to be safe, and ensure that the script is executable. You could also just try, e.g., “/bin/bash /full/path/mysqlbackup.sh”.
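A crontab sketch with both suggestions applied (the address, path, and schedule are placeholders):

```shell
# crontab -e
# MAILTO makes cron email you the script's output, which helps debugging
MAILTO=you@example.com
# run daily at 04:15 with an explicit interpreter and a full path
15 4 * * * /bin/bash /usr/local/bin/mysqlbackup.sh
```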
Let me know if you have any updates.
Thanks, it’s okay
Hello! Can I use another FTP port, maybe 2121?
FTPHOST=”host:2121″ does not work:
ftp: host:2121: Name or service not known
Ameir, old friend, I hope you are doing very well! Your script is still going strong after all these years and it continues to be one of my favorite downloads of all time!
Two questions for you:
I’m in the process of making some changes to websites and am going to be putting them all on the same platform, which means they will all have almost exactly the same database structures. You created the special version of your script that allows me to list the tables to exclude (brilliantly implemented BTW) and I’ve been using it for my largest single site. I’m thinking about now using this one script for all of my sites. My question: Is there a way to edit the script to work with multiple DBs? I know it is currently set to only work with one DB. How difficult would it be to set it to work with multiple DB’s?
My webhost where I store backups is no longer working with FTP and I must connect with SFTP. Is SFTP an option or should I just find a different webhost for backups?
Thanks for your great work on this script. I’d also like to connect via email, so please message me!!!
Hey Rob, long time! I’m glad to see you’re back; I hope you’re doing well.
1. That’s definitely doable. Will you be excluding the same table for each of the databases, or will that vary? To clarify, will the databases be on the same host, or spread among multiple?
2. From a quick search, it looks like SFTP can be supported without a ton of extra modification. I will dig a bit deeper and do some testing on this. Be on the lookout for an email with a script to test 🙂 . I’ll try to support both, but will you be using passwords or SSH keys? It’s probably about time this supports SSH-based protocols anyway, for security reasons.
Thanks for your message! I’m glad to have you back.
I’ve re-implemented the script I gave you before to include/exclude certain tables from a database backup. Would you mind testing that it works as expected? Once that’s complete, I’ll start tackling the SFTP side of things.
The updated script can be found here:
Ameir, I’m very sorry for the delay, I didn’t receive an email notification of your replies (even checked my spam folder) and just came back here for a check and was pleasantly surprised to see your replies!
Table Skip: Yes, I’ll be excluding the exact same table for all the DB’s. I just double checked and they all have the exact same name and prefix: xf_search_index
I see in your update that I can list the DB.table to skip per db/table combo, is that correct?
SFTP: I’m not too familiar with it, so I may not be using passwords or ssh keys, but maybe I should?
I did a quick backup and it seems to be working fine, but I’ll need to try a more detailed restore to fully test it. I’ll keep you posted!
BTW, is it me or did the filenames get much longer? 😉
Hey, Rob! No worries, I switched to WordPress from Serendipity, and lost the email-on-reply feature in the process. I’ll have to look into that.
Has the alternate script been holding up okay? If so, I can go ahead and add the SFTP feature to it. SFTP relies on SSH, so it uses the same auth schemes as SSH. That (typically) means you can login with a username and password to use SFTP, or you can create a public/private keypair, and give the SSH server your public key to trust. When you connect to the SSH server for SFTP, a key exchange occurs, and SSH knows it should trust you, and let you in. This is definitely the preferred way if you’re planning on automating things (or using a script like this 🙂 ).
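The key-based setup described above usually boils down to three commands (the host, user, and key names here are illustrative, and the last two need a reachable SSH server):

```shell
# 1. Generate a keypair with no passphrase (suitable for unattended cron use)
ssh-keygen -t ed25519 -f ~/.ssh/backup_key -N ''
# 2. Install the public key on the backup host so it trusts this machine
ssh-copy-id -i ~/.ssh/backup_key.pub user@backuphost
# 3. SFTP in without a password prompt
sftp -i ~/.ssh/backup_key user@backuphost
```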
And yes, the filenames did grow in size a bit. Some folks do backups multiple times a day, and relying on just the day in the filename wasn’t sufficient, as consecutive backups on the same day just overwrote the previous backup.
I look forward to hearing back!
Actually, no need to worry about SFTP! This was a good kick for me to change my whole backup process to include Amazon S3 into the lineup, and so far your script + some S3 magic is working GREAT! I linked to your script from my writeup here: http://www.nifty-stuff.com/amazon-s3-backups-large-files-from-vps-to-s3-using-s3cmd.php
Skipping selected tables is working GREAT and saving me a ton of processing time and storage space!
Regarding the file naming convention: when I try to restore backups, I run into problems because of the super-long hostname and the periods it contains. I’d like to strip the hostname from the script, but I worry about breaking it. Can you point me to the line in the code and what I can remove so the filename is simply DATABASENAME_DATE.sql.gz?
Hello! I recently moved web hosts, and have found that on the new host, Passive FTP doesn’t seem to work so well. Other forms of backup I use (Akeeba Backup for Joomla) require me to specify Active over Passive FTP to make the download work. When CRON runs the backup script, I am seeing the following in the results email:
250 CWD command successful
local: serv01.vm2284.sgvps.net-eodra_db-12-13-2013-0645-mysqlbackup.sql.gz remote: serv01.vm2284.sgvps.net-eodra_db-12-13-2013-0645-mysqlbackup.sql.gz
227 Entering Passive Mode (71,167,153,20,4,1).
ftp: connect: Connection timed out
FTP transfer complete!
The file is generated locally, and stored there, but not FTPed down to the remote server successfully.
Can you assist in enabling Active FTP?
Ah, I think I figured it out – by adding the command “passive” to the FTP process, it toggles away from Passive mode (the default) to Active instead. This resolved my issue.
I like the new split-out Config Script, BTW! If I could make a suggestion – make both the Tick, and the Passive / Active FTP, items that could be toggled in the Config File as Y / N.
I’m glad you got it sorted out! I’ll make some updates in order to be able to toggle this.
Also, passive FTP is defined in the script with ‘ftp -nvp’; removing the ‘p’ there would make it active.
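So the two variants look like this (flags as in the stock BSD/netkit ftp client; $FTPHOST is a placeholder):

```shell
ftp -nvp "$FTPHOST"   # -p: start in passive mode (the script's default)
ftp -nv  "$FTPHOST"   # no -p: active mode; or type "passive" mid-session to toggle
```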
Hey Ameir, I was wondering if you had any suggestions on my question above regarding file naming convention and which bits I should edit to remove “host” & “mysqlbackup” from the filename?
Hey, Rob! My apologies for not seeing your last message earlier. Those things can definitely be updated; I should probably allow for them to be more customizable down the road.
The date format did get longer, and can be changed here:
If you want to remove ‘mysqlbackup’ from the filenames, it’ll be a tad more work, but doing a find/replace to remove it should do. You can do that with the $SERVER variable as well if you don’t need it, or simply set it to something nice and simple here:
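A sketch of what the edited naming might look like (the variable names are illustrative; the real script builds the name inline, so you’d adapt this to the relevant line):

```shell
# Build a short DATABASENAME_DATE.sql.gz name,
# dropping both $SERVER and the "mysqlbackup" suffix
db=mydatabase
DATE=$(date +%m-%d-%Y)
FILENAME="${db}_${DATE}.sql.gz"
echo "$FILENAME"
```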
Awesome blog post, btw! I might start looking into using S3 for backups in lieu of these storage VPSes I have.
Excellent, thanks Ameir!
What’s incredible is the tiny cost of using AWS Glacier for longer-term backup storage. I almost can’t fathom it being only $0.01 per GB! My VPS charges $3/month per extra GB… that’s 300x more expensive than Glacier!! :O