Moving Databases

I recently moved my personal website from GoDaddy to my home server. I have a business connection at my house, and my site gets little enough traffic that hosting at home on my static IP makes sense. Moving the files wasn't really difficult: I FTP'd them down from the old server and SFTP'd them up to the new one. Moving the database, however, was a bit more challenging.

If you have shell access, it's a pretty simple process. On the old server, type:

mysqldump -u username -p databasename > databasebackup.sql

You'll be asked for the password assigned to "username", and then mysqldump will create a file that contains all the information needed to restore your database. One thing to note, however, is that moving between different versions of MySQL can be problematic. That's where the --compatible flag is handy. You can specify what type of database software you'll be importing to, and mysqldump will try to give you a compatible file. Some options are mysql323, mysql40, postgresql, etc. Check the man page for more options and explanations of what they all do.
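For example, a dump intended for an older MySQL 4.0 server (using the same placeholder names as above) might look like this:

```shell
# Produce a dump an older MySQL 4.0 server should accept.
mysqldump -u username -p --compatible=mysql40 databasename > databasebackup.sql
```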

To restore your database file on the new server, it's just as easy.  Simply type:

mysql -u username -p newdatabasename < databasebackup.sql

That should transfer your data simply and easily. If you get errors, you might have to check that --compatible flag, or even do some more work on your database to make it compatible. One of the frustrating things with GoDaddy, however, is that you don't get shell access to your hosting account. Since my account was disabled, any MySQL tools that might be available via their website were also unavailable. That's why it's important to have backup software running regularly on your website. I was able to take an automated backup from a week earlier and simply import it into my new server.

The moral of the story, like most, is that backups are VERY important! It's great to know the tools to make a dump of your MySQL database, but if something is corrupt, you'll want a backup rather than a fresh dump. If you have any other tips for moving databases from one server to another, feel free to leave them in the comments.
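As a sketch of such a scheduled backup: the directory, retention period, and "databasename" placeholder below are all assumptions, and the real mysqldump call is commented out because it needs a live server and credentials (e.g. in ~/.my.cnf) — a placeholder stands in for it so the sketch runs anywhere.

```shell
#!/bin/sh
# Dated, compressed dumps plus simple pruning of old ones.
BACKUP_DIR="${BACKUP_DIR:-$HOME/db-backups}"
mkdir -p "$BACKUP_DIR"

# Real job (needs a server and credentials):
# mysqldump databasename | gzip > "$BACKUP_DIR/databasename.$(date -I).sql.gz"
# Placeholder so the sketch runs without a database:
echo "-- placeholder dump" | gzip > "$BACKUP_DIR/databasename.$(date -I).sql.gz"

# Keep only the last two weeks of dumps.
find "$BACKUP_DIR" -name 'databasename.*.sql.gz' -mtime +14 -delete
ls "$BACKUP_DIR"
```

Drop a script like this into cron and the "automated backup from a week ago" is always there when you need it.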


Shawn is Associate Editor here at Linux Journal, and has been around Linux since the beginning. He has a passion for open source, and he loves to teach. He also drinks too much coffee, which often shows in his writing.






blob fields

bdquick's picture

If you have blob fields, I would also add the "--hex-blob" option to the mysqldump command. I've never had a successful move/restore without it.
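In other words (placeholder names again), the dump command from the article becomes:

```shell
# --hex-blob writes binary columns as hex literals so they survive the dump/restore.
mysqldump -u username -p --hex-blob databasename > databasebackup.sql
```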

Use SQLite

Anonymous's picture

People often overlook the advantages of SQLite (assuming you're using PHP5). It has no practical limit on database size and handles nearly all standard SQL (except for some DDL operations), and all of a database's data is contained in a single file, making it ridiculously simple to migrate and back up. One thing missing in SQLite, though, is password security, so if your application is sensitive, you have to consider carefully how to protect your data (locating the file outside the web server document root, etc.).
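Because an SQLite database is a single file, migrating or backing it up can be as simple as the following (the file paths are made up; the sqlite3 shell's .backup command produces a consistent copy even while the database is in use):

```shell
# Consistent copy via the sqlite3 CLI, safe while the app is running:
sqlite3 /var/www/data/site.db ".backup '/home/me/site-backup.db'"

# Or, with the application stopped, a plain copy is enough:
cp /var/www/data/site.db /home/me/site-backup.db
```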

SQLite performance

tracker1's picture

SQLite doesn't perform very well under even moderate load. For a smaller site it may be enough, and for a standalone or workgroup app, sure... but if you're doing a lot of commits/writes on an application with more than a few dozen simultaneous users, it can become a bottleneck, unless you run SQLite on an SSD, that is.

A server-based RDBMS is really good for data caching as well as raw performance. I'm pretty fond of FirebirdSQL myself, since it can run standalone/embedded and later migrate to a server-based instance. Though it's a bit less popular, it does have some great features. I also like PostgreSQL a lot. I'm not much of a fan of MySQL to begin with.

Michael J. Ryan -

Watch out for databases with

Jacques's picture

Watch out for databases with stored procedures/functions; in that case you should run mysqldump with the --routines switch. It's a pain in the backside if you don't.
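That is (placeholder names; --events similarly pulls in scheduled events, if you use them):

```shell
# Include stored procedures/functions and scheduled events in the dump.
mysqldump -u username -p --routines --events databasename > databasebackup.sql
```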

moving a hot database

Jackie Craig Sparks's picture

How about writing a script in PHP that takes advantage of table and row locking? Just a thought; what does everyone else think?

moving the mySQL files

Ashlin's picture

Well, I had to move several databases, with user privileges (not just the root one), from the old server to a new one. The dump approach wasn't great for me (maybe I'm wrong...) because I was afraid of losing the MySQL user privilege definitions and having to redefine them from scratch. So the solution I found was to copy MySQL's data folder (with the .MYD files) to the MySQL folder on the new server. The result: everything works fine, without too many things to do! But one question remains: ever since I started searching for MySQL migration advice on the net, the answer is always the mysqldump method, never the simple file-copy method. Is there any drawback I don't know about? Or is it just the barbarian (but valuable?) way to do a MySQL backup?
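For reference, the raw-copy approach described above might look roughly like this (init scripts and paths vary by distro, both servers should run the same MySQL version, and these names are assumptions, not a tested recipe):

```shell
# On the old server: stop MySQL so the files are quiescent, then archive.
sudo /etc/init.d/mysql stop
sudo tar czf mysql-data.tar.gz -C /var/lib mysql
scp mysql-data.tar.gz newserver:/tmp/

# On the new server (with its MySQL stopped as well):
#   sudo tar xzf /tmp/mysql-data.tar.gz -C /var/lib
#   sudo chown -R mysql:mysql /var/lib/mysql
#   sudo /etc/init.d/mysql start
```

Note that this carries the mysql system database along with everything else, which is why the user privileges survive the move.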

There's no garentee it will

Anonymous's picture

There's no guarantee it will work across versions, or even patch releases.

Thanks for that precisions.

Ashlin's picture

Thanks for the clarification. Of course, I forgot to mention that I shut down the services before doing such a migration. As I said, everything I've read is about mysqldump. Even today, I found a post about "Copy and Backup MySQL Databases with mysqldump".
Check this link :
It could be a valuable complement to this article :-)

Thanks Shawn ! Your posts are really great !

datafile copy ...

Anonymous's picture

A database dump, like Oracle's expdp utility, is always internally consistent.
A plain copy of the data files may be inconsistent because transactions are running and not all the blocks on the filesystem are aligned to the same state, so if you start MySQL with corrupted data files it can fail to work properly at the data level (maybe it works in most cases, but...).

Always choose the dump solution, or shut down the MySQL server and copy the data files while it is not running, or configure a MySQL replica, and so on.

that's it.

Since he mentions he shuts

cowardly Larry's picture

Since he mentions he shuts down all services first, I do not see the logic of your post! There should be no more transactions.


Anonymous's picture

his post on Tue, 08/16/2011 - 10:04.
my post on Mon, 08/15/2011 - 19:15.

c ya


Shawn Powers's picture

I've actually done it that way too, but you have to have intimate control over your servers in order to have that sort of access. Also, the MySQL server process needs to be shut down before moving the files. And since you're just plopping the data down, I'm guessing there could be serious problems if the two servers aren't running similar versions.

But that said, I've done it too, and it worked. The mysqldump stuff is important when you don't have full access to the servers.


not only mysql version is the

Anonymous's picture

Not only is the MySQL version a problem, but so is the MySQL configuration.
In some worst cases you can't access data at all from some tables: same version, same data files, different configuration.
I learned this the hard way when a colleague tried to tune a production MySQL database by changing some configuration parameters: suddenly no data was available from certain tables. Shutdown, reconfigure and startup did work, but... what the hell!

The second point is about server access: if you want to copy the data files directly, you necessarily must have server access. Moreover, in most cases (with standard MySQL configurations) you have to be root on Unix/Linux machines, so where's the problem about server control? MySQL on Windows machines is a movie I don't want to see anymore.

Yes, if you don't have server access, mysqldump lets you dump data remotely, but only if you have IP/host permissions (and you certainly cannot copy the data files).
Do that only if you have no other way, because the network load can be significant on production and large databases: schedule the dump in a low-load window. Writing a dump to a 15k rpm disk can take dozens of seconds; over the net it can take hours...
If you have a 10MB database, simply don't worry and go :)
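For completeness, a remote dump might look like this (the hostname is an assumption; --compress enables client/server protocol compression, which helps with exactly the network load described above):

```shell
# Pull the dump over the network with protocol compression, then gzip it locally.
mysqldump --compress -h oldserver.example.com -u username -p databasename \
    | gzip > databasebackup.sql.gz
```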

I understand people like

Mozai's picture

I understand people like phpMyAdmin, but it gives me the willies. Especially when I get a dozen to a hundred attempts daily from bots to get into my phpMyAdmin files... despite the fact that I don't have PHP nor MySQL installed on this server.

I do look after a wordpress site. I do daily backups of the database with the following:

/usr/bin/mysqldump ${DBSCHEMA} | gzip --stdout \
  > /home/${SITEOWNER}/backups/${DBSCHEMA}.$(date -I).sql.gz

Eventually I'll figure out something with temp files and zdiff for a form of incremental backups. (I know, I know, real incremental MySQL backups would mean using binlogging, but binlogs can only be loaded back into MySQL databases, and I'm not confident they can even be reloaded into MySQL databases with different version numbers.)

I've moved my blog many...

tuanht's picture

Hello, I'm using WordPress. Because it's hosted on free hosting, I often move my blog from server to server; I've done this many times. For database backups, the export tool in phpMyAdmin is very good; it hasn't had compatibility problems for me. Besides, backing up all the blog files is very easy with cPanel: compress them on the server and download the compressed file.

Copying the data dir itself

Anonymous's picture

Copying the data dir itself, using rsync, works especially well for me for backups.
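A sketch of that (paths and init script are assumptions; the server should be stopped, or the tables flushed and locked, while the copy runs):

```shell
# Mirror the quiesced data directory to a backup host.
sudo /etc/init.d/mysql stop
rsync -a --delete /var/lib/mysql/ backuphost:/srv/mysql-backup/
sudo /etc/init.d/mysql start
```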