Synchronizing FTP Files with Perl

The author explains ftpsync, his script that automatically uploads changed files and directories from a local site.
A Brief Discussion of the How

In order to execute the synchronization, the script collects information from two filesystem trees, one local and one remote. I chose to collect all the information from both trees and perform the synchronization later. In my experience, this translates to cleaner and more maintainable code than if I tried to do everything in one pass. I'll store the data about the remote and local trees in two hashes, which I declare and initialize at lines 36 and 37.

               36   my %rem = ();
               37   my %loc = ();

Once the information is safely captured in the corresponding hashes, the code can focus on the differences and take the appropriate actions.
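To make the two-pass approach concrete, here is a minimal, self-contained sketch (not ftpsync itself; the entries are made up) of how two such hashes can be compared once both trees have been collected:

```perl
use strict;
use warnings;

# Hypothetical snapshots of the two trees, keyed by relative path
# (ftpsync fills these from find() and Net::FTP; these are made up).
my %loc = (
    'img/logo.png' => { mdtm => 900,  size => 430, type => 'f' },
    'index.htm'    => { mdtm => 1100, size => 120, type => 'f' },
);
my %rem = (
    'index.htm'    => { mdtm => 1000, size => 118, type => 'f' },
);

# Once both snapshots exist, the sync decision is a simple comparison:
# upload anything missing remotely or newer locally.
my @to_put;
for my $name (sort keys %loc) {
    push @to_put, $name
        if !exists $rem{$name} or $rem{$name}->{mdtm} < $loc{$name}->{mdtm};
}
print "would PUT $_\n" for @to_put;
```

Separating collection from comparison is what keeps the later phases of the script so short: each phase is just a loop over one hash consulting the other.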

Finding Out about the Local Files

When matching the contents of the remote FTP site with those on the local copy, it is important to compare apples to apples. Because the filesystem layout is not always straightforward, I chose to compare relative pathnames. Therefore, before looking for the local files, I do a chdir() to the path the user specified with the -l option, as seen on line 44.

               44   chdir $opt_l or die "Cannot change dir to $opt_l:   $!\n";

After this step, I use find(), as provided by File::Find, to traverse the local tree. Below is the code from line 46 to 69. I will explain this code bit by bit, I promise.

               46   find(
               47        {
               48            no_chdir       => 1,
               49            follow         => 0,   # No symlinks, please
               50            wanted         => sub
               51            {
               52                return if $File::Find::name eq '.';
               53                $File::Find::name =~ s!^\./!!;
               54                if ($opt_i and $File::Find::name =~ m/$opt_i/)
               55                {
               56                    print "local: IGNORING $File::Find::name\n" if $opt_v;
               57                    return;
               58                }
               59                my $r = $loc{$File::Find::name} =
               60                {
               61                    mdtm => (stat($File::Find::name))[9],
               62                    size => (stat(_))[7],
               63                    type => -f _ ? 'f' : -d _ ? 'd'
               64                        : -l $File::Find::name ? 'l' : '?',
               65                };
               66                print "local: adding $File::Find::name (",
               67                "$r->{mdtm}, $r->{size}, $r->{type})\n" if $opt_v;
               68            },
               69        }, '.' );

At line 48, I tell find() that I don't want the script to chdir() into each directory along the way using the no_chdir argument. I want to be absolutely sure that I am seeing path names that are consistent and relative to whatever the user specified with the -l option.

At line 49, I prevent following symbolic links with the follow argument. I do not want to deal with them mainly because of the infinite loops they can introduce when pointing upwards in the filesystem. I do not use symlinks in my web site, so I see no problem with this.

At line 50, finally, there's some interesting code. find() accepts a user-supplied function that will be called for each filesystem object found. I specify it using the wanted argument, although there are various ways to invoke this function. Please see perldoc File::Find for more information.

The wanted argument requires a reference to a sub, which I define in lines 50 through 68. Line 52 makes sure that I do not include the current directory (.) in the collection. find() passes the complete path name of each filesystem object in the $File::Find::name scalar.

The find() function, when used as shown, produces relative path names, such as ./dir/foo.htm. However, I consider it easier to work with names such as dir/foo.htm. Both are legal forms of relative paths. To accomplish this, line 53 removes the leading ./ from all the pathnames.

After this is done, a check is performed on lines 54 through 58 to see if the current pathname matches the ignored regular expression specified with -i. This is a matter of a simple pattern match that exits the sub { .. } if a match is detected, after providing an informational message when the -v option is specified.
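The script compiles the user's -i pattern once with qr// and then reuses it for every pathname. A standalone illustration of that idiom (the pattern and paths here are made-up examples):

```perl
use strict;
use warnings;

# Precompile the ignore pattern once, as ftpsync does with
# $opt_i = qr/$opt_i/, then reuse it against every path.
my $ignore = qr/\.bak$|^tmp\//;

my @paths = ('index.htm', 'notes.bak', 'tmp/scratch.txt');

print "local: IGNORING $_\n" for grep { $_ =~ $ignore } @paths;

my @kept = grep { $_ !~ $ignore } @paths;
print "keeping $_\n" for @kept;
```

Compiling with qr// avoids re-parsing the pattern on every match, which matters when the wanted sub runs once per filesystem object.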

By line 59 all tests have passed, so we should collect the information from this file. I'll collect three elements of information: the modification time or mdtm, the file size and the file type. The first two are collected with the stat() call, which returns a list of values. Notice that I use the special stat(_) construct in line 62.

It turns out that stat() is a somewhat expensive operation, so it is better to do as few of them as possible. When Perl sees bare _ as an argument to a function or operator that would cause a call to stat(), it uses the results of the last one performed. Therefore, the above construct causes only a single stat() call, even when Perl's stat() function is used more than once. The same applies to the -x file operators that I use in lines 63 and 64 to assign a type to this file.
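The effect of the bare underscore can be seen in a tiny standalone demo: only the first stat() touches the filesystem, and every later use of `_` reuses its buffer.

```perl
use strict;
use warnings;

# One real stat() against the current directory...
my @st = stat('.') or die "stat: $!";

# ...then each use of the bare underscore reuses that buffer
# instead of issuing another system call.
my $mdtm = (stat(_))[9];            # same value as $st[9]
my $size = (stat(_))[7];            # same value as $st[7]
my $type = -f _ ? 'f' : -d _ ? 'd' : '?';

print "mdtm=$mdtm size=$size type=$type\n";
```

Note that operators which perform their own lstat(), such as -l applied to a filename, refresh the buffer, so the order of tests matters.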

All of this information, kept in a reference to an anonymous hash, is stored in %loc. Additionally, a copy of the last entry is kept in the $r lexical scalar to provide the informational message in lines 66 and 67, if the -v option was specified. So, when find() calls this sub for each object found in the filesystem, %loc will be populated with the data for the whole tree.




When I try to run this

Anonymous's picture

When I try to run this script, I get these errors:

/var# /usr/bin/perl /var/
"my" variable $l masks earlier declaration in same scope at /var/ line 170.
"my" variable $r masks earlier declaration in same scope at /var/ line 218.
Global symbol "$type" requires explicit package name at /var/ line 124.
syntax error at /var/ line 126, near "/f/;"
Execution of /var/ aborted due to compilation errors.

I hope someone can help me with this.


warning - :Cannot cwd to ....svn: Permission denied

John Plumridge's picture

Hi, a very useful script, and neat.
I got a warning like this when I accidentally left a .svn directory in a duplicated image directory in my site folder, and it halted the transfer (options = -cpv). So I deleted these .svn directories on the local machine and checked the remote for their absence, but the warning ":Cannot cwd to /public_html/_img/icons/.svn/public_html/_img/icons/.svn: Permission denied" persists! These .svn directories don't exist any longer.
Why is this? Please email me if you know!

Just use lftp

Martin Guy's picture

Thanks for the perl tutorial, but there's already a utility to do this: the "mirror" command of lftp (or "mirror -R" if you want to upload).


Bug report: Doesn't like directories with spaces

David's picture

ftpsync seems to have a problem handling directory names like "test 1" and "test 2" with embedded spaces in the names. It will always mark all files in these directories as needing to be uploaded.

Unfortunately, my Perl-fu is insufficient to post a fix for this.


Anonymous's picture

A feature I'd like to see:
- Autodetection of the time difference. That is best done by uploading a dummy file and then reading its timestamp.
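The idea above can be sketched with Net::FTP. This is a hedged, hypothetical sketch, not part of ftpsync: the probe filename is made up, and `probe_offset` is shown but not called here, since it needs a live server.

```perl
use strict;
use warnings;
use Net::FTP;

# Given the mdtm the server reports for a file we just uploaded and our
# own clock reading from just before the upload, estimate the -o offset.
sub estimate_offset {
    my ($server_mdtm, $local_time) = @_;
    return $server_mdtm - $local_time;
}

# Hypothetical probe (server, credentials, and filename are placeholders):
sub probe_offset {
    my ($ftp) = @_;
    open my $fh, '>', 'ftpsync-probe.tmp' or die "probe: $!";
    print $fh "probe\n";
    close $fh;

    my $before = time;
    $ftp->put('ftpsync-probe.tmp') or die "put failed";
    my $offset = estimate_offset($ftp->mdtm('ftpsync-probe.tmp'), $before);
    $ftp->delete('ftpsync-probe.tmp');
    unlink 'ftpsync-probe.tmp';
    return $offset;
}

# A positive value means the server's clock runs ahead of ours.
print estimate_offset(1_000_060, 1_000_000), " seconds\n";
```

The resulting number is exactly what the script's -o option expects.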

Implemented some new useful features

Peter Orvos's picture

Dear All,

I found this utility useful; however, I needed a synchronizer over FTP that could download files as necessary, so I implemented the -D switch for download mode.

I also had to work with an FTP server running on Windows, which was not supported by this script until now... :)

And the last feature I implemented was the -R n option, where n is the maximum depth of directory recursion: 0 for no recursion, unlimited if omitted.

Use the code if you find it useful:



Jonathan Hipkiss's picture

Just downloaded this, it's excellent, does exactly what it says on the tin!

Other use aside from web sync

Yaten Kou's picture

Hi again!
I forgot to share my experience on where I used Luis' script ^_^

I didn't actually use it for synchronizing my website, instead, I used it to synchronize my backups.

I just thought of sharing it because someone might have the same needs as I do; it might save them some time.

Aside from Luis' scripts, I made 2 shell scripts, and, both can be found at :
Mirror #1
Mirror #2

Basically, my objective is to back up important files on our server farm and send them to our backup server: a number of production servers, plus a backup server where all the files are synced.

On each server I have three files: the two shell scripts I mentioned above and Luis' ftpsync. The driver script basically does the listed tasks:


backupd=./ # path to my backupd script

$backupd -y -s /data/scripts -d default "*" # backup all files under /data/scripts
$backupd -y -s /etc -d default "*" # backup all files under etc
$backupd -y -s /usr/local/apache -d default "conf/" # backup conf directory of /usr/local/apache
$backupd -y -s /var/qmail -d default "alias/ bin/ boot/ control/ scripts/ users/" # backup selected directories of /var/qmail
./ftpsync -v -s [backup_server] -u [username] -p [password] -r [remote_dir] -l [local_dir]
exit

The backup script, usually invoked through cron, is the one that actually does the compressing; download the code to see how it works in detail. Basically, it compresses the list of files and uses md5 to check whether each file already exists at the destination.

After saving all the backups in a certain backup directory, that directory is then synced with the backup server using Luis' ftpsync.

That's just it ^_^ Just thought of giving back a little of something to the community than being the beneficiary all my life. ^_^

Best regards,


Very useful!

Yaten Kou's picture

Great scripts! Very useful! I'm using them now ^_^
Thanks to Luis Muñoz for the great script!
Thanks to Peter Orvos for the additional options!

I've also added a simple switch (-e). The script now defaults to "no erase" to prevent accidental deletion of files: it won't delete files locally or remotely unless the -e option is specified.

Here's the link to the new script, which contains Luis' and Peter's scripts plus my -e option.

Mirror #1
Mirror #2

Best regards!


last version

M. Maahn's picture

Just a repost of the last version with the -e option, since the server is down:



# This script is (c) 2002 Luis E. Munoz, All Rights Reserved, original author
# (c) 2005 Peter Orvos, All Rights Reserved, added options D and R
# (c) 2006 Yaten Kou, All Rights Reserved, added option e and defaults no delete to avoid accidents
# This code can be used under the same terms as Perl itself. It comes
# with absolutely NO WARRANTY. Use at your own risk.

# see ./ftpsync -h for help

use strict;
use warnings;
use Net::FTP;
use File::Find;
use Pod::Usage;
use Getopt::Std;

use vars qw($opt_s $opt_k $opt_u $opt_l $opt_p $opt_r $opt_h $opt_v
$opt_d $opt_P $opt_i $opt_o $opt_D $opt_R $opt_e);


getopts('s:u:p:r:l:i:o:R:dkvhPDe');

pod2usage({-exitval => 2,
           -verbose => 2}) if $opt_h;
# Defaults are set here
$opt_s ||= 'localhost';
$opt_u ||= 'anonymous';
$opt_p ||= 'someuser@';
$opt_r ||= '/';
$opt_l ||= '.';
$opt_o ||= 0;
$opt_R = -1 if !defined $opt_R;

$opt_i = qr/$opt_i/ if $opt_i;

$|++; # Autoflush STDOUT

my %rem = ();
my %loc = ();

print "Using time offset of $opt_o seconds\n" if $opt_v and $opt_o;

# Phase 0: Scan local path and see what we have

chdir $opt_l or die "Cannot change dir to $opt_l: $!\n";

find(
    {
        no_chdir => 1,
        follow   => 0, # No symlinks, please
        wanted   => sub
        {
            return if $File::Find::name eq '.';
            $File::Find::name =~ s!^\./!!;
            if ($opt_i and $File::Find::name =~ m/$opt_i/)
            {
                print "local: IGNORING $File::Find::name\n" if $opt_d;
                return;
            }
            stat($File::Find::name);
            my $type = -f _ ? 'f' : -d _ ? 'd'
                : -l $File::Find::name ? 'l' : '?';
            my @dirs = split /\//, $File::Find::name;
            if ($opt_R >= 0 && $opt_R + ($type eq 'd' ? 0 : 1) < @dirs)
            {
                print "local: IGNORING(depth) $File::Find::name\n" if $opt_d;
                return;
            }
            my $r = $loc{$File::Find::name} =
            {
                mdtm => (stat($File::Find::name))[9],
                size => (stat(_))[7],
                type => $type,
            };
            print "local: adding $File::Find::name (",
                  "$r->{mdtm}, $r->{size}, $r->{type})\n" if $opt_d;
        },
    }, '.' );

# Phase 1: Build a representation of what is in the remote site

my $ftp = Net::FTP->new($opt_s,
                        Debug   => $opt_d,
                        Passive => $opt_P,
                       );

die "Failed to connect to server '$opt_s': $!\n" unless $ftp;
die "Failed to login as $opt_u\n" unless $ftp->login($opt_u, $opt_p);
die "Cannot change directory to $opt_r\n" unless $ftp->cwd($opt_r);
warn "Failed to set binary mode\n" unless $ftp->binary();

print "connected\n" if $opt_v;

sub scan_ftp
{
    my $ftp    = shift;
    my $path   = shift;
    my $rrem   = shift;
    my $mdepth = shift;

    my $rdir = length($path) ? $ftp->dir($path) : $ftp->dir();

    return unless $rdir and @$rdir;

    for my $f (@$rdir)
    {
        next if $f =~ m/^d.+\s\.\.?$/;

        my @line = split(/\s+/, $f, 9);
        my $n = (@line == 4) ? $line[3] : $line[8]; # Compatibility with Windows FTP servers
        next unless defined $n;

        my $name = '';
        $name = $path . '/' if $path;
        $name .= $n;

        if ($opt_i and $name =~ m/$opt_i/)
        {
            print "ftp: IGNORING $name\n" if $opt_d;
            next;
        }

        next if exists $rrem->{$name};

        my $mdtm = ($ftp->mdtm($name) || 0) + $opt_o;
        my $size = $ftp->size($name) || 0;
        my $type = (@line == 4) ? ($line[2] =~ m/<DIR>/i ? 'd' : 'f')
                                : substr($f, 0, 1); # Compatibility with Windows FTP servers

        $type =~ s/-/f/;

        warn "ftp: adding $name ($mdtm, $size, $type)\n" if $opt_d;

        $rrem->{$name} =
        {
            mdtm => $mdtm,
            size => $size,
            type => $type,
        } if $type ne 'd' || $mdepth != 0;

        scan_ftp($ftp, $name, $rrem, $mdepth - 1) if $type eq 'd' && $mdepth != 0;
    }
}

scan_ftp($ftp, '', \%rem, $opt_R);

if ($opt_D) {
    # Phase 2: Download missing files
    for my $l (sort { length($a) <=> length($b) } keys %rem)
    {
        warn "Symbolic link $l not supported\n"
            if $rem{$l}->{type} eq 'l';

        if ($rem{$l}->{type} eq 'd')
        {
            next if exists $loc{$l};
            print "$l dir missing in the local repository\n" if $opt_v;
            $opt_k ? print "mkdir $l\n" : mkdir($l)
                or die "Failed to MKDIR $l\n";
            next;
        }

        next if exists $loc{$l} and $rem{$l}->{mdtm} <= $loc{$l}->{mdtm};
        print "$l file missing or older in the local repository\n"
            if $opt_v;
        $opt_k ? print "GET $l $l\n" : $ftp->get($l, $l)
            or die "Failed to GET $l\n";
    }
} else {
    # Phase 2: Upload missing files
    for my $l (sort { length($a) <=> length($b) } keys %loc)
    {
        warn "Symbolic link $l not supported\n"
            if $loc{$l}->{type} eq 'l';

        if ($loc{$l}->{type} eq 'd')
        {
            next if exists $rem{$l};
            print "$l dir missing in the FTP repository\n" if $opt_v;
            $opt_k ? print "MKDIR $l\n" : $ftp->mkdir($l)
                or die "Failed to MKDIR $l\n";
            next;
        }

        next if exists $rem{$l} and $rem{$l}->{mdtm} >= $loc{$l}->{mdtm};
        print "$l file missing or older in the FTP repository\n"
            if $opt_v;
        $opt_k ? print "PUT $l $l\n" : $ftp->put($l, $l)
            or die "Failed to PUT $l\n";
    }
}
# Phase 3: Delete missing files

exit if !$opt_e;

if ($opt_D) {
    for my $r (sort { length($b) <=> length($a) } keys %loc)
    {
        if ($loc{$r}->{type} eq 'l')
        {
            warn "Symbolic link $r not supported\n";
            next;
        }

        next if exists $rem{$r};

        print "$r file missing from the FTP repository\n" if $opt_v;
        if ($loc{$r}->{type} eq 'd') {
            $opt_k ? print "rmdir $r\n" : rmdir($r)
                or die "Failed to DELETE $r\n";
        } else {
            $opt_k ? print "rm $r\n" : unlink($r)
                or die "Failed to DELETE $r\n";
        }
    }
} else {
    for my $r (sort { length($b) <=> length($a) } keys %rem)
    {
        if ($rem{$r}->{type} eq 'l')
        {
            warn "Symbolic link $r not supported\n";
            next;
        }

        next if exists $loc{$r};

        print "$r file missing locally\n" if $opt_v;
        $opt_k ? print "DELETE $r\n" : $ftp->delete($r)
            or die "Failed to DELETE $r\n";
    }
}



=head1 NAME

ftpsync - Sync a hierarchy of local files with a remote FTP repository


=head1 SYNOPSIS

ftpsync [-h] [-v] [-d] [-k] [-P] [-D] [-e] [-s server] [-u username] [-p password] [-r remote] [-l local] [-i ignore] [-o offset] [-R depth]


=head1 DESCRIPTION

The recognized flags are described below:

=over 2

=item B<-h>

Produce this documentation.

=item B<-v>

Produce verbose messages while running.

=item B<-d>

Put the C<Net::FTP> object in debug mode and also emit some debugging
information about what is being done.

=item B<-k>

Just kidding. Only announce what would be done, but make no changes to
either local or remote files.

=item B<-P>

Set passive mode.

=item B<-D>

Download the directory tree rather than uploading it (uploading is the default).

=item B<-i ignore>

Specifies a regexp. Files matching this regexp will be left alone.

=item B<-s server>

Specify the FTP server to use. Defaults to C<localhost>.

=item B<-u username>

Specify the username. Defaults to 'anonymous'.

=item B<-p password>

Password used for the connection. Defaults to an anonymous pseudo-email address.

=item B<-r remote>

Specifies the remote directory to match against the local directory.

=item B<-l local>

Specifies the local directory to match against the remote directory.

=item B<-R max_recurse_depth>

Maximum depth of recursive directory synchronization. 0 means no recursion; -1 (the default) means unlimited.

=item B<-o offset>

Allows the specification of a time offset between the FTP server and
the local host. This makes it easier to correct time skew or
differences in time zones.

=item B<-e>

Erases remote files that do not exist locally or, when used with the
-D option, erases local files that do not exist remotely.

=back



This is an example script that should be usable as is for simple
website maintenance. It synchronizes a hierarchy of local files /
directories with a subtree of an FTP server.

The synchronization is quite simplistic. It was written to explain how
to use C<Net::FTP> and C<File::Find>.

Always use the C<-k> option before using it in production, to avoid
data loss.

=head1 BUGS

The synchronization is not quite complete. This script does not deal
with symbolic links. Many cases are not handled to keep the code short
and understandable.

=head1 AUTHORS

Luis E. Munoz , original author
Peter Orvos , added options D and R
Yaten Kou , added option e and defaults no delete to avoid accidents

=head1 SEE ALSO




download and upload depending on what's newer

Kalen's picture

This is really cool, and almost exactly what I need, but I'd like something that operates in both modes at the same time: if the file on the remote is newer than the local one, download it; if the local file is newer, upload it. I must be missing something, because this should be a no-brainer?
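A hedged sketch of the per-path decision rule such a two-way mode would need, using the timestamps the script already collects in %rem and %loc (this is not part of ftpsync, and conflict handling is deliberately ignored):

```perl
use strict;
use warnings;

# For one relative path, pick a transfer direction from the two mdtm
# values, with undef meaning "missing on that side".
sub direction {
    my ($rem_mdtm, $loc_mdtm) = @_;
    return 'get'  if !defined $loc_mdtm;    # only the remote side has it
    return 'put'  if !defined $rem_mdtm;    # only the local side has it
    return 'get'  if $rem_mdtm > $loc_mdtm; # remote copy is newer
    return 'put'  if $loc_mdtm > $rem_mdtm; # local copy is newer
    return 'skip';                          # same timestamp
}

print direction(200, 100),   "\n"; # remote newer
print direction(100, 200),   "\n"; # local newer
print direction(undef, 100), "\n"; # missing remotely
```

The hard part a real implementation must add is distinguishing "newer on one side" from "deleted on the other", which timestamps alone cannot tell you.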

mdtm not supported

JD's picture

The mdtm operation is unsupported on my FTP servers, so I rewrote the scan_ftp function using the File::Listing module, and it works fine now.
Thanks for your script; it was very helpful to start working on this subject.

# Needs: use File::Listing; # provides parse_dir
sub scan_ftp2
{
    my $ftp  = shift;
    my $path = shift;
    my $rrem = shift;

    my $rdir = length($path) ? parse_dir($ftp->dir($path)) : parse_dir($ftp->dir());

    return unless $rdir and @$rdir;

    foreach my $entry (@$rdir) {
        my ($n, $type, $size, $mtime, $mode) = @$entry;

        my $name = '';
        $name = $path . '/' if $path;
        $name .= $n;

        if ($opt_i and $name =~ m/$opt_i/)
        {
            print "ftp: IGNORING $name\n" if $opt_v;
            next;
        }

        next if exists $rrem->{$name};

        my $mdtm = ($mtime || 0) + $opt_o;
        $size = $size || 0;

        warn "ftp: adding $name ($mdtm, $size, $type)\n" if $opt_d;

        $rrem->{$name} =
        {
            mdtm => $mdtm,
            size => $size,
            type => $type,
        };

        scan_ftp2($ftp, $name, $rrem) if $type eq 'd';
    } ## foreach
}

method contains a call to parse_dir which is not included

Cman's picture

Would it be possible to also post all the code that this method depends on?
For example, "scan_ftp2" calls "parse_dir", which is not included in the code snippet.

Regards, Corry.

using mdtm to delete files older than n number of days

Anonymous's picture

I am working on a Perl script that would connect to an FTP site and delete any files older than the number of days I pass it. For example, if I pass it 30, it should delete any files older than 30 days. From the Net::FTP docs I found the mdtm method, which returns the last modification time of a file. How can I use this to delete a file? I'm at a beginner's level in Perl.

Thanks for this wonderful site and line by line explanation. It helps tremendously.
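One possible starting point for the question above, as a hedged sketch: the cutoff arithmetic is the reusable part, while the cleanup sub is hypothetical (host, credentials, and the way the file list is obtained are placeholders, and it is shown but not called here).

```perl
use strict;
use warnings;
use Net::FTP;

# True when an mdtm (epoch seconds) is more than $days days old, as of $now.
sub older_than {
    my ($mdtm, $days, $now) = @_;
    return $mdtm < $now - $days * 24 * 60 * 60;
}

# Hypothetical cleanup loop over a list of remote filenames:
sub cleanup {
    my ($ftp, $days, @files) = @_;
    for my $f (@files) {
        my $mdtm = $ftp->mdtm($f) or next;  # skip files the server can't date
        $ftp->delete($f) if older_than($mdtm, $days, time);
    }
}

my $now = 100 * 86_400;                     # a made-up "today"
print older_than(60 * 86_400, 30, $now) ? "delete\n" : "keep\n";
```

As the comment above notes, some servers don't support mdtm at all; in that case the timestamps from File::Listing's parse_dir can be used instead.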

Null $path to ftp->dir()

Anonymous's picture

The first time through scan_ftp() with an empty $path variable, the ftp->dir() in my Net::FTP package would send 'LIST ' (note the trailing space) and get an unrecognized-command error back. My horrible hack to fix it was to change line 92 to:

my $rdir = (length($path)) ? $ftp->dir($path) : $ftp->dir();

This causes 'LIST' to be sent if the length of $path is zero, and everything started working properly. (Perl 5.6.1, netlib/ftp 2.67, netlib 1.13 on Debian.) I was using the -r ftpsync option to set a remote path, FWIW.

Re: Null $path to ftp->dir()

Anonymous's picture

Thanks a lot for the bug report/fix. I was already working on it, as Tim Rowe pointed me to it. This does not break on my FTP server, so I didn't catch it in time.

There's an updated version of this article online; you can download a fixed version of the script there, but please keep coming back :)

Best regards and thanks a lot to all for the feedback.


Re: Synchronizing FTP Files with Perl

Anonymous's picture

great article Luis...

the use of underscore was new to me ;)

and thanks to Anonymous poster for putting me on to rsync

Re: Synchronizing FTP Files with Perl

wdtj's picture

There is also a common Perl script called mirror that does this, plus more.

It is a good example of perl though, and we can always use more perl examples.

Good article.

I found mirror a few years

Rick's picture

I found mirror a few years ago and have been using it since then. Unfortunately, I have been struggling with it lately. The first problem I ran into was that the ls -lRat was stopping after 5000 files. Now, I'm syncing many more files than that, and I've been having problems with Mirror. It does things like flag directories on the remote destination server (I'm uploading) for deletion, when the sources exist and haven't changed ever.

I've been starting down the path of making my own mirror script, and this code looks very promising. I was getting ready to do the work in shell, using ncftp, and I think this will work much better.

Thanks to all; I hope that I'm able to contribute something to the code as well.


Re: Synchronizing FTP Files with Perl

Anonymous's picture

That's what I thought as well, until I actually tried Mirror. It is a complex program to set up, and the way to sync files is not very clear. It turns out you have to set the get_files option to false to get mirror to put files from the local dirs to the server. Sure, it's logical, just not intuitive. Don't get me wrong; I think it's a great tool, just not very well documented. It seems it was written with one-way mirroring in mind more than file synchronizing.

Re: Synchronizing FTP Files with Perl

Anonymous's picture

Wow! Thanks for this tip. I just deleted "mirror" from my system because I thought it could not PUT, only GET. (Typical developer myopia: spend hundreds of hours writing something comprehensive like mirror and then skip writing a few decent introductory paragraphs outlining context and broad capabilities. Geez...)

Re: Synchronizing FTP Files with Perl

Anonymous's picture

what's wrong with rsync over ssh again?

Re: Synchronizing FTP Files with Perl

Anonymous's picture

To my knowledge, rsync doesn't support nested directories; only one level is supported:

this is just wrong. rsync is

Anonymous's picture

This is just wrong: rsync is recursive.
Furthermore, the beautiful thing about rsync is that it only transfers the parts of files that have been modified.
Say you have a 600MB file and you change only 2 bytes in the middle of it; rsync will transfer only one block (of 4KB, I guess).

Re: Synchronizing FTP Files with Perl

Anonymous's picture

If you don't have shell access to the remote machine you need something like this when you are PUTting files to it, since rsync will not be available.

Re: Synchronizing FTP Files with Perl

Anonymous's picture

Actually nothing if your hosting provider supports them.


Börkur's picture

Is it hard to add recursive option to this?

If not please post the code ;)

already done, read first

Anonymous's picture

already done, read first comments

Problem with MKDIR after this line print "$l dir missing in the

Rajesh Muthu's picture

I'm new to Perl. I want to back up the files from my production server, which runs on Linux, to a Windows-based FTP server. I'm using FileZilla FTP Server.
Question 1:
Is there a file/folder size restriction?
Question 2:
I have lots of files, but every day only a few new files need to be backed up to the Windows-based FTP server. Will there be a performance issue if the number of files is large?

Question 3:
Every time I run the script, it breaks with this message (I use the -v option):
temp/in dir missing in the FTP repository
Failed to MKDIR temp/in

But that folder exists on both the local and the FTP site.
What could be the problem?

Any help will be appreciated