[vox-tech] shell script challenge

Samuel Merritt vox-tech@lists.lugod.org
Wed, 7 Aug 2002 13:03:50 -0700 (PDT)


> I'm using cygwin, and my boss asked me to remove all duplicate files
> from the server.  The server is mapped to the x: drive of the Windows
> machine, which means cygwin sees it as /cygwin/x.  I forget exactly
> what command I ran to get checksums.txt, but it is in the format
> 
> <checksum> *x:<filename>
> 
> The challenge is to find the duplicate checksums and print the file
> names associated with them.  This is tricky because the directory names
> contain spaces, which gawk, sed, etc. treat as field separators.  Even
> if I change the IFS to * and then use gawk to print *x:<fname>
> <checksum>, sort wouldn't know how to deal with it, which would make
> uniq useless (I think).  If I do it the other way, <checksum>
> *x:<filename>, sort will work fine but uniq will fail because the
> filename is there.  If I exclude the filename with gawk ' { print $1 }
> ', then sort and uniq will work fine, but I won't have a filename.  So
> all the combinations I can think of fail.  Does anyone know how I can
> find only the duplicate checksums and the file names associated with
> them?
> 
> **I realize I could do this with a loop, but the problem is that there
> are 4,575 duplicate checksums, counted using:
> awk ' { print $1 } ' checksums.txt | sort | uniq -d | wc -l
> and 46,340 files on the server, so it seems like it would take an
> awfully long time.  Any suggestions?

This may not be exactly what you're looking for since it requires Perl, but
this script should do the job. 

WARNING: I haven't tested this script. perl -c claims that it's
syntactically correct, but if this script wipes your filesystem and drinks
all your beer, I'm not responsible. 

-----
#!/usr/bin/perl
use strict;
use warnings;

# Feed it the checksum file on stdin.
# Each input line looks like: <checksum> *x:<filename>
my %seen;
while (<>)
{
    chomp;

    # Skip lines that don't match the expected format, rather than
    # silently reusing $1 and $2 from the previous line.
    next unless /^(\S+)\s+\*x:(.*)$/;

    my ($cksum, $fname) = ($1, $2);
    if ($seen{$cksum})
    {
        # We've already seen this checksum, so this file is a duplicate.
        print "duplicate: $fname\n";
    }
    else
    {
        $seen{$cksum} = 1;
    }
}
-----
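
Assuming you save it as, say, finddups.pl (the name is entirely up to
you), you'd run it with something like:

    perl finddups.pl < checksums.txt

It prints one "duplicate: <filename>" line for each file after the first
that shares a checksum, so the first occurrence of each checksum stays
silent and can be treated as the copy to keep.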
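
Incidentally, if you'd rather stay in the shell: assuming those are MD5
checksums (32 hex characters) and your uniq is a GNU uniq new enough to
have -w and -D, you can sidestep the whitespace problem entirely by
having uniq compare only the checksum column:

    sort checksums.txt | uniq -w 32 -D

-w 32 makes uniq look at just the first 32 characters of each line, and
-D prints every line of every group of duplicates, filenames and all.
Untested here as well, so the beer-drinking disclaimer above applies.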