• This software is owned by the University of California, Irvine, and is not distributed under any version of the GPL. The GPL is a fine series of licenses, but the owner of this software needs it distributed under its own terms.
    try-copying-up-to-n-times is a script I wrote to facilitate recovering data from a filesystem that behaves inconsistently - sometimes a file looks fine, while other times the same file won't be readable, or the filesystem will even crash when you try to read a particular file or series of files.

    The general flow of usage is:

    1. Create and seed a database for the source directory, with -i, carefully selecting the number of reattempts you want before a file is marked "failed", aka "unrecoverable". Wait while an iteration is attempted.
    2. Run more iterations with -r, until you've had enough.
    3. In both of the above cases, you may want to specify -c or -C to set conditions under which the script will stop trying for the time being (until you rerun it). A hypothetical session illustrating these steps appears below.
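
    For example, a session might look something like the following. The database path, hierarchy paths and counts are made up for illustration, and combining -d with -i and -r in a single invocation is just my reading of the usage text below - check the script's actual usage output before relying on it:

        esmf04m-root> ./try-copying-up-to-n-times -i /tmp/recovery-db 3 -d /source/hier /dest/hier
        esmf04m-root> ./try-copying-up-to-n-times -r /tmp/recovery-db -d /source/hier /dest/hier -C 5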

    The command-line arguments are as follows:

    esmf04m-root> ./try-copying-up-to-n-times 
    ./try-copying-up-to-n-times: -i, -r, -m, -g, -s and -e are mutually exclusive, and exactly one must be specified
    Usage: ./try-copying-up-to-n-times [-i databasefile initialrepetitions] [-r databasefile] [-e databasefile]
    "-i databasefile repetitions" says initialize the database.
        Repetitions is the max number of times we will try to copy a given file
    "-r databasefile" says restart: continue counting down repetitions
    "-e databasefile" says delete a preexisting database
    "-d sourcehier desthier" says to copy data from sourcehier to desthier
    "-m databasefile repetitions filename" says to set filename's repetition count to a specific
        value, manually
    "-g database filename" says get the value for filename's repetition count
    "-s database" says to summarize counter status for all files in the database
    "-v n" says to operate verbosely.  Higher n is more verbose.  1 is only for definite error conditions,
        2 is surprise (non-)preexistence conditions, and 3 is for the whole ball of wax
    "-c shellcommand n" says to run shellcommand after attempting to copy n files.  If the command
        returns POSIX shell false, ./try-copying-up-to-n-times will exit.  Otherwise we continue
    "-C n" says that if %s sees n consecutive file errors, terminate prematurely
    
    -i, -r, -m, -g, -s and -e are mutually exclusive
    
    Only regular files, directories and symlinks are handled at this time.  Hard links are not
    preserved; their relationship will be broken silently.  The sketch below shows one way to spot files that would be affected.
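
    As an aside, here is a hedged sketch (not part of the script) of how one might detect files whose hard-link relationships would be silently severed, using os.lstat:

        import os
        import stat

        def has_extra_hard_links(path):
            # A regular file whose link count exceeds 1 shares its inode
            # with another name; copying it file-by-file severs that link.
            st = os.lstat(path)
            return stat.S_ISREG(st.st_mode) and st.st_nlink > 1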
    
    This program uses the Python anydbm interface, so it may choose, seemingly at random, a backend
    database like Berkeley DB, gdbm, dbm, dumbdbm or others.  However, once a database of a given
    name is created, subsequent usage of that same database name should come
    up with the same type.  The snippet below illustrates the interface.
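
    For illustration, here is a minimal sketch of the anydbm pattern involved; the database path, key and value are hypothetical, and the script's actual schema may differ:

        import anydbm

        # anydbm picks a backend (Berkeley DB, gdbm, dbm, dumbdbm...) when
        # the database is first created; reopening the same file should
        # come up with the same backend.
        db = anydbm.open('/tmp/recovery-db', 'c')

        # Hypothetically, a per-file retry counter stored as a string:
        db['run110/xrun110-68590000.field'] = '3'
        print db['run110/xrun110-68590000.field']
        db.close()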
    
  • Known bugs:
    1. Sometimes the script can error out because of a missing directory in the target hierarchy. For now, you can just mkdir it and rerun the script.
    2. If the filesystem containing your database gives a write error due to a full filesystem (and perhaps for other reasons as well), then your database may become corrupted - at least if you're using Berkeley DB 4.2, AIX 5.1 ML 4 and Python 2.4.x; I'm not sure about other Python database interfaces. However, the worst corruption I've seen so far could be temporarily corrected with:
      • ./try-copying-up-to-n-times -s /tmp/Francois-subset-db
      • Make a note of any key on which the above step raises a traceback. For the sake of discussion, assume it is "run110/xrun110-68590000.field"
      • Then run "./try-copying-up-to-n-times -m /tmp/Francois-subset-db 2 run110/xrun110-68590000.field" to correct the problem with that key, where "2" is the number of times you wish to retry the file in question.
      However, this led to a series of many such manual tweaks, so eventually I wound up writing a small Python script that would obtain a list of all the keys in a database, check that they have data associated with them, and write them out to another database; a sketch of that approach follows this list. I converted dbhash to gdbm, but dbhash to dbhash might have worked as well.
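
    A minimal sketch of such a rescue pass, assuming Python 2.4-era modules (dbhash and gdbm) and a hypothetical output filename - this is a reconstruction of the approach, not the original script:

        import dbhash
        import gdbm

        # Open the possibly-corrupted source database read-only, and
        # create a fresh destination database in a different format.
        src = dbhash.open('/tmp/Francois-subset-db', 'r')
        dst = gdbm.open('/tmp/Francois-subset-db.rescued', 'n')

        for key in src.keys():
            try:
                value = src[key]   # may raise on a corrupted entry
            except Exception:
                print 'skipping unreadable key:', key
                continue
            dst[key] = value

        dst.close()
        src.close()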
  • Future directions
    1. A Python module for handling ranges of numbers (e.g., file blocks). It might be nice to make the program understand how to retrieve parts of files, rather than treating files in such an all-or-nothing manner; a rough sketch of the idea follows.
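
    Purely as a hypothetical illustration of that idea - nothing like this exists in the script yet - such a module might track which block ranges of a file have been recovered so far:

        def merge_ranges(ranges):
            # Collapse (start, end) block ranges into a sorted,
            # non-overlapping list, merging ranges that touch.
            merged = []
            for start, end in sorted(ranges):
                if merged and start <= merged[-1][1] + 1:
                    merged[-1] = (merged[-1][0], max(merged[-1][1], end))
                else:
                    merged.append((start, end))
            return merged

        # Blocks recovered across several attempts:
        print merge_ranges([(0, 9), (20, 29), (5, 14)])
        # [(0, 14), (20, 29)]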



