This is a backup application, written in Python.

It works a bit like an rsync-based backup script.

Features include:
1) Ability to deduplicate data using variable-sized, content-based blocks (see the sketch after this list)
   a) Just renaming a 1.5 gigabyte file doesn't cause a second copy of that same file to be saved, unlike
      rsync-based schemes
   b) Storing 3 copies of your family JPEGs on 3 different computers results in a single copy of the
      pictures being saved in the repository
   c) Changing one byte in the middle of a 6 gigabyte file doesn't result in a distinct copy of the whole
      file in the repository
2) Ability to compress deduplicated chunks with xz (planned, not yet implemented)
3) Safe, concurrent operation over local or remote filesystems, including but not limited to: NFS, CIFS
   and sshfs (could use a little improvement with temp files and renames though)
4) Ability to expire old data for the repo as a whole (planned, not yet implemented - not down to a host
   or filesystem granularity, just repo-granularity)
5) No big filelist inhale at the outset (unlike rsync), unless you request a progress report
   during a backup
6) Hybrid fullsaves/incrementals, much like what one gets with an rsync backup script - so an interrupted
   backup can in a significant sense be subsequently resumed
7) Ability to not mess up your buffer cache during a backup (planned, not yet implemented)
8) A far smaller number of directory entries than a year's worth of daily snapshots with an rsync-based
   backup script would give
9) Backing up and copying a repository from one system to another is practical, unlike rsync backup scripts
10) Input files are selected in a manner similar to cpio, using GNU find with -print0
11) Output is created in GNU tar format; a restore is a matter of piping backshift's tar output into a tar
    process for extraction.  This means there's no restore application to worry about race conditions in,
    other than tar itself
12) No temporary files are necessary for backups or restores; even a system with nearly full disks can be
   backed up
13) The backup process is cautious about symlink races
14) Runs on a wide assortment of Python interpreters, including CPython 2.x (with or without Cython, with or
    without Psyco), CPython 3.x (with or without Cython), and PyPy.  Of these, PyPy is by far the fastest.
    Portions have been tested on Jython, but Jython is not supported due to its use of Unicode strings in
    a mostly-2.x interpreter.
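
About feature 1: the idea behind variable-sized, content-based blocking can be
sketched in a few lines of Python.  This is a toy illustration, not backshift's
actual chunking code - the window size, mask and simple additive checksum here
are arbitrary choices, and a real implementation would use a stronger rolling
hash:

    def chunk(data, window=48, mask=0xFFF):
        # Cut a chunk wherever a rolling checksum over the last `window`
        # bytes has all-zero low bits.  Boundaries depend on content, not
        # on byte offsets, so changing one byte only reshapes the chunks
        # near the edit; unchanged stretches of data still yield the same
        # chunks, which a repository can deduplicate by their digests.
        buf = bytearray(data)
        chunks, rolling, start = [], 0, 0
        for i, byte in enumerate(buf):
            rolling += byte
            if i >= window:
                rolling -= buf[i - window]
            if i - start + 1 >= window and (rolling & mask) == 0:
                chunks.append(bytes(buf[start:i + 1]))
                start = i + 1
        if start < len(buf):
            chunks.append(bytes(buf[start:]))
        return chunks

This is why renaming a file, or storing the same pictures on 3 machines, adds
essentially nothing to the repository: the same chunks are produced either way.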

Misfeatures:
1) There's no way for users to restore their own files without being granted excessive trust; the
   administrator needs to get involved.
2) During a backup, users can see each other's files; data is not saved in an encrypted format
3) It could conceivably be nice to have host- or filesystem-granularity on expires, but this would require
   quite a bit more metadata to be saved
4) Disk-to-disk only - Disk-to-tape is not supported

----------------------------------------------------------------------

About using backshift:

	For a backup:
		Example 1: To do a backup of a system's root filesystem, to a filesystem on that same system, this should work:
			find / -xdev -print0 | backshift --save-directory /where/ever/save-directory --backup
			Of course, you don't want /where/ever/save-directory to be in the root filesystem!

		Example 2: To pull from an sshfs, to a local filesystem (much faster than the reverse):
			cd /ssh/fs/base
			find . -xdev \( \( \
				-path ./sys -o \
				-path ./dev -o \
				-path ./var/run -o \
				-path ./var/lock -o \
				-name .gvfs \) -prune -o -print0 \) | \
				backshift --save-directory /where/ever/save-directory --backup --init-savedir --subset fullsave
			One uses the -path tests and -prune because sshfs doesn't distinguish between the different filesystems of
				the machine you're pulling from, so they all look like one filesystem to find's -xdev.  The -name .gvfs
				is pruned because it causes problems, so we avoid it.

		If a backup takes forever to say it's inhaled 10,000 filenames, there's a good chance you've used -print instead of
			-print0.

		If you have a huge filesystem to back up, and inhaling the whole list of files would overwhelm your VM system,
			use --no-stats.  This turns off the progress report during the backup, but should take much less VM.
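
		For example, reusing Example 1's pipeline with the progress report disabled:
			find / -xdev -print0 | backshift --save-directory /where/ever/save-directory --backup --no-stats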

	For a restore:
		First, locate what backups are available to restore from, using --list-backups
		Second, locate the files within that backup you require, using --list-backup --backup-id
		Third, pipe the output of --produce-tar --starting-directory into "tar xvfp -" (a full example appears below)

		Strictly speaking, you can use --produce-tar with a pipe to "tar tvf -" in the second step too, but it's much
			slower.
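
		Putting the three steps together, a restore session might look like this
			(the backup id and arguments here are made up - check backshift's
			usage message for the exact argument handling):
			backshift --save-directory /where/ever/save-directory --list-backups
			backshift --save-directory /where/ever/save-directory --list-backup --backup-id 1234567890
			cd /where/to/restore
			backshift --save-directory /where/ever/save-directory --produce-tar --backup-id 1234567890 \
				--starting-directory . | tar xvfp -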

----------------------------------------------------------------------

BTW, about the statistics listed during a backup:

1) It's assumed that all files take the same amount of time to process, on
   average.  Doing so isn't as accurate as something like considering the
   number of bytes each file uses, but this is simpler in more ways than one.

2) So if you have one directory with 500 movies about programming in Python, and
   another directory with 500 text files containing cooking recipes, then the
   statistics generated will be pretty far off.  If the movies are backed up first,
   then initially it's going to expect each recipe to take the same amount of time
   a movie did.  By the end of the second 500 files, it should have a pretty
   clear idea of the average duration per file.

3) One way of dealing with this is to use the "randomize" script that appears in
   this directory.  You can use it as a filter between your "find -print0" and
   your "backshift".  Make sure to give it the -0 option.  In this way, you'll
   roughly alternate between movie files and recipes.  Randomizing the order of
   the files can be expected to make a backup take a little longer (because various
   directory caches will miss a lot), but the statistics should be more accurate.
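
   For example, splicing randomize into the pipeline from Example 1 above
   (assuming the script is on your PATH or invoked by its path):

      find / -xdev -print0 | randomize -0 | backshift --save-directory /where/ever/save-directory --backup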

----------------------------------------------------------------------

About choice of Python runtime:

1) Backshift runs, unmodified, on:
   A) CPython 2.[567]
	B) CPython 3.[012]
	C) PyPy 1.4.1 (much like CPython 2.5.x)
	D) A variety of 2011 PyPy trunk builds (getting close to CPython 2.7.x)
	E) Jython Release_2_5maint -r 7288 (it is known to not work on Jython 2.5.2,
	   but a bugfix was checked into 2.5 maint shortly thereafter that enables
		backshift on Jython).

2) Backshift has some issues on Jython 2_5maint -r 7288:
	A) Jython has no os.fstat, so the fstat verification (sketched below) is
	   turned off when running on Jython, hence symlink races are possible.
	   In other words, running backshift on Jython as root is not very safe.
	B) Jython has no os.major or os.minor, so backing up device files is
	   impractical (short of spawning an "ls" subprocess or similar)

This enables one to select the fastest runtime that one trusts for running
backshift.
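
For reference, the fstat verification mentioned in 2A is the usual defense
against symlink races.  A minimal sketch (not backshift's actual code) looks
like this:

    import os
    import stat

    def open_carefully(path):
        # lstat the path, open it, then fstat the descriptor: if the two
        # (st_dev, st_ino) pairs differ, something (such as a symlink)
        # was swapped in between the steps, so we refuse the file rather
        # than silently following it.
        before = os.lstat(path)
        if stat.S_ISLNK(before.st_mode):
            raise OSError('%s is a symlink' % path)
        fd = os.open(path, os.O_RDONLY)
        after = os.fstat(fd)
        if (before.st_dev, before.st_ino) != (after.st_dev, after.st_ino):
            os.close(fd)
            raise OSError('%s changed identity during open - symlink race?' % path)
        return fd

Without os.fstat, the second half of this check can't be performed, which is
why running backshift as root under Jython is risky.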

----------------------------------------------------------------------