repo_mod module¶
Provides a backup repository class.
- class repo_mod.File_like_empty[source]¶
Bases:
object
A file-like object that pretends to be an empty file.
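As a sketch of what such an object might look like (the class and method bodies here are illustrative, not repo_mod's internals), an always-empty file-like object only needs to return no bytes from read():

```python
class FileLikeEmpty:
    """Illustrative stand-in for repo_mod.File_like_empty: a file-like
    object that behaves like an empty file."""

    def read(self, size=-1):
        # An empty file has no bytes to give, whatever size is requested.
        return b''

    def close(self):
        # Nothing to release, but callers expect close() to exist.
        pass
```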
- class repo_mod.File_like_pieces(hashes)[source]¶
Bases:
object
A file-like object that slurps in chunks from our content directory.
- exception repo_mod.Games_detected[source]¶
Bases:
Exception
An exception to raise when some pranks have been detected (or some kinds of changes during the backup).
- exception repo_mod.Missing_chunk[source]¶
Bases:
Exception
An exception to raise when one (or more) of the chunks in an st_mtime+st_size match is missing.
- exception repo_mod.Permission_denied[source]¶
Bases:
Exception
An exception to raise when a file can’t be opened due to EPERM.
- class repo_mod.Repo(save_directory, canonical_hostname, subset)[source]¶
Bases:
object
A repo that holds saves and file metadata (including hashes) and chunks of files.
- list_backup(backup_id, starting_directory, recursive=True)[source]¶
List the content of a single backup.
- list_backup_simply(backup_id, starting_directory, recursive=True)[source]¶
List the content of a single backup, in a simpler format than list_backup.
- produce_tar(backup_id, starting_directory, recursive=True, tar_format='default')[source]¶
Produce a tar archive from a backup id and starting directory, for restore or offsite backup.
- save_chunks(lstat_result, filename, dict_)[source]¶
Save the chunks of a file, either by borrowing from a previous backup or by computing the hashes the hard way.
Returns True if the backup of this file took the quick and easy (borrowed) path, and False if it took the long and hard (rehash) path.
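The fast-path/slow-path split described above might be dispatched along these lines (a hedged sketch; MissingChunk and the borrow/generate callables are stand-ins, not repo_mod's actual names):

```python
class MissingChunk(Exception):
    """Stand-in for repo_mod.Missing_chunk."""

def save_chunks_sketch(borrow, generate):
    """Try the quick path first (borrow chunk hashes from a prior
    backup); fall back to the slow path (rehash the whole file) if any
    borrowed chunk turns out to be missing.

    Returns True for the quick and easy way, False for the long and
    hard way, mirroring the documented return values.
    """
    try:
        borrow()
        return True
    except MissingChunk:
        generate()
        return False
```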
- to_established_backup_id(backup_id)[source]¶
Read in the basic data describing a backup, from a preexisting backup id.
- traverse(backup_id, starting_directory, visit, visit_arg=None, recursive=True)[source]¶
Traverse the content of a single backup.
This probably should use os.walk rather than os.path.walk: both are present in Python 2.x, but only os.walk remains in 3.x. The os.path documentation says of os.path.walk: “Note: This function is deprecated and has been removed in 3.0 in favor of os.walk().” - from http://docs.python.org/library/os.path.html .
There’s os.walk doc here: http://docs.python.org/library/os.html#os.walk
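For reference, a traversal built on os.walk (the portable choice the note above argues for) might look like this sketch; the visit-callback convention mirrors traverse()'s signature, but the body is illustrative:

```python
import os

def walk_and_visit(starting_directory, visit, visit_arg=None, recursive=True):
    """Call visit(visit_arg, path) for every file under starting_directory,
    using os.walk, which exists in both Python 2.x and 3.x."""
    for dirpath, dirnames, filenames in os.walk(starting_directory):
        for filename in sorted(filenames):
            visit(visit_arg, os.path.join(dirpath, filename))
        if not recursive:
            # Only the top-level directory was wanted; stop descending.
            break
```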
- class repo_mod.Speeds[source]¶
Bases:
object
Maintain counts of the various kinds of speeds (per file).
- class repo_mod.Tar_state(tar, hardlink_data)[source]¶
Bases:
object
Save some state for our --produce-tar tree traversal.
- repo_mod.borrow_and_save_chunks(dict_, prior_backshift_file)[source]¶
Save the chunks of a file by copying them from a previous backup and hashing them - fast.
If one or more of the chunks doesn’t exist, raise Missing_chunk.
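The borrow-or-raise contract might look like this (a hedged sketch; the chunk store here is just a dict keyed by digest, and MissingChunk stands in for repo_mod.Missing_chunk):

```python
class MissingChunk(Exception):
    """Stand-in for repo_mod.Missing_chunk."""

def borrow_chunks_sketch(hashes, chunk_store):
    """Reuse chunks recorded by a prior backup: every digest must still
    be present in the chunk store, or the fast path is abandoned by
    raising MissingChunk (so the caller can rehash from scratch)."""
    borrowed = []
    for digest in hashes:
        if digest not in chunk_store:
            raise MissingChunk(digest)
        borrowed.append(chunk_store[digest])
    return borrowed
```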
- repo_mod.display_tf(backshift_file, hardlink_data)[source]¶
Display a “tar tf”-like listing for a specific file.
- repo_mod.display_tvf(backshift_file, hardlink_data)[source]¶
Display a “tar tvf”-like listing for a specific file.
- repo_mod.format_time(num_seconds)[source]¶
Convert a number of seconds since the epoch to a human readable string.
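A minimal version of such a formatter, assuming a simple ISO-ish layout rather than backshift's actual output format:

```python
import time

def format_time_sketch(num_seconds):
    """Render a number of seconds since the epoch as a human-readable
    local timestamp. The exact layout is an assumption for illustration."""
    return time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(num_seconds))
```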
- repo_mod.generate_and_save_chunks(lstat_result, filename, dict_)[source]¶
Save the chunks of a file by generating variable-length blocks and hashing them - slow.
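Variable-length (content-defined) chunking cuts a file wherever a hash of a small sliding window matches a pattern, so an insertion early in a file does not shift every later chunk boundary. A toy sketch follows; the window size, mask, and use of Python's built-in hash are all illustrative stand-ins (a real implementation would use a proper rolling hash):

```python
import hashlib

def split_into_chunks_sketch(data, window=16, mask=0x3FF):
    """Cut data into variable-length chunks at content-defined boundaries."""
    chunks = []
    start = 0
    for i in range(window, len(data)):
        # Stand-in for a rolling hash over the last `window` bytes.
        if hash(data[i - window:i]) & mask == 0:
            chunks.append(data[start:i])
            start = i
    chunks.append(data[start:])
    return chunks

def chunk_digests(chunks):
    """Hash each chunk; the digests key the chunks in the content store."""
    return [hashlib.sha256(chunk).hexdigest() for chunk in chunks]
```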
- repo_mod.get_initial_directory(backup_id, directory, shorten)[source]¶
Construct and return a repo-internal path to backup_id and directory.
- repo_mod.give_tar(backshift_file, tar_state)[source]¶
Produce a tar-format file entry for a specific file.
- repo_mod.no_games(lstat_result, file_handle)[source]¶
Make sure no games are being played with the file to exploit a race window.
We might also be tripped by legitimate changes if they happen at just the “right” moment…
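One common way to implement such a check (a hedged guess at the approach, not backshift's exact code) is to compare the device and inode from the pre-open lstat() with an fstat() of the opened handle; a mismatch means the path was swapped out from under us in the race window:

```python
import os

class GamesDetected(Exception):
    """Stand-in for repo_mod.Games_detected."""

def no_games_sketch(lstat_result, file_handle):
    """After opening a file, confirm the open handle refers to the same
    inode the earlier lstat() saw; a mismatch suggests the path was
    replaced (e.g. by a symlink) between lstat and open."""
    fstat_result = os.fstat(file_handle.fileno())
    before = (lstat_result.st_dev, lstat_result.st_ino)
    after = (fstat_result.st_dev, fstat_result.st_ino)
    if before != after:
        raise GamesDetected('file changed identity between lstat and open')
```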
- repo_mod.pick_file_count_estimate(prior_saveset_summaries, least_value, actual_number)[source]¶
Pick a reasonable estimate for the file count of this save in one of 3 ways.
If we’re using a progress mode that gets a file count (passed here in actual_number), use that.
If we have one or more saves we’re backing up relative to, get the largest file count from them and double it.
Otherwise, use 10,000,000. This should cover most filesystems today (2012-04-28), and uses only a modest amount of memory.
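The three-way choice reads roughly like this sketch (treating least_value as a floor on the estimate is an assumption, as is every name here):

```python
def pick_file_count_estimate_sketch(prior_counts, least_value, actual_number=None):
    """Pick a file-count estimate, mirroring the documented logic:
    1) a real count from the progress mode, if we have one;
    2) else twice the largest prior save's file count;
    3) else a generous default of 10,000,000.
    least_value is assumed (for illustration) to be a lower bound."""
    if actual_number is not None:
        estimate = actual_number
    elif prior_counts:
        estimate = 2 * max(prior_counts)
    else:
        estimate = 10 * 1000 * 1000
    return max(estimate, least_value)
```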