Backup Strategies

Whilst there are a few threads on backup strategies, they are quite old. There are also a number of individual posts dotted around covering specific strategies.

We got a little off-topic on this thread, and it was kindly and rightly suggested we go play somewhere else. So here’s a thread for up-to-date backup strategies.


My backup strategy is as follows - and I’m happy for anyone to comment on any shortcomings they see:

  • I back up using Time Machine with standard settings (that is, local snapshots, hourly backups, daily backups)
  • I back up using CCC every 8 hours with SafetyNet and snapshots
  • I make a monthly backup of DT databases to WORM media, which is then stored off-site
  • I back up some databases every 8 hours using Arq

The CCC and Time Machine backups go to a set of disks which I cycle approximately every 3 weeks; I use three sets. Those not in use are stored off-site.


As you’ve mentioned in this thread, what’s currently missing, in my opinion, is a user-friendly method to check file integrity. The file count or file size may look right, but if some or many files are somehow corrupt, you still end up with a backup that’s of limited value.

How does your strategy take file integrity into account, if I may ask? I.e. how do you know your backup will actually work as intended should you need to rely on it in the future?

I presume CCC and TM somehow have a built-in integrity check, but I’m not sure, to be honest.

I’ve implemented the script which I posted in the thread you linked to; my hope is that I will notice damaged files more quickly that way. I have no foolproof way of checking the integrity of the backups. I have restored backups to make sure I can open them and that they contain files which I can open; obviously, however, if I did not notice file corruption, the snapshots containing the intact files could be damaged by the time I need the backup. In my case that would happen if I did not notice damage for at least 6 weeks and all my WORM media failed.

My assumption is that, under most realistic scenarios, in the worst case I would not lose more than 4 weeks’ worth of files. That would be bad enough, but probably manageable.
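This is not the script referenced above, but a minimal sketch of the same idea, assuming a plain folder of files: keep a SHA-256 manifest from the previous run and report any file whose content has changed since then. Paths and file names are placeholders.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large files aren't loaded into memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def check_integrity(root: Path, manifest_path: Path) -> list[str]:
    """Compare current hashes against the stored manifest; return changed files."""
    old = json.loads(manifest_path.read_text()) if manifest_path.exists() else {}
    new = {str(p.relative_to(root)): sha256_of(p)
           for p in sorted(root.rglob("*")) if p.is_file()}
    changed = [name for name, digest in new.items()
               if name in old and old[name] != digest]
    manifest_path.write_text(json.dumps(new, indent=2))
    return changed
```

Run periodically, a file reported as changed that you did not knowingly edit is a corruption candidate. Note the obvious caveat: legitimate edits show up the same way, so this flags candidates rather than proving damage.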

I’ve just looked up what CCC offers, and I think it might be the ‘Backup Health Check’.

https://bombich.com/kb/ccc5/advanced-settings

As I understand it, CCC calculates an MD5 checksum for every file, but I’m not sure whether it will calculate the MD5 of the complete DT database or of each separate file included in the package.
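For what it’s worth, a DT database package is just a folder bundle, so you can look at the per-file question yourself from the shell. A sketch, with a stand-in package created in place of a real database (macOS ships the checksum tool as `md5`; the GNU equivalent used here is `md5sum`):

```shell
# Stand-in package: a DT database is a folder bundle, so fake a tiny one here.
# (Replace "MyDatabase.dtBase2" with the path to a real database.)
mkdir -p MyDatabase.dtBase2/Files.noindex
printf 'fake pdf' > MyDatabase.dtBase2/Files.noindex/paper.pdf
printf 'fake metadata' > MyDatabase.dtBase2/info.plist

# One MD5 per file inside the package:
find MyDatabase.dtBase2 -type f -exec md5sum {} + | sort -k2

# One digest for the whole package contents, handy for before/after comparison:
find MyDatabase.dtBase2 -type f -exec md5sum {} + | sort -k2 | md5sum
```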

I’m personally more concerned with the original files getting corrupted than with the metadata DT adds, so I think I’m going to extend my backup strategy with a way to back up separate files (mostly PDFs) as a Finder folder based on the file tree in DT. If I understand the CCC support page correctly, the files on the backup medium will be compared to the files on my macOS drive automatically (which is usually an improvement over manual checks).

That’s correct, and I do use that setting too - it offers additional protection against the backup medium failing, but it won’t recognise that an original file has changed to something it shouldn’t be.

@Blanc I do the same thing with CCC and TM. I just looked into Arq; if the backend is S3, it encrypts the files but somehow manages to upload only the deltas, making the total backup only slightly larger than the sum of my database sizes?

The total size of my backup suggests that you are right in your interpretation of what happens; my DT backup is approx. 2.5 times the size of my databases.

Thanks. If I could mount the cloud service as a normal volume (e.g. Dropbox, S3, OneDrive), could I achieve the same result with CCC that I would get with Arq?

I’ll think about that - I’m unsure.

I think the answer to that question is here, in the Bombich knowledge base. The short answer is that, depending on what exactly you do with Arq, you could replicate that with CCC.


You legend, thanks a bunch.

CCC offers limited support for third-party filesystems, such as those provided by FUSE for OS X. Due to the large number of filesystems that can be provided by FUSE, CCC provides generic support for these “userland” filesystems rather than specific support. CCC takes a best effort approach by determining the capabilities of the source and destination filesystems, warns of potential incompatibilities, then presents only unexpected error conditions that arise during a backup.

Doesn’t sound too promising, especially for important backups.

You’re most welcome 🙂

I agree - if it’s (for any value of “it”) mission critical, it should probably be used in the way it was intended.

My backup strategy:

  • Daily with Arq (only DT databases in the backup set; destination OneDrive)
  • Daily with CCC (whole drive, not bootable)
  • Weekly with Arq to a local drive (SSD)
  • Monthly via DT export and encrypted upload to OneDrive

Last year, after my iMac suddenly lost all its data, I restored from my Arq backups. This was very easy and went without any problems. Everything worked as expected. My settings were restored from the CCC backup.

I can recommend backing up via Arq (online and local).


  • I access my notes on a Mac and an iPad (2 devices)
  • Time Machine should of course be in use by all Mac users; stored on an external drive
  • I also back up note versions using a smart rule, triggered by “before saving”
  • A weekly backup using the archive feature, stored on iCloud
  • My weekly backups also include a full export of notes using the file/folder option, also stored on iCloud


How do you restore only the settings from CCC?

I have been playing around with Arq now, and it seems the new encrypted databases in DT don’t support delta uploads?

In an unencrypted setting, the backup client sends only the changed files. In an encrypted setting, there’s only one file, namely the encrypted database, and the client has no way to figure out which files inside it were changed. If it could, the encryption would be broken.
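A toy illustration of that point, using a throwaway XOR keystream in place of real encryption (insecure, purely to show the effect): re-encrypting the same data under a fresh random nonce produces a ciphertext that differs almost everywhere, so a backup client diffing two encrypted snapshots sees essentially everything as changed.

```python
import hashlib
import os

def toy_encrypt(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    """NOT real crypto: XOR with a SHA-256-derived keystream, illustration only."""
    out = bytearray()
    counter = 0
    while len(out) < len(plaintext):
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(p ^ k for p, k in zip(plaintext, out))

key = b"backup-key"
database = os.urandom(4096)  # stand-in for the encrypted DT database file

# Two backup runs of the *same* unchanged database, each with a fresh nonce:
c1 = toy_encrypt(key, os.urandom(16), database)
c2 = toy_encrypt(key, os.urandom(16), database)

differing = sum(a != b for a, b in zip(c1, c2))
print(f"{differing} of {len(database)} bytes differ")  # nearly all of them
```

A client that sees individual plaintext files can sync only the ones that changed; faced with one opaque container like this, it has little choice but to re-upload the whole blob.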


I wasn’t sure if they encrypt the entire DB as one file or the individual files, etc.

Encrypting single files would leave the file names unencrypted, I guess. So not a good idea.