Perhaps I need to start another thread here, so tell me if so. Even though I’ve successfully verified the database, and have manually run File -> Optimize Database with no errors, when I run that script I get “Optimization of database failed”. Any ideas?
Also, is there a way to make this happen automatically on a periodic basis?
I agree most people do not think about file integrity, but a good backup is really only of value if the files are actually identical to what was originally stored.
To explain a bit for those who read along: a checksum or hash is a practically unique digital ‘fingerprint’ of a file. When you input the text abc123, a hash algorithm produces a fairly short, readable but seemingly random string of characters. In fact, it is not random: as long as the input abc123 stays the same, the output stays the same. If you change one character, for example zbc123, the output changes completely. The same goes for a file, where even the slightest change results in a completely different output of the hash algorithm.
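A quick way to see this ‘avalanche effect’ for yourself (a minimal sketch using Python’s standard hashlib, with SHA-256 as the example algorithm):

```python
import hashlib

# Hash two inputs that differ by only a single character.
# The same input always yields the same digest; a one-character
# change produces a completely different one.
digest_a = hashlib.sha256(b"abc123").hexdigest()
digest_b = hashlib.sha256(b"zbc123").hexdigest()

print(digest_a)
print(digest_b)
```

Both digests are 64 hexadecimal characters regardless of input length, yet they share no resemblance.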
Currently DT users have no simple way to check the integrity of a static (non-versioned) backup created with the ‘archive’ menu. AFAIK the only way to compare databases is by looking at the item counts and file sizes, or by manually comparing checksums for those who are familiar with that.
The point is: one or more corrupt file(s) within the backup database will be hard to spot if the size and file number stay the same. This might sound trivial, but a file missing (many) bits or bytes of data can appear to be in order ‘from the outside’, but might in fact be corrupt.
My suggestion would be to keep the archive function as it is, but also have DT/macOS calculate a checksum of the database before it is zipped and store that checksum in an accompanying text file within the ZIP for reference purposes. This could of course be a checkbox in the preferences, enabled by those who want it and left unchecked by those who don’t.
Then a ‘restore backup’ menu item (which doesn’t exist now AFAIK) could simply unzip the archive, recalculate the checksum if one exists, compare that to the checksum in the text file, and copy and open the database when the checksums match. A similar ‘check backup integrity’ menu item could do the same but without copying and opening.
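The archive-with-checksum and verify steps could be sketched roughly like this (a hypothetical illustration of the suggestion, not anything DT actually does; it also simplifies by treating the database as a single file, whereas a real DEVONthink database is a package folder whose contents you would hash individually or via a combined digest):

```python
import hashlib
import zipfile
from pathlib import Path

def file_sha256(path: Path) -> str:
    """Compute the SHA-256 checksum of a file, reading in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def archive_with_checksum(db: Path, archive: Path) -> None:
    """Zip the database together with a .sha256 sidecar entry."""
    checksum = file_sha256(db)
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as z:
        z.write(db, db.name)
        z.writestr(db.name + ".sha256", checksum)

def verify_archive(archive: Path, workdir: Path) -> bool:
    """Unzip, recalculate the checksum, and compare to the stored one."""
    with zipfile.ZipFile(archive) as z:
        z.extractall(workdir)
        names = [n for n in z.namelist() if not n.endswith(".sha256")]
    db = workdir / names[0]
    stored = (workdir / (names[0] + ".sha256")).read_text().strip()
    return file_sha256(db) == stored
```

A ‘check backup integrity’ item would be exactly `verify_archive` without keeping the extracted copy; ‘restore backup’ would keep and open it only when the function returns True.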
(And while I’m on this topic: many software vendors publish the hash of their released software or firmware on their website for users to check file integrity after downloading it. As DT is signed, it’s fairly easy to do so as a user if the hash was available).
Creating a database archive does a database optimization and verification before compression. The hash wouldn’t be the same pre- and post-archive unless the hash was generated after the optimization step.
A backup alternative that has been overlooked is adding a second (or third or fourth) Time Machine drive. Time Machine will automatically alternate backups between the drives. If one fails you can always recover from the other. The process is entirely transparent.
I don’t have any details, but if I understand correctly the point seems to be that TM doesn’t back up mounted encrypted volumes and automatically unmounts them.
I presume this is done to prevent encrypted data being backed up to an unencrypted volume, a volume with weaker encryption, or a volume with different credentials. I’ve read you can prevent unmounting of the volume by excluding it as a backup target in TM, but that workaround obviously doesn’t back up your data.
I would say use another tool like CCC, either combined with TM or separate. @cgrunenberg mentioned a bugfix is likely to be released soon for DT.
Are only network volumes affected? I understood it could also be locally stored volumes.
As I said, I don’t know the details or whether it’s a bug, but if unmounting of encrypted volumes is done to prevent decrypted data from ending up on an unencrypted or weakly encrypted medium, this makes perfect sense to me.
Say you’ve got data that needs to be protected and place that in an AES256 encrypted volume on your primary disk. I think many users probably don’t expect the backup software to simply copy the decrypted data to an unencrypted volume or a volume with completely different credentials and level of encryption without their knowledge.
Thanks for that suggestion; I was not aware TM would alternate automatically. I actually cycle the disks I use for TM; the idea behind that is that malware cannot possibly affect a disk which is not attached. In the case of failure, however, I would be thrown back to the last backup on the previous disk. (I hedge against that by combining TM with CCC, and assume failure of both backups at the same time to be unlikely.)