Revisiting the prohibition on using Dropbox for backing up

Tx Jim, you’re right. I’ll store the ZIP files. Yeah, safer than Dropbox, but still not safe…

WebDAV Nav - you should try it, see what you think. I know you have a real WebDAV solution via Apple Server (from before they got rid of it).

Yeah, safer than Dropbox, but still not safe…

Actually, neither is technologically safe for package files like ours. (Ignoring the potential security “safety” you’re referring to :slight_smile:.)

WebDAV Nav - you should try it, see what you think. I know you have a real WebDAV solution via Apple Server (from before they got rid of it).

I had a copy a long while ago, but haven’t messed with it since. I may have to check it out again.

Going back to external disks vs flash drives - yep, I have a collection of flash drives, of which at least a fourth stopped working after a while. Some failed exactly when I most needed them. You’d think they would be more reliable. OTOH, I have SSD blades in enclosures, plus Samsung and other SSD drives, and none has ever failed.

Is there any difference between creating a Daily Backup Archive, as opposed to going into Finder and just compressing a DT file?

If you have 20 databases, creating the Daily Backup Archives one database at a time is quite slow. So one might instead create a compressed copy of the directory containing all those databases. But the compression is very slow.
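Batch archiving can at least be scripted; here is a minimal Python sketch (the folder layout and the `.dtBase2` extension are assumptions, and on macOS `ditto -c -k --keepParent` would preserve extended attributes that plain ZIP tools drop):

```python
import shutil
from pathlib import Path

def archive_databases(db_dir: str, out_dir: str) -> list[str]:
    """Zip every .dtBase2 package in db_dir into out_dir; returns archive paths."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    archives = []
    for pkg in sorted(Path(db_dir).glob("*.dtBase2")):
        # make_archive appends ".zip" to the base name itself;
        # base_dir keeps the package folder as the top-level entry in the zip
        archives.append(
            shutil.make_archive(str(out / pkg.stem), "zip",
                                root_dir=pkg.parent, base_dir=pkg.name)
        )
    return archives
```

This only mirrors the "zip them all" idea from the thread; it does not do the verification and optimization that the built-in Daily Backup Archive performs first.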

But from what is said here, all this can be avoided by saving to appropriate media rather than to the cloud (not even to Backblaze, mentioned above?), since the save can then be of uncompressed data. (For some reason, only the cloud tends to chew up metadata in non-zipped files.) Appropriate media may or may not be an SSD, depending on your take on their reliability. (The stated failure rates look good, but maybe the problem is what happens to small SSDs while they’re being thrown around in your pocket.)
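An uncompressed save to local media can be a plain metadata-preserving copy into a dated folder; a sketch with placeholder paths (`shutil.copy2` keeps timestamps and permissions but not necessarily macOS extended attributes, so for packages a Finder copy or `ditto` may be safer):

```python
import shutil
from datetime import date
from pathlib import Path

def copy_to_backup_drive(db_dir: str, drive: str) -> str:
    """Copy the databases folder, uncompressed, into a dated folder on the drive."""
    dest = Path(drive) / f"DT-backup-{date.today().isoformat()}"
    # copy2 preserves timestamps and permission bits; extended attributes
    # and resource forks are not guaranteed on all platforms
    shutil.copytree(db_dir, dest, copy_function=shutil.copy2, dirs_exist_ok=True)
    return str(dest)
```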

If you have two local sync stores created identically in terms of the databases saved to them, both selected for syncing, and both on SSDs (or spinning drives), then, as I understand it, they will be kept in sync with each other just as one’s several Macs are also kept in sync. Two SSDs failing simultaneously is quite unlikely. (Would one be aware of the failure of one of the SSDs, however?)

Of course, one should also have other backups, but the benefit of the two SSDs (perhaps a pair per machine) is that they would all be right up to date - which I guess is a good thing at least for certain purposes, but not for recovering earlier unintended deletions that have been propagated everywhere. Hence the need for earlier backups.

Taking it one step further. Here’s the scenario. You realise that last year you must have deleted a big chunk of one of your databases, because you hit the wrong button and then emptied the trash without thinking. But that database has all the stuff you’ve added in the last year. You want your new stuff, but also what disappeared a year ago. So you go back and find a copy of your database as it was 366 days ago. What, then, is the best routine? Can you use a sync process? (And what would be the routine if, unfortunately, you couldn’t remember how long ago you deleted this data?)

The automatic sync actually uses different, dynamic intervals for different sync locations, as local sync stores and Bonjour connections are faster than e.g. Dropbox/WebDAV.

Thanks, Christian! Perhaps this could also go into the feature-request list, which might come about someday (or not :stuck_out_tongue:): the ability to choose different sync intervals for different sync locations. So for one location you could leave it on Automatic, on another you could set it to daily, and so on. I think it would open up some interesting possibilities for using some locations as backups.

Is there any difference between creating a Daily Backup Archive, as opposed to going into Finder and just compressing a DT file?

There is verification and optimization that goes on before creating the archive.

But from what is said here, all this can be avoided by saving to appropriate media rather than to the cloud (not even to Backblaze mentioned above?), since the save can then be of uncompressed data.

Yes, this is possible, but it’s already taken care of with Time Machine (if in use). The online service Arq is also data-safe for online backups of native DEVONthink databases.

If you have two local sync stores created identically in terms of the databases saved to them, and both selected for, and on SSDs (or spinning drives), then, as I understand it, they will be kept in sync with each other just as one’s several Macs are also kept in sync.

Correct. And local sync stores are a very fast sync method.

Taking it one step further. Here’s the scenario. You realise that last year you must have deleted a big chunk of one of your databases, because you hit the wrong button and then emptied the trash without thinking. But that database has all the stuff you’ve added in the last year. You want your new stuff, but also what disappeared a year ago. So you go back and find a copy of your database as it was 366 days ago. What, then, is the best routine? Can you use a sync process?

In this case, no, sync would not be an option, unless you were using Manual or a longer sync interval… If you made a mass delete, you would have to disable sync before it removed the deleted items from the sync data. Also, no - you can’t go back to earlier states of a database via sync. It is always current - even with your deletions.

Thank you very much for all that help. Interesting that Arq is kinder to metadata.

You’re welcome!

If you’re looking at a sync / backup option, always be wary of ones that tout constant or instant backups. It is always best to use one that uses a snapshot-style process, similar to what Time Machine does.
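A snapshot-style tool typically also thins its history rather than keeping a single mirror; a toy Python sketch of such a retention rule (the tiers here are an illustration, not any particular tool’s actual policy):

```python
from datetime import date

def snapshots_to_keep(snapshot_dates: list[date], today: date) -> set[date]:
    """Toy thinning policy: keep all snapshots from the last week, one per
    ISO week for a month, one per month beyond that (earliest in bucket wins)."""
    keep: set[date] = set()
    buckets_seen: set[tuple] = set()
    for d in sorted(snapshot_dates):
        age = (today - d).days
        if age <= 7:
            keep.add(d)
            continue
        # older snapshots collapse into weekly, then monthly, buckets
        if age <= 31:
            bucket = ("week", d.isocalendar()[0], d.isocalendar()[1])
        else:
            bucket = ("month", d.year, d.month)
        if bucket not in buckets_seen:
            buckets_seen.add(bucket)
            keep.add(d)
    return keep
```

The point of a scheme like this is exactly what the thread describes: you retain older states you can roll back to, instead of a constantly-updated copy that faithfully mirrors your mistakes.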

:slight_smile:

Arq would do that?

Yes, Arq allows a scheduled backup in the same fashion as Time Machine. In fact, our company president @eboehnisch has been using Arq for backups for quite a long time now.

Pretty strong recommendation! Thank you.

Do you have any ideas as to best practice when accessing a backed-up database that has data missing from the current database? Am I right in thinking that the drill would be to rename the earlier database so as to open it alongside the current database, then to find what has gone missing and drag it over to the current database?

Seems to me there is lots of discussion on how to back up databases, but not much on how then to use those backups…

@eboehnisch When using Arq, do you just tell it to back up the entire /databases folder? How many snapshots does Arq keep if you do this?

That would certainly be more convenient than manually exporting database archives once a week.

Arq just archives the changes, also inside the database packages. It does not back up the whole database over and over again. I let it back up my whole Documents folder with everything in it every four hours.

Thanks, guys, for such a good thread! @StephenUK, I use Backblaze, which does incremental backups and can certainly go way beyond one year. After @BLUEFROG’s and @eboehnisch’s comments, I just checked Arq - very nice. And very inexpensive! Feature-wise, Arq and Backblaze are very similar, and there are others.

Out of curiosity, where do you have Arq back up to? (for your Documents folder, in general, and the DT backup, specifically)

Thank you uimike! I will look at both.

I have Arq backup to Amazon S3, personally. When I started using it, I had an AWS account and there were many fewer supported cloud options. Nowadays, there are others that may be cheaper for you (or not).

Thank you for your reply.

My preferred cloud is Tresorit, which doesn’t offer direct integration with Arq. However, Tresorit can sync any folder on my computer. Would this suffice as a cloud backup solution:

  1. Have Arq back up my /databases folder every 4 hours to a different folder on my computer.
  2. Tell Tresorit to sync that folder.

Is this a data-safe method? Tresorit sync is real-time, but the contents of the folder will only change after Arq has updated its snapshot.

I back up to Amazon S3, @RobH, as well as via WebDAV to a Synology here at the office.