Secure Backup Strategy

Hi All,

I’ve been using DEVONthink for a while now and have been backing up my databases with the “File → Export → Database Archive” method. I had to do a restore today and noticed that the archive isn’t encrypted or protected in any way. What’s the best way to create an encrypted backup directly from DEVONthink, or is this not possible?

Thanks in advance for any replies!

I use Time Machine hourly on an encrypted SSD and Arq for remote backups, also encrypted, to Wasabi cloud storage with immutable backups (90-day retention). I’ve restored databases with Time Machine in the past without issues. You can also browse or search the contents of database backups easily in Finder (show package contents) or with something like BackupLoupe (or even DEVONtechnologies’ EasyFind) if you only need to get at specific files. Arq works well too and preserves metadata (which can be an issue with other backup apps like Backblaze :face_vomiting:). Arq can also search within backed-up databases, and since I set it to run every 30 minutes, it’s a bit like version control.
I don’t see the value of doing extra exports in that case.
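
As an aside, the same package browsing works from Terminal if you just need to fish a single file out of a Time Machine copy. A minimal sketch, assuming a hypothetical backup path and DEVONthink’s usual internal package layout:

```
# A .dtBase2 database is an ordinary folder package; documents are stored
# inside it (under Files.noindex in current DEVONthink versions), so you
# can list or search them directly. Paths here are placeholders.
ls "/Volumes/TM-Backup/Databases/Work.dtBase2/Files.noindex"
find "/Volumes/TM-Backup/Databases/Work.dtBase2" -iname "*invoice*"
```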

Where do you store your exports? If it’s on an external drive, you could create an encrypted APFS container as an alternative to the above-mentioned options (HFS+ volumes can no longer be encrypted since Catalina). I’m not a fan of encrypted zip archives for anything bigger than 5 GB.
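
For example, a minimal sketch using the built-in hdiutil; the size, volume name, and paths are assumptions:

```
# Create an encrypted, growable APFS sparse bundle to hold exports
# (hdiutil prompts for a passphrase; adjust size/name/path as needed)
hdiutil create -type SPARSEBUNDLE -fs APFS -encryption AES-256 \
  -size 100g -volname "DT-Exports" ~/Backups/DT-Exports.sparsebundle

# Mount it before copying exports in, then eject when done
hdiutil attach ~/Backups/DT-Exports.sparsebundle
hdiutil detach "/Volumes/DT-Exports"
```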

(not a direct answer to your question, but I wanted to share how I do this)

5 Likes

Here’s what I do:

iMac: Time Machine runs continuously to a Synology NAS and two connected USB drives. Synology Backup also runs on selected key folders, including DEVONthink’s. Backblaze runs as well, taking all it can to provide remote copies. And, like you, I have an AppleScript scheduled with cron that exports the DEVONthink databases to zip files once a week (see the sketch after this list); those zip files are captured in the regular backups. I keep a monthly copy of these zip files for a few years.

MacBook: Same as above, but without Backblaze or the zip export. Syncs with the iMac (Bonjour, WebDAV, and one small database via CloudKit).

iPhone and iPad: Sync with the iMac (Bonjour, WebDAV, and CloudKit). The entire devices are backed up to iCloud with the standard iOS backup.

Synology NAS: Backed up daily to an attached USB drive and synced to Backblaze B2.
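
Not the actual script, but a minimal shell equivalent of that weekly export (the real one is an AppleScript; paths, names, and schedule here are assumptions, and the database should be closed in DEVONthink while it’s zipped):

```
#!/bin/zsh
# Weekly zip export of a DEVONthink database package (hypothetical paths).
# Scheduled via crontab, e.g. Sundays at 03:00:
#   0 3 * * 0 /Users/me/Scripts/export-dt.sh
SRC="$HOME/Databases/Work.dtBase2"
DEST="$HOME/Backups/Work-$(date +%Y-%m-%d).zip"
# ditto preserves macOS metadata that a plain zip can drop
ditto -c -k --sequesterRsrc --keepParent "$SRC" "$DEST"
```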

I test restores from Time Machine at least a couple of times a month, if only because I want to roll back to a previous version of a file I’m working on. For Synology Backup I check the logs to make sure the backups actually happen. I pay little attention to Backblaze.

I do not consider DEVONthink’s sync a backup.

Is it secure? I hope and think so.

Over the top? Maybe.

3 Likes

I am thinking about creating an immutable backup of my most important DTP databases in Arq. But how does Arq behave here? If I change a small 1 MB file in a 20 GB database, will the whole 20 GB database be added to the immutable backup, or just the 1 MB change? After all, it makes a big difference whether a daily 1 MB change adds another 20 GB to the immutable storage or just 1 MB.

Arq deduplicates data: only the changes will be uploaded, not the whole 20 GB database.
I work from a 40+ GB database daily and my hourly Arq backups finish within 50 seconds most of the time.

1 Like

Thanks a lot for your quick reply. So I’ll set up this backup tonight as well.

1 Like

You’re welcome.
Here are the settings I currently use.

Immutability is set to 21 days, but the refresh interval (7 days) is always added on top of that, so backups are immutable for at least 21 days, and some backup records for up to 28.

It’s advisable to raise the default hourly retention (if thinning is used) to slightly above the immutable maximum (28 days), so I set it to 720 hours (30 days). Otherwise Arq produces errors when it tries to thin backup records that are still immutable (it looks messier).

1 Like

Thank you very much for your advice!! I’ve just tried to switch my existing bucket (created via Arq on Wasabi) to immutable, but got the message that “object lock is not available for the chosen storage location”. I’m now clarifying with Arq support what the reason might be; this should actually be possible with Wasabi.

Did you create the bucket with object lock enabled in the first place? This is required at creation time, at least for Backblaze.

1 Like

As mentioned by @chrillek, you need to create a new bucket first.

  1. Create a new bucket on Wasabi’s website and enable “Object Locking”.
  2. Create a new storage location in Arq (File > New Storage Location).
  3. Choose Wasabi.
  4. Enter Wasabi Access Key and Secret Key.
  5. Choose “Use existing bucket” and point to the immutable bucket you created in step 1.
  6. Create a new backup plan (File > New Backup Plan) and point this to your storage location created in step 2.
  7. In Arq’s settings for that backup plan enable immutability.
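
If you prefer the command line, the same kind of bucket can be created with the AWS CLI pointed at Wasabi’s S3 endpoint. A sketch; the bucket name and region are assumptions:

```
# Create a Wasabi bucket with Object Lock enabled. It must be enabled at
# creation time; it can't be switched on for an existing bucket.
# Bucket name and region below are placeholders.
aws s3api create-bucket \
  --bucket my-arq-immutable \
  --endpoint-url https://s3.eu-central-1.wasabisys.com \
  --create-bucket-configuration LocationConstraint=eu-central-1 \
  --object-lock-enabled-for-bucket
```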

Also worth mentioning are Wasabi’s 1 TB minimum storage charge and 90-day minimum retention policy.

You’ll always pay for 1000 GB, even if you store less.

And they also charge for every uploaded file for at least 90 days. So if you upload something today and delete it tomorrow, you still pay for it for three months.

This means the immutability and hourly thinning settings in Arq should be increased to reflect Wasabi’s policy.

You could use 90 days for the immutable setting and 7 days for the refresh interval, and set the hourly thinning to 2352 hours (90 + 7 days plus a day of margin = 98 days × 24 hours), since you pay for the storage anyway.

2 Likes

Thank you very much for your help! The backup is running! Now I’ll check over the next few days whether I’ve set everything up correctly…

1 Like

Are you backing up the original (and possibly open) DEVONthink databases, or only an exported copy? If the latter, how do you automate the export?

Please see my reply here

2 Likes