Feedback on a hybrid setup (imported and indexed databases) synced via WebDAV, please?

Hi everyone :wave:

This is my first post here, so I am turning from a silent reader into an active one. Glad to be here, and many thanks in advance for all the great content that has helped me on my way so far.

I am writing this post mainly to get feedback on whether my setup makes sense, whether I should change something, or perhaps to gain new perspectives on possibilities I don't see at the moment. Truth be told, I have been thinking and tinkering for quite a while, exploring different options, but I need advice from you wise long-term users before going all in. I'm just trying to establish a solid base structure beforehand.

My main objective is to have DT as a hub that enables me to draw connections across all my content. That includes my Obsidian notes and reference materials such as PDFs and eBooks, and of course I'd like to take advantage of all the organizational power DT has for handling both sensitive and not-so-sensitive data. I also use DTTG (only for selected content). That said, I want to keep it (kind of) lean, which means I would like to minimize the storage needed on the device.

I use DT with a Synology via WebDAV and want to have as much data there as possible (as opposed to keeping the content locally). I understand that databases live locally, as that is simply how DT works, while the sync store on the Synology is used for syncing. So here is what I came up with after reading for quite a while, and without claiming to understand everything just yet.

My Sync Stores:

  1. iCloud for the inbox (trying to keep the space used to a minimum, since I would rather not pay more and more over time as the space needed increases…)
  2. Library (on NAS, via WebDAV, sync indexed content enabled, encrypted)
  3. Glacier (on NAS, via WebDAV, sync indexed content disabled, encrypted)

This would leave any database within Library, as well as the inbox, potentially accessible via DTTG or on a second Mac (e.g. a work machine). That covers work-related material, reference material, project-specific stuff and the like: mostly PDFs, text documents, books and notes.

Glacier is for everything that stays on the NAS. It is index-only, so its content turns up in searches and won't be "out of sight", while the local disk space required is minimized. Here I would store videos, personal documents, saved podcasts and other material that is unlikely to be needed on the go and has more of an archival character; those contents will only be accessed from time to time. The plan is to gradually move data from Library to Glacier once it is no longer actively used.

I think this could work for my specific use case. Moving a file from Library to Glacier moves the file accordingly in the file system: yay! This should reduce the space required for Library (on the local disk) while the file moves to its Glacier destination (on the NAS).

Why do I think this would minimize the disk space needed locally? I did a quick test with a database indexing two PDFs totaling 17 MB; the database that merely indexed them came to a little more than 2 MB. I understand that I will use more storage overall, but since the bulk of it lives on the NAS, I don't mind: the cost factor there is insignificant and not part of a subscription. :wink:
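
In case anyone wants to repeat that comparison, here is a minimal sketch (Python 3; both paths are placeholders, not my real ones) that sums up the on-disk size of the database package versus the indexed originals:

```python
# Minimal sketch: compare the on-disk size of an (unencrypted) .dtBase2
# database package with the size of the indexed originals.
# Both paths below are placeholders.
import os

def dir_size(path: str) -> int:
    """Total size in bytes of all files below path."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # skip unreadable entries
    return total

for label, path in [
    ("Indexed originals", "/Volumes/NAS/Reference"),
    ("Database package", os.path.expanduser("~/Databases/Test.dtBase2")),
]:
    print(f"{label}: {dir_size(path) / 1024 / 1024:.1f} MB")
```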

While I did the above test with an unencrypted database, I have now set up the final two encrypted databases and added the same two PDFs to each:

PDF size in sum: 7.4 MB on disk
Glacier (indexed) DB with the above PDFs: 43.1 MB
Library (imported into DT) DB with the above PDFs: 52.6 MB

The one thing I don't get is why these databases don't shrink even when I delete both test PDFs, empty the trash, and so on. I guess this is just some initial size an encrypted database has, but I can't explain it. Maybe the sample PDFs are simply too small for any change to show up in the database files right away.

To me, this seems to work. Just to be certain, though: Is there any blind spot in my thinking or anything I should be aware of?

If you read this far: thank you, and I appreciate any feedback.

At first glance I don't see anything wrong with your setup or logic. Two comments:

  1. See how you fare with sync via iCloud; whilst I think it is perfectly reasonable to try it, numerous users here on the forum have complained that iCloud is unreliable. Others, however, have remarked on trouble they have had with other cloud providers, suggesting there is an element of luck or other user-specific factors at play.
  2. To the best of my knowledge, encrypted DT databases are in effect disk images; they can grow up to the maximum size you set when creating them. They do not, however, shrink (this is not specific to DT, but a limitation of the way operating systems handle sparse images). I posted steps which I successfully used to reclaim space from an encrypted DT database here in the German-language section of the forum. If you do not speak German, and a translation tool such as DeepL doesn't get you there, post back here and I'll translate the post. Note that the steps worked on macOS 11.3 at the time; I've not repeated them on 12.x.
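
A rough sketch of the general idea (Python 3 on macOS; the path is an example, the database must be closed in DT first, and hdiutil asks for the password of an encrypted image):

```python
# Rough sketch: reclaim free space from a closed, encrypted DT database
# by compacting the underlying sparse image with hdiutil.
import subprocess
from pathlib import Path

db = Path.home() / "Databases" / "Glacier.dtSparse"  # example path
img = db.with_suffix(".sparseimage")

db.rename(img)  # give the file the extension hdiutil expects
try:
    # hdiutil prompts for the password of an encrypted image
    subprocess.run(["hdiutil", "compact", str(img)], check=True)
finally:
    img.rename(db)  # restore the .dtSparse extension
```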

Welcome @pawo

Your setup sounds reasonable. However, unless you’re on a gigabit Ethernet connection, we don’t advocate keeping your databases on an NAS, especially if you (will) have large databases.

Are you just starting out with the Glacier database?


As @Blanc mentioned, iCloud has been a poor performer in terms of reliability lately. However, if you’re just syncing the Global Inbox you may have a better experience than trying to sync many databases. YMMV


And no, encrypted databases don’t currently compact themselves.
Lastly, make sure you’ve specified a good maximum size for the database; anticipate the size of the database and add 10-20% more to allow for unexpected growth.
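
For example (a trivial sketch; the 4 GB figure is just an illustration):

```python
# Trivial sketch: choose a maximum size with 10-20% headroom.
expected_gb = 4.0                # anticipated database size (example figure)
max_size_gb = expected_gb * 1.2  # 20% headroom -> 4.8 GB
print(f"Set the maximum size to roughly {max_size_gb:.1f} GB")
```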

Thank you very much @Blanc and @BLUEFROG :slight_smile:

@Blanc I appreciate your explanation regarding encrypted disk images; I didn't know that. Your hint about reclaiming space is also very helpful, and I'll try it on a test database. If it still works, I wonder whether this could be automated (once a month: close DT, reclaim, then reopen). But even if it stays a manual task, or turns out not to be possible, that is no dealbreaker; the encryption is more important to me.
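
If I ever try the automation, I imagine it would look something like this (an untested sketch, Python 3 on macOS; it assumes the rename-and-compact approach works via hdiutil, that the app is called "DEVONthink 3", and the path is an example; scheduling could then be done via launchd or cron):

```python
# Untested sketch of a "close DT, reclaim, reopen" job.
import subprocess
import time
from pathlib import Path

db = Path.home() / "Databases" / "Glacier.dtSparse"  # example path
img = db.with_suffix(".sparseimage")

# 1. Quit DEVONthink so the database is closed.
subprocess.run(
    ["osascript", "-e", 'tell application "DEVONthink 3" to quit'],
    check=True,
)
time.sleep(10)  # crude wait for a clean shutdown

# 2. Reclaim free space from the sparse image. For an encrypted image,
#    hdiutil asks for the password (see its -stdinpass option).
db.rename(img)
try:
    subprocess.run(["hdiutil", "compact", str(img)], check=True)
finally:
    img.rename(db)

# 3. Reopen DEVONthink.
subprocess.run(["open", "-a", "DEVONthink 3"], check=True)
```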

@BLUEFROG Fortunately, I am on a wired Ethernet connection 90% of the time. :tada: That's a great heads-up regarding iCloud. So far I've not had any severe hiccups, but it's good to know so I can keep an eye on it.

Indeed, I am starting out fresh with the mentioned setup (Library, Glacier, iCloud inbox). Everything is currently set up and running, though I am still playing with some test files and will take it slowly from there. At least, that's the plan.

I did try to anticipate the growth and went with 5 GB per database, which is plenty for now. I wonder, though: let's say I hit the maximum in the future. Would it be a viable option to then just make a new one and set its size accordingly?


You’re welcome and I’m glad to hear you’ve got a nice hardwired connection. I use an Ethernet connection at least 80% of the time! :slight_smile:

Would it be a viable option to then just make a new one and set its size accordingly?

Yes, this is feasible but let me have a think on it too.

At least with macOS 12.4 it is possible to:

  1. change the file extension of the (closed!) encrypted database from .dtSparse to .sparseimage
  2. open Disk Utility, and select Images > Resize… from the menu
  3. in the ensuing dialog, select the database
  4. set the new size
  5. change the file extension back to .dtSparse

Whilst I cannot test that the maximum size really has changed, the new size is displayed in Disk Utility when the database is next opened in DT, so I have no reason to assume the size hasn’t been correctly changed.
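
The same should be doable from the command line; a sketch (Python 3; I've only tested the Disk Utility route above, and path and target size are examples):

```python
# Sketch: resize the (closed!) encrypted database via hdiutil instead of
# Disk Utility. Path and target size are examples.
import subprocess
from pathlib import Path

db = Path.home() / "Databases" / "Library.dtSparse"  # example path
img = db.with_suffix(".sparseimage")

db.rename(img)  # step 1: expose it as a sparse image
try:
    # steps 2-4: set a new maximum size of 10 GB
    subprocess.run(["hdiutil", "resize", "-size", "10g", str(img)], check=True)
finally:
    img.rename(db)  # step 5: restore the extension
```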

Since yesterday I've been pondering that sentence from @BLUEFROG saying that encrypted databases don't currently compact themselves. Maybe that's a hint that someday they will do exactly that. :joy:

Until then, I'll definitely make a note of the valuable insights from @Blanc on how to deal with encrypted DT images. Invaluable for sure! Thanks a lot, @Blanc


There are many things under consideration in our applications, and that would be one of them. I of course can't promise an IF or a WHEN, but it is on our (very long) list of possible enhancements.

Hi, I'd like to report back and ask one question, @BLUEFROG. I have finally imported/indexed everything I wanted and was able to search across all my different sources. I think this was my first magical moment in DEVONthink. It felt excellent.

My Glacier database is set up not to sync file contents. Please see the image below:

I noticed one thing, though: the encrypted sparse image on the local disk has the same size as all the indexed content (roughly 4.2 GB), and this size is also reported in DEVONthink when I click "Get Info". The sync store itself (on the NAS), however, is around 1.2 GB, which is what I would expect, since it holds "only" the metadata.

What I don't understand is: why is the local encrypted sparse image so large? Shouldn't it, too, hold only the metadata and correspond to the size on the NAS?
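
In case it helps the diagnosis, this is how I would check whether the file's logical size differs from what it actually occupies on disk, since for sparse images the two can differ considerably (a quick sketch, Python 3; the path is an example):

```python
# Quick check: logical size vs. actually allocated size of the sparse
# image. st_blocks counts 512-byte blocks; the path is an example.
import os

path = os.path.expanduser("~/Databases/Glacier.dtSparse")
st = os.stat(path)
print(f"Logical size : {st.st_size / 1024 / 1024:.1f} MB")
print(f"On-disk size : {st.st_blocks * 512 / 1024 / 1024:.1f} MB")
```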

Please post screencaps of the sizes you are referring to.

Hi @BLUEFROG, please find enclosed some annotated screenshots; I hope they help. Meanwhile, I also removed both groups and reindexed them, with no effect. What I find even more mysterious is that the sync store on the NAS is so small. Can that be? All logs seem fine, and I even successfully verified the database.

My next idea would be to remove the Glacier database completely and set it up from scratch. Or how would you go about this? What would be the expected behavior in terms of size on the local disk and on the NAS?

Appreciate any insights and your thoughts :slight_smile:

EDIT: I couldn't resist and made another encrypted database, indexing exactly the same data. Within DEVONthink (via the database properties) I get: 4.3 GB used, 5 GB available. In the Finder, this test database shows as 49.4 MB. I can't explain this either. :see_no_evil::joy:

I’m hazarding a guess you actually imported into the initial database but more carefully indexed into the test database.

Well, both databases and the groups in question had the little Finder icon next to their names. From your reaction, though, I took it that an indexed database should not be that large. I deleted everything, including the initial Glacier, and set it up fresh. With the same content indexed, the database now has a size of around 50 MB, which seems fine.

How are you doing the indexing?

File → Index Files and Folders…

Okay. Thanks!