Why doesn't DEVONthink store indexed file contents as regular files (accessible at the filesystem level) in the sync location?

Simple as that. Thanks very much, but I don't need alternatives. I just want some light shed on whether and why it doesn't. And finally, if it really doesn't, as it seems, why does it need to embed everything in the database itself, when it could just as well handle it the way DEVONthink To Go does locally? By the way, I am referring to a Dropbox sync environment (if that changes anything about the inquiry at all; I am currently still remotely considering recovery from other sync environments, i.e. WebDAV, etc.). Thank you again.

You said the magic word: it's a database, not "regular" folders. You shouldn't go messing about directly with files in the database. And the sync location doesn't even store regular files, since it is binary in format.

Omg, that scrambles things quite a bit, seemingly. OK then, what's the rationale behind it? I don't think that's too much to ask. Otherwise, DEVONthink To Go does it differently, doesn't it? Local Pro desktop stores have taken a different route as well; that is, when it comes to imported files, these are saved as-is (inside the local bundle, of course).

But DTTG does decode it on its end, in the end. What is the real advantage, besides some assumed processing-time gains, which can all be lost given the (obvious) loss in flexibility? Of course I get the point of databases, but does the sync store have anything else special that would absolutely require the current approach? My question is why sync stores have gone the all-or-nothing route, so to speak. Now we are actually talking about a `database about the database`. I know that (digital) life is made of choices, and I will definitely not try to argue for or against this or that. Given all that, finally, what is the advantage in the current situation? Why doesn't it keep, e.g., plain PDF files separate from the binary(ies) remotely, just as it does locally? For the sake of information at this point, I guess. Thanks again.

I appreciate very much the concern. I have backups.

Among other things, I believe the sync store keeps track of which files are present on which systems. That is, it has information beyond what’s in the database itself. Which in turn is very helpful for data integrity and useful things like that.
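To make that idea concrete, here is a purely illustrative sketch (the poster is speculating, and this is not DEVONthink's actual format or API) of a sync store that records, per device, which records are present where. All names (`SyncStore`, `mark_present`, `missing_on`) are hypothetical:

```python
# Illustrative only: a hypothetical sync store that tracks, per device,
# which record UUIDs are known to be present there. This is information
# beyond the database content itself, useful for integrity checks.

from dataclasses import dataclass, field

@dataclass
class SyncStore:
    # device name -> set of record UUIDs known to be present on it
    presence: dict[str, set[str]] = field(default_factory=dict)

    def mark_present(self, device: str, record_uuid: str) -> None:
        self.presence.setdefault(device, set()).add(record_uuid)

    def missing_on(self, device: str) -> set[str]:
        """Records that exist on some device but not on this one,
        i.e. what still needs to sync down."""
        everywhere: set[str] = set().union(*self.presence.values()) if self.presence else set()
        return everywhere - self.presence.get(device, set())

store = SyncStore()
store.mark_present("Mac", "uuid-1")
store.mark_present("Mac", "uuid-2")
store.mark_present("iPhone", "uuid-1")
print(store.missing_on("iPhone"))  # the record the iPhone still lacks
```

The point of the sketch is only that such bookkeeping naturally lives alongside (not inside) the files themselves, which is one plausible reason the sync store is more than a folder of documents.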


Flexibility does not matter if the data is intended to be read by one program only.

The “time gains” are very significant in the case of large databases. Among other things, the action of retrieving a single file from a cloud drive involves significant overhead, which is the reason it can take a couple of seconds to download a mere 50KB file when you have gigabyte internet. Imagine if you have to access thousands of such files every few hours.
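A back-of-the-envelope calculation makes the overhead argument vivid. The numbers below are assumptions chosen to match the post's example (a couple of seconds of fixed overhead per request, gigabit bandwidth), not measured figures for any particular cloud provider:

```python
# Back-of-the-envelope illustration (hypothetical numbers): why fixed
# per-request overhead dominates when fetching many small files, and why
# bundling them into one store transfer is so much faster.

PER_REQUEST_OVERHEAD_S = 1.5   # assumed fixed round-trip/API cost per request
BANDWIDTH_BPS = 125_000_000    # "gigabit" internet, roughly 125 MB/s
FILE_SIZE_BYTES = 50_000       # a mere 50 KB file
NUM_FILES = 5_000              # a modest database

def transfer_time(num_requests: int, total_bytes: int) -> float:
    """Total seconds = per-request overhead + raw transfer time."""
    return num_requests * PER_REQUEST_OVERHEAD_S + total_bytes / BANDWIDTH_BPS

one_by_one = transfer_time(NUM_FILES, NUM_FILES * FILE_SIZE_BYTES)
bundled = transfer_time(1, NUM_FILES * FILE_SIZE_BYTES)

print(f"5,000 files fetched individually: ~{one_by_one:.0f} s")  # ~7502 s
print(f"Same data as one bundled store:   ~{bundled:.1f} s")     # ~3.5 s
```

Under these assumptions the raw data (about 250 MB) takes only a couple of seconds to move; nearly all of the wall-clock time in the file-by-file case is request overhead, which is exactly what a database-style sync mechanism avoids.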

Therefore, cloud service providers develop, and encourage the use of, a separate mechanism to sync databases – those of DT and other software as well – from the “cloud drive” mechanism you’re perhaps accustomed to. In DT’s sync settings, you can see an option called iCloud (CloudKit). CloudKit is one such mechanism developed specifically for syncing databases.

The DT sync store isn't just a copy of your files from location A to location B. It also contains all the metadata and indexing (and probably other things I don't understand) that need to make their way to DTTG. Your database is far more than the files you put in it. It makes sense that this is packaged as efficiently as possible for the software to access. It's not intended to be handled by a human (you see the end result in DTTG).


I can see your whole point. Specifically, relative to the quoted excerpt, this does indeed make a lot of sense when talking about the iOS platform, which is not as rich in other data-management tools as macOS is.

Additionally, overhead in networking is indeed a real thing and should be avoided. But still…

The problem with those so-called encouraged separate sync mechanisms is that they come without sufficient tools to double-check basic operations, operations that otherwise could have and would have been checkable with ordinary filesystem-level tools and means. Just that.

I wouldn't care about an extra second, hour, day, or even a week or month of wait time if I could be absolutely certain of no data loss. Now we are reaching a point where the invested time can hit a break-even…

Feels like I'm being nudged to be content and conform with good ol' off-the-shelf end-consumer cloud-provider sync, which in general is more predictable and can be double-checked in more than one way.

What “file system tools” do you have in mind re iOS?


It would be very helpful if you could be more specific

On iOS, nothing at the time being.

As for DB management, iOS and, for that matter, iPadOS are leaner and always will be, and that isn't a problem.

Although I might agree with you that they may have been kept separate for a reason, what could those reasons be?

I don’t work for DT, and don’t have detailed knowledge about the structure of the sync store.

I think, though, that

reflects an overly optimistic view of off-the-shelf consumer sync tools. You don't have to look very far to see all sorts of reliability issues with all of these tools. And the assumption that a collection of potentially thousands of files (plus metadata) is easily "double-checked" by third-party tools seems more than a little questionable.
