Slow startup

Hello,

I only have two relatively small databases. One is for personal files, mostly scanned documents and some Word and Excel files; the other is work-related, mostly archived emails plus some more Word, Excel, and PDF files.

When DEVONthink Pro Office starts up on my iMac, it takes several minutes reading the work database. Why does it need to do that, and why does it take so long? I’m adding more and more emails to it, but it only has about a year’s worth of messages so far. How can I speed up the startup process?

Thanks,
Chris.

Where is the database located, i.e. what is the file path?

From your other post mentioning 455,000 emails… this is not a “relatively small database”. Size in gigabytes isn’t the critical number. If you check File > Database Properties > … for a given database, the numbers of total and unique words are the more critical figures. On a modern machine with 8 GB of RAM, a comfortable limit is 40,000,000 words and 4,000,000 unique words per database. (Note: this does not scale linearly, so a machine with 16 GB wouldn’t necessarily have a comfortable limit of 80,000,000 words / 8,000,000 unique words.) In short, the amount of text in a database matters far more than its size on disk.
If you have a database of images, it will have very few words but be large in gigabytes.
If you have a database of emails, it will have many words, but may be smaller in gigabytes.
The second one may perform more poorly as the number of words increases beyond the comfortable limit.
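To make that contrast concrete, here is a minimal sketch. All of the database names, sizes, and word counts below are invented for illustration; only the comfortable limits come from the figures quoted above:

```python
# Hypothetical example: two databases checked against the comfortable
# limits quoted above (40M total words / 4M unique words on 8 GB of RAM).
# All names and numbers here are made up for illustration.
LIMITS = {"total": 40_000_000, "unique": 4_000_000}

databases = {
    # big on disk, little text -> comfortably within the limits
    "scanned images, 20 GB": {"total": 500_000, "unique": 60_000},
    # small on disk, lots of text -> over the limits
    "archived email, 6 GB": {"total": 120_000_000, "unique": 5_500_000},
}

for name, counts in databases.items():
    over = [key for key, limit in LIMITS.items() if counts[key] > limit]
    if over:
        print(f"{name}: over the comfortable limit for {' and '.join(over)} words")
    else:
        print(f"{name}: within the comfortable limits")
```

The image database is twenty times larger on disk but poses no problem, while the much smaller email database is the one past the comfortable limits.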

Smaller, more focused databases will generally perform better, Sync faster, and be more data-safe in the event of a catastrophe (avoiding the “all your eggs in one basket” problem). They also give you the opportunity to close unused databases when you’re not using them. This frees up resources, not only for DEVONthink, but the rest of the system. There is no benefit to having a bunch of unused databases open all the time.

Correct me if I’m wrong, but it seems that the database is loaded into memory on startup. There are currently about 7 million unique words out of 275 million words total.
Can this be done differently, so that only the working set is loaded into memory? What kind of database are you using to store these? Is there any chance they could be stored in a PostgreSQL database? I have an instance running on my Mac anyway.

BTW: this machine has 24 GB of RAM and a Fusion Drive (128 GB SSD + 3 TB spinning rust), which is usually pretty quick.

Thanks,
Chris.

Yes, the databases are loaded into memory on startup.
No, this cannot be done differently, nor can it be stored in PostgreSQL (or any other external database system).
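For a rough sense of why startup is slow, here is a back-of-the-envelope comparison of the word counts reported in this thread against the comfortable limits quoted earlier. This is only a sketch, assuming those limits apply as stated; it is not a DEVONthink API:

```python
# Compare the word counts reported in this thread against the
# comfortable limits quoted earlier (40M total / 4M unique words).
COMFORT_TOTAL = 40_000_000
COMFORT_UNIQUE = 4_000_000

reported_total = 275_000_000  # total words in the work database
reported_unique = 7_000_000   # unique words in the work database

print(f"total words:  {reported_total / COMFORT_TOTAL:.1f}x the comfortable limit")
print(f"unique words: {reported_unique / COMFORT_UNIQUE:.1f}x the comfortable limit")
```

By that rough yardstick the work database is nearly seven times over the comfortable total-word count, which fits the earlier advice to split it into smaller, more focused databases.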