**Unofficial but really useful DEVONthink API and Zapier Integration**

I am glad you find the unofficial API interesting.

Note that if your sole goal is to automate adding scanned documents to DT3, you may be able to do that more simply using an indexed Dropbox folder and/or Hazel and/or a smart rule.

Or you could set up a process to automate emailing your scanned documents and importing them into DT3 using this script, which is a lot simpler than the unofficial API:


Yeah, I didn’t mean that DT and iCloud (Drive) had a direct relation, other than both not being accessible as pure web services that have your typical REST endpoints that you can interact with as you please. I come from tools that are more your typical web services like Evernote and Google Drive, but I’m gradually taking steps to use tools that give me more control (like DT). That of course comes with certain trade-offs when you’re used to working in a specific way.

Of course, exactly what I figured. Just wanted to get a confirmation since I’m still not that familiar with DT, so I wasn’t sure if it somehow was possible to hook the API into the iCloud sync mechanism (which, to be fair, I don’t have much experience with either)—although I didn’t think that would be the case. I just wasn’t sure whether I’d need an always-on Mac in my house or whether I could run it in a DigitalOcean droplet or something like that. My assumption was of course that only the former was possible (DT being Mac-only and all), although the latter would have been great.

Anyway, I’ve already seen some mentions, I think both here and on MPU Talk, about being careful not to have DT open on more than one computer at a time. Are those valid concerns? If I repurpose an old MacBook as an always-on machine so it can serve up this API for interacting with DT through HTTP requests, are there any considerations I would need to keep in mind to avoid unnecessary conflicts or corruption of my DT databases?

So I don’t use Dropbox, and I would like the scans to arrive automatically into DT even if I don’t open my Mac, meaning if I’m on the go I’d like to see them appear in DTTG.

Thanks for sharing this. Perhaps it would suit my current needs better, but I assume this would still require me to use Apple Mail and keep it running on a Mac anyway to meet my above goal?

Since I have been planning on setting up an old Mac to sync my iCloud Photos library to an external HDD so I always have an offline copy of it, I could probably use the same computer for serving up the API. Or just running the email import script. I’ll probably try both.

Either way, thanks a lot for your help and guidance!

I think it may be helpful to reply in one post to concepts you asked about in multiple posts:

(1) Opening DT on more than one computer at a time is both supported and encouraged. The key point, however, is that you cannot open literally the same database, i.e. you cannot store the database on a NAS drive in your home and then access the same NAS folder simultaneously from two computers on your network. Instead, you need to set up a sync store somewhere (Dropbox, a Synology NAS with WebDAV, iCloud, and a few other possibilities). Each computer then has its own copy of the database and syncs that copy to the shared sync store. You can then have as many computers or iOS devices as you wish simultaneously accessing the same database by syncing to the same sync store.

(2) It may be helpful to understand a bit more about your use case - in particular, is the scanning of your documents being done by a “trusted person” such as a family member or employee who can have access to your DT3 database, or will it be done by some third party who should be shielded from database access? If it is a “trusted person,” then one simple solution is to set your document scanning person up with a synced copy of your database - see paragraph 1 above. Only the sync store needs to be available 24/7; neither of you needs to keep your computer running 24/7, since your data will sync back up as soon as you turn on your computer and it connects to the sync store.

(3) Of the two “unofficial” solutions I have posted, the “API” does need to be running 24/7 and does give the user access to your database overall. However, the “Mail Import Script” does NOT have to be running 24/7 and does NOT require access to your database. The mail import script simply requires setting up a dedicated IMAP email account for importing documents into DT3. Anyone can send documents to that email account, and if your computer with DT3 is not running at the moment, it will collect and import the email later, when you start up DT3 and Apple Mail.
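For concreteness, the sending half of that flow is just an ordinary email with the scan attached - nothing DT3-specific happens until Apple Mail picks it up. Here is a minimal Python sketch of the sender's side; the import address, SMTP server, and credentials are placeholders, not details of the actual script:

```python
import smtplib
from email.message import EmailMessage
from pathlib import Path

# Hypothetical dedicated IMAP account used only for DT3 imports.
IMPORT_ADDRESS = "dt3-import@example.com"

def build_import_email(sender: str, pdf_path: str) -> EmailMessage:
    """Build an email carrying one scanned PDF as an attachment."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = IMPORT_ADDRESS
    msg["Subject"] = Path(pdf_path).name  # subject doubles as the document name
    msg.set_content("Scanned document attached for DEVONthink import.")
    msg.add_attachment(
        Path(pdf_path).read_bytes(),
        maintype="application",
        subtype="pdf",
        filename=Path(pdf_path).name,
    )
    return msg

# Sending is plain SMTP; fill in a real server and credentials to use it:
# with smtplib.SMTP_SSL("smtp.example.com") as s:
#     s.login("user", "password")
#     s.send_message(build_import_email("scanner@example.com", "scan-001.pdf"))
```

Any scanner, phone app, or service that can send email can feed this account; the import script on the Mac side does the rest whenever DT3 and Apple Mail are next running.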

(4) An equally easy solution to allow “non-trusted” people to scan these documents is to set up a Dropbox folder to which your scanning people have access. Then set up DT3 to index the relevant Dropbox folder and create a smart rule which imports the indexed items into your database. This solution does not require your DT3 computer to be running 24/7.
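If the scanning side is automated rather than a person dragging files in, the "drop it into the indexed folder" step can be done by any script or service that can write files. A rough Python sketch, assuming a folder path you would substitute with whatever folder you actually index in DT3:

```python
from pathlib import Path

# Assumed location of the Dropbox folder that DT3 indexes.
INDEXED_FOLDER = Path.home() / "Dropbox" / "DT3-Inbox"

def deliver_scan(filename: str, data: bytes, folder: Path = INDEXED_FOLDER) -> Path:
    """Write a scanned document into the indexed folder; the DT3 smart
    rule watching that folder then imports it into the database."""
    folder.mkdir(parents=True, exist_ok=True)
    target = folder / filename
    # Avoid silently overwriting an earlier scan with the same name.
    n = 1
    while target.exists():
        target = folder / f"{Path(filename).stem}-{n}{Path(filename).suffix}"
        n += 1
    target.write_bytes(data)
    return target
```

Dropbox then syncs the file up, and the smart rule fires the next time DT3 sees the indexed folder change - so nothing here needs to run on the DT3 machine itself.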


Thanks @rkaplan — this is very helpful.

  1. Great! I’ve already set up iCloud sync, so should be all good then :slight_smile:

  2. This is a third-party professional mail scanning provider (usually called a commercial mail receiving agency, or CMRA, in the US) where all our snail mail is redirected. They support sending the scanned documents as both email and webhooks, but more metadata is available in the webhooks. Currently I have a Zap receiving the webhooks and checking the language of the content; if it’s not English, it translates it for me, etc. It would be helpful to have this translation stored together with the document to aid in search later on.

  3. I still think the API would be the best solution for me, as I’m not sure whether I will stick with Apple Mail (I’m trying to see if it can meet my needs since it seems to work well with DT, but I’m currently really missing a snooze button). I wouldn’t give any third party direct access to the API; instead, I’d receive the webhooks in Zapier and have Zapier interact with the API after processing the input.
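To make that plan concrete, here is the shape of "Zapier webhook → processed payload → HTTP POST to the always-on Mac" as I picture it, with the translation riding along so it gets stored with the document. I don’t know the unofficial API’s actual endpoints, so the URL, route, and field names below are all placeholders:

```python
import base64
import json
import urllib.request
from typing import Optional

# Placeholder endpoint -- substitute the unofficial API's real URL and route.
API_URL = "http://my-always-on-mac.local:8080/documents"

def make_payload(filename: str, data: bytes, translation: Optional[str] = None) -> bytes:
    """JSON body pairing the scanned document (base64-encoded) with the
    Zap's translation, so both land in DT3 together and the translation
    is searchable later."""
    body = {"name": filename, "content": base64.b64encode(data).decode("ascii")}
    if translation:
        body["comment"] = translation  # hypothetical field for the translation
    return json.dumps(body).encode("utf-8")

def post_document(filename: str, data: bytes, translation: Optional[str] = None):
    """POST the payload to the always-on Mac; requires it to be reachable."""
    req = urllib.request.Request(
        API_URL,
        data=make_payload(filename, data, translation),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    return urllib.request.urlopen(req)
```

In practice the POST would be a "Webhooks by Zapier" custom-request step rather than my own code, but the payload shape would be the same.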

Thanks again for all your help! It’s a bit overwhelming to get started, but I think I’m starting to grasp most of the important concepts now. I’ve started reading the manual, but just made it to page 18 so far. Just 274 pages to go! :sweat_smile:

While I love the unofficial API (indeed that is why I designed it), I suspect it may be overkill for your specific situation as described.

Since your CMRA supports webhooks and you have already figured out how to use the webhook to initiate a Zapier Zap, it seems to me it would be straightforward to create a Zap to place those documents into either a Dropbox folder or a free Google Drive folder. Then index that folder in DT3 and create a smart rule to import the contents into your database.

I think that process will be much more bullet-proof long-term and require much less software/configuration maintenance than the unofficial API.

Of course if you have more complex additional uses for the unofficial API then that is another story.
