Yet another MCP server for DEVONthink

Hey folks, I built a small MCP server for DT that I’ve been using locally. Yes, I know there are others, but I wanted to implement my own take on handling private data: the server implements options for replacing sensitive data with anonymized strings while still allowing meaningful searches. There is also an option to exclude entire databases from the search altogether. It is not perfect, but it may go a long way toward addressing your privacy concerns.
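To illustrate the idea (this is a hypothetical sketch, not the server’s actual implementation, and all names in it are made up): one common way to anonymize data while keeping searches meaningful is deterministic pseudonymization, where the same sensitive value always maps to the same placeholder token, so queries against the placeholders still match consistently.

```python
import hmac
import hashlib

# Illustrative only: a keyed hash makes tokens deterministic but not
# reversible without the key. The key would live outside the database.
SECRET_KEY = b"local-only-secret"

def anonymize(value: str, kind: str = "NAME") -> str:
    """Replace a sensitive value with a stable, searchable placeholder."""
    digest = hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256)
    return f"[{kind}:{digest.hexdigest()[:8]}]"

# The same input always yields the same token, so a search for the
# placeholder of "Alice Smith" finds every document mentioning her,
# without the name itself ever leaving the machine.
```

Because the mapping is stable, an external model can still correlate mentions of the same entity across documents without ever seeing the underlying value.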

I have gotten quite a bit of good use from this already, combining searches in my databases with external sources. Often this involves scientific papers, which I mine for novel ideas when tackling engineering projects. I am sharing it in case it is useful to others. For your convenience, I provide a signed binary so it doesn’t require building from source. Enjoy.


Is this referring to DEVONthink’s built-in AI functions?

(Oops, I deleted my own response rather than editing it.) Ah yes, that was in reference to DT’s built-in AI. It was my initial text and needs a bit of revision; the second point is wrong for sure.

The first is incorrect…
You can modify a database’s content via the Chat assistant, provided:

  • You have enabled Settings > AI > Chat > Assistant: Allow property & content changes
  • The AI model you’re using supports tool calls (though that’s still no guarantee as the support can vary)

That being said, AI is not meant to be an automaton, replacing the user in every way imaginable. Some tasks are more efficiently and logically done by you, not AI.

And so is the last…
While you don’t automate while chatting, DEVONthink certainly has AI-related commands in its scripting dictionary, with very good parameter support, and can do some very impressive things.

Well, I am indeed eating a bit of humble pie here. I did write these down as my initial understanding and motivation for writing the MCP server, but … I was wrong on some of them. My strongest motivation is/was driving AI from within the CLI so it can be combined with other CLI tools, and that point still survives, but I could have done a better job vetting my bullets in the repo. My apologies.

I have addressed these in the repo.


An interesting point you bring up: “AI is not meant to be an automaton”. My initial response was, hmm, Jim is wrong here, and then, hmm, perhaps he is right :slight_smile:. I think it can be both, and it indeed depends on what one is working on (as you said, some tasks are more efficiently done by ‘you’). Though I put it to you that having an LLM, in particular when working in the CLI where other tools can be called upon easily and integratedly (sorry, new word), changes how one works with documents. Though I still spend time in DT directly, I engage my docs increasingly via the LLM. These engagements may range from filtering to extracting and combining with online searches, all in one query. Sometimes I ask the LLM to tag docs (or perform other operations) as the query progresses. Sooh … these things are very much in flux.

No worries at all and thanks for correcting the repo notes :slight_smile:

I think it can be both, and it indeed depends on what one is working on (as you said, some tasks are more efficiently done by ‘you’).

Indeed, I agree this is very situational.

Sooh … these things are very much in flux.

Oh how true that is and getting even fluxier by the day it seems (hey, if you can coin your own word, so can I :wink: ).

IMHO, there’s a mix of curiosity, surprise, uncertainty, frustration, excitement, fear, etc. when it comes to AI. Where this all ultimately leads, no one really knows. So we don’t rush headlong into things nor do we drag our feet. We keep our heads up and eyes and ears open, choose our steps carefully, and move at a pace that’s sustainable.
