DEVONthink for academic research - ADVICE


I am new to DEVONthink. I am halfway through the first year of a PhD in the humanities, and DEVONthink seems like it has so much to offer in terms of general organisation and productivity, as well as the process of sourcing, reading, and synthesising all kinds of research materials, from academic books and articles to scans of archive documents and audiovisual sources. I’m keen to really commit to this software, not least because of this platform, which seems to be really responsive and supportive, and full of great discussion. Would anyone using DEVONthink for this kind of project management be willing to share their setup and how they use DEVONthink on a day-to-day basis, and to recommend other software that they have successfully integrated into their process?

Also, more specifically, I began by indexing some materials into a database, but I am now wondering whether it would be better to import them. Does anyone have any advice on this? The indexed files are local, but in a folder synced with OneDrive - this makes it easy to access documents on other devices (although I realise DEVONthink To Go would also allow me to do this).

Sorry for the ramble, and thanks in advance.

  • I have a MacBook Pro, an iPad, and an iPhone
  • I currently use Zotero for reference management

See e.g. An academic workflow in DT: note-taking, references/bibliographies, and knowledge organization


Excellent! Thank you!

I’m an academic in the humanities and went all-in on DT some years ago. I’ve never looked back; it’s an outstanding research tool and my go-to brain for nearly everything now. I haven’t automated a great deal because I’ve never learned AppleScript, and at this stage it feels like the many (exciting, beautiful) integrations and scripts that people use to customise interaction between apps would take me more time to set up than I would win back in the end. I may be wrong. Anyway, some details of my setup:

  • DT3 contains all my research materials in a series of DBs: ‘Research’ (c. 40GB); ‘Images’ (c. 30GB, many taken from manuscripts, and kept separate largely due to file size and syncing needs); and another I’ve ambitiously called ‘Zettelkasten’, following that broad philosophy of note-taking, though to be honest it never really took off.
  • Reading notes are in many formats: handwritten (then scanned with my iPhone and saved as PDFs) and typed. I’m forever streamlining this system in order to find stuff later – most importantly, each note is saved in a folder (‘Reading notes’) with the same file name as the original source PDF (Author - date - Title). At times I create links back from note to PDF, but I don’t always bother.
  • I am now a heavy user of replicants. This allows me to keep a central library of ‘Primary’ and ‘Secondary’ lit in a single place (plus further such documents and notes, e.g. I run separate folders of notes that are essentially manuscript descriptions and indices), and to replicate any relevant material to dedicated project folders (largely, these map onto writing projects, e.g. articles, books, chapters). It is this feature alone that sold DT to me.
  • Because of my reliance on replicants I import everything. I dipped my toes in first, indexing a lot, but it just didn’t work as well. I trust DT to keep my data safe, and of course, it’s a breeze to get it all out again.
  • I sometimes create a top-level bibliography for new projects in such dedicated folders: an RTF or MD file that contains the bibl reference (copied from Zotero) and active links to the primary and secondary sources, and reading notes (stored in the central DB).
  • I use Zotero for bibliographical management, but without any of the automation suggested in the thread cgrunenberg linked. I simply keep it running alongside and don’t use it to extract a bibliography until the formatting stage (see below). I don’t use any ‘watched folders’ – rather, when I download articles into Zotero, they are renamed with ZotFile and then I use the ‘share’ sheet to push a copy into the DT database and file it there. I do this sporadically enough not to have invested in automating. See above!
  • My writing workflow centres on Scrivener for any long-form writing: this lives on the left side of a 24" monitor, with the dedicated project folder in DT3 open on the right-hand side. I might run another window on a separate screen with the entire ‘Research’ DB, which contains the rest of my materials, for global searches.
  • Bibl refs inserted into Scrivener drafts are my own shorthand placeholders (e.g. {Smith 2009, 123}). I compile the Scriv draft only at the very end of the writing process and then key in the references using the Zotero plug-in (with Word). I know that you can also scan your RTF drafts for automatic insertion, but I don’t use it. This is time-consuming and could be improved.
  • I do shorter note-taking in Ulysses, or straight into DT by creating new notes in RTF. I am getting going with MD format now, having been persuaded on this forum that plain text is really the way forward for future-proofing and inter-device exchange, esp. to iOS.
  • I’ve more recently got into mindmapping: MindNode works well, and DT handles the file format okay. I also export from the native format to MD, since the mindmaps are visually represented in DT but not searchable.
  • I run DT2G on the iPad, which syncs selected chunks of the ‘Research’ database. It largely functions as my e-reader. I annotate using DT2G’s native annotation tools (highlighting, pens, etc) which are a little clunky but serve my needs perfectly.
  • The ‘Reading List’ metadata offered by DT3 seems cool but doesn’t sync to iOS at present (I think). So I run two smart groups which I use ALL THE TIME, ‘To Read Teaching’ and ‘To Read Research’. Each looks for documents with tags of the same name in the DB and, if found, creates a replicant of those docs in a separate folder. It’s only these folders that I’ve set to ‘always download’ on my iPad. This really saves on having to sync an entire 40GB database (I don’t have the storage).
  • Using DT2G as a note-taking tool on iOS is still awkward, though the devs have hinted that this feature is in active development. MD is better than RTF (which isn’t supported, I think). This is the main reason that Ulysses is still in play for me. But I largely use the iPad as a tool to ‘consume’ (read, annotate) data, largely PDFs; for ‘production’ (any sort of writing and organisation), my MBP and Mac Mini do virtually all the work.
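The shorthand citation placeholders mentioned above (e.g. {Smith 2009, 123}) also lend themselves to a quick sanity check before the final compile – for instance, listing every placeholder so none is missed when keying in references. A minimal sketch in Python; the pattern and function name are my own, purely illustrative, not anything DT or Scrivener provides:

```python
import re

# Matches shorthand placeholders such as {Smith 2009, 123} or {Jones 2015}
PLACEHOLDER = re.compile(r"\{([A-Z][\w-]+) (\d{4})(?:, ([\divx-]+))?\}")

def list_placeholders(draft_text):
    """Return (author, year, pages) tuples for every placeholder in a draft."""
    return [m.groups() for m in PLACEHOLDER.finditer(draft_text)]

sample = "As argued {Smith 2009, 123}, and later {Jones 2015}, the point stands."
for author, year, pages in list_placeholders(sample):
    print(author, year, pages)
```

Running this over an exported draft would give a checklist to tick off against the Zotero library before submission.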
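The shared-filename convention between reading notes and source PDFs described above can likewise be exploited in a script, e.g. to check which sources already have notes. A rough sketch, operating on plain lists of filenames (the names and function are hypothetical; nothing here is a DT API):

```python
from pathlib import PurePath

notes = ["Smith - 2009 - On Archives.md", "Jones - 2015 - Marginalia.md"]
sources = ["Smith - 2009 - On Archives.pdf", "Brown - 2001 - Commonplace Books.pdf"]

def pair_notes_with_sources(notes, sources):
    """Match reading notes to source PDFs that share a filename stem."""
    source_stems = {PurePath(s).stem for s in sources}
    return {PurePath(n).stem: n for n in notes if PurePath(n).stem in source_stems}

# Only 'Smith - 2009 - On Archives' has a matching source in this sample.
print(pair_notes_with_sources(notes, sources))
```

The same logic, inverted, would list sources that still lack a reading note.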

This is a broad overview of my workflow with DT3 proudly at the centre of it all. I’d be happy to elaborate if you have any questions!


Since version 3, smart rules can automate a lot of things (and the next release even more), so scripting is actually required less often these days.


Thank you for such a detailed response. This is a brilliant insight - we don’t talk about these everyday working practices enough!

I’m going to dive straight in and try a few things out, using your points as a guide. I think I’ll continue indexing for the time being, as it seems pretty straightforward to fully import indexed material later without losing any of the metadata (still a bit wary - although DT does seem pretty robust on that front. Have you ever experienced any problems with retrieving your data?).

Thank you again!


You are welcome! Agreed – workflows can be mystifying, though this forum is excellent (there are several humanities and social sciences academics lurking and in plain sight, I think).

Never. You might know that the DBs themselves are so-called ‘packages’ (right-click and ‘Show Package Contents’ will reveal the structure, but don’t meddle with any of it!), so data retrieval in a worst-case scenario is never an issue. In daily use you can simply export a single file, or everything at once, from a DB using the ‘export’ commands or by dragging into the Finder or an external folder. It’s always been rock-solid for me.

This is obvious, but do invest in a sensible backup strategy (there are good posts on this forum) – DT won’t eat your data, but if you don’t keep backups you could still lose it all when an HD fails, if that is your single site of storage. My research data has been with me since the first year of my PhD, and some of it still comes in useful many (many!) years later.

Good luck with the PhD!


Excellent! Thanks again!


Hi there,

I’m new to DT but have a similar outlook on using the software. I’m right in the middle of collecting way too much data as scanned PDFs, books, diaries, photographs, and eventually video files for my dissertation, and a friend recommended DT and DT2GO. I’m sold, but I have been waiting to upgrade my MacBook Pro, since my aging one has neither the storage nor the RAM to handle it.

So, all that to ask: what do you all suggest, having set up your own databases? Would 16GB of RAM suffice to run this kind of humanities dissertation database, or, since I’m upgrading anyway, should I invest now in 32? That feels like too much, but that’s coming from a failed foray at setting up a database elsewhere and having it be a memory hog that crashed my 2013 MBP with 4GB of RAM. That was my bad, since I didn’t know that some of the graphics features were just too much and would send my computer into a tailspin.

I’m stuck between investing in more RAM and more storage. Maybe 2TB of storage would be overkill, because I could set up external backups, but I’ve been having a rough time for the last six months and keep wondering what’s the best move going forward.

I appreciate any feedback!


I actually asked this question some months back, and got some engaged responses: DT3 large database performance versus new hardware choices.

I ended up with the new Mac Mini with 16GB of RAM, which flies along. I also got a refurbished 2015 MBP with 16GB, and this has unexpectedly become my main machine since the lockdown and working from home. I’ve never found this golden oldie limiting for my work, and DT uses comparatively moderate system resources.

(I’m also recording and encoding video for my teaching, and there it’s another matter: the MBP fans sound like the whole machine is taking flight, with the processor pushed to the max – all just to compress a quick MOV to MP4.)

Your mileage may vary (and more is always better if you can afford it) – but I’d say 16GB of RAM is more than sufficient. If you are planning on editing video files rather than just storing them, perhaps err on the safe side. Others will be better equipped than I am to advise on using video in DT, as my DBs are all text and static images.


16GB of RAM would be sufficient, but 16GB is not more than sufficient. My wife just bought a 21.5” iMac at Costco, and I told her the 8GB model was a good choice, as I intended to upgrade it with 32GB. My mistake: I knew the 27” was user-upgradable, but I didn’t research enough to learn that the 21.5” is not. The darn thing will hardly boot up in less than 3 minutes, and just launching apps is painful. I tried to exchange it for the 27”, but Costco is out of them. Apple should be embarrassed to sell even an entry-level Mac with 8GB of RAM.


Apple should be embarrassed to sell even an entry level Mac with 8GB of RAM.

Or a 128GB hard drive :roll_eyes:


Thank you for the input. I wouldn’t mind going the 32GB RAM route; I just can’t comfortably afford both that upgrade and the 2TB of storage, because Apple and its damn accessory needs mean that I’d go way over budget. But I suspect that an external drive will be a good addition in time. Or at least I hope so.

This echoes a lot of mine. I would also endorse the use of replicants and importing over indexing. I used to index but always worried that the link would break.

DT3 works best when you have separate databases for separate projects, but those projects are in themselves comprehensive. The more detailed, thorough, and structured they are, the more useful DT3 is as an adjunct brain. I’ve found that the structure evolved over time, and my initial forays were eventually remapped as research progressed. It parallels your understanding in a way – but don’t worry about that; get in there and stick with it. It can take a while, even years, but it’s really worth it.

There are other tools, like the supercharged mind-mapping tool Tinderbox, which can link to items in DT3 and set up other views and ways of interacting. The rather marvellous Beck Tench demonstrates a detailed use of this over a series of videos.


Just a curious question - isn’t the above the reason we need an indexed folder (of literature) if projects share the same or a similar pool of literature? Importing articles into different databases may (or will) duplicate the same material. Perhaps the upside of importing is that highlights in the same PDFs can differ between projects?

The second curious question is for all the experienced academics, regarding the recall of knowledge.
I assume that we all take notes after reviewing each journal (or other) article, by different methods. Some may use text highlights, annotations, or the comment field directly in the PDF; some may take one main note for each article; some may take many snippets for each article (me); or, as mentioned above, some may take handwritten notes on paper or use an Apple Pencil in a notes app.
I’m curious to know what methods different people use to recall their knowledge through these notes. For example, searching by keywords, tags, or grouping?

Thanks for sharing.


There are as many ways to use DT3 as there are users, so just consider this food for thought as you experiment.

Should you have a separate database for each project? Maybe yes, if you have a half-dozen or so “projects” to work on long-term. But if a “project” is a publication you are writing or a client you are working with, and you have hundreds of such “projects”, then you might well want to put all projects of a given type in one database, such as “active publications” vs. “completed publications” vs. “active consulting cases” vs. “completed consulting cases”.

As for a database of academic literature: for me at least, I have one database with all the academic literature I want to read or keep on file. It is fairly large because it includes RSS feeds which automatically import new articles regularly. I keep one database of academic articles for all disciplines - because, as you say, you can never predict when a project will cross disciplines. When I start a new project that will require referencing academic literature, I create a Group in my academic literature database and then insert an Item Link to that group in the new project’s database. That way the project database is “complete”, but it might achieve that completeness through a link to a group in a different database.


This is a wonderful question. I’ve just been rereading an essay on historical research method that I love:

What really speaks to me here is something I’ve never managed to get going: a system of two related sets of notes. One maps onto the source, respecting the chronology of argument and flow of ideas; the other organises these smaller sets of ideas into themed/headed snippets (in line with the early modern practice of commonplacing under themed heads that Thomas describes earlier in the article). As Thomas says, he wishes he had kept duplicate notes, but many have been cut up and fill an entire basement. Enter DT as our basement!

At this point in my academic life I regret not having invested much in the second, snippet style of note-taking (I’m still thinking through how useful it would have been, though looking at it now is also a reflection of new projects and new research questions). I’ve tried getting a Zettelkasten going but didn’t follow it through. Yet Thomas seems absolutely right that, in order to generate ideas, it’s critical you are able to organise your snippets by theme/subject, and reorganise them, until patterns start to emerge and a narrative presents itself. To him the hard intellectual labour lies in the process of arrangement, and the writing follows more naturally.

I’m currently overhauling some of my own note-taking practice in DT. I will keep producing the reading notes per source/publication/work (book, article, chapter, primary source) – but I’m looking at the best way to extract themed snippets. Adding many subject tags to notes per source doesn’t appeal to me, since there would be way too much irrelevant noise in long documents. If I come back to 1,000 words two years later, I just want what is relevant, without having to re-engage with the rest. Unless of course I’m reminding myself of the arc of an argument or the evolution of ideas, in which case I do want the first kind of note.

Happy to report back once I’ve landed on a good solution!


I have the same needs for both chronological perspective and thematic perspective.

DT is great for the thematic perspective; I make heavy use of replication and group tags.

All my reading highlights end up exported as Markdown or PDF excerpts, which I can categorize more precisely than the document itself.

I also categorize my thoughts, open questions, observations, and notes in the same way, which makes every topic group a very rich bundle of sources - or, in other words, the sum of my knowledge and known sources on the topic.

What I miss is a better chronological view: a way to scroll down all my notes/thoughts and scan their content visually.
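In the meantime, a chronological digest can be faked outside DT with a few lines of scripting over exported excerpts. A hedged sketch in Python; the note tuples, field layout, and function name are my own assumptions (in practice the dates and topics might come from the metadata of exported Markdown excerpts), not anything DT produces:

```python
from datetime import date

# Hypothetical atomic notes: (creation date, topic, text).
notes = [
    (date(2021, 3, 2), "commonplacing", "Thomas on early modern note-taking."),
    (date(2020, 11, 5), "zettelkasten", "Atomic notes should be self-contained."),
    (date(2021, 1, 18), "commonplacing", "Arrangement is the hard intellectual labour."),
]

def chronological_digest(notes):
    """Return one Markdown string with notes sorted oldest-first."""
    lines = []
    for when, topic, text in sorted(notes):  # tuples sort by date first
        lines.append(f"## {when.isoformat()} - {topic}\n{text}")
    return "\n\n".join(lines)

print(chronological_digest(notes))
```

The resulting single Markdown file can then be read top to bottom in DT as the missing chronological view.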


@SebMacV & @benoit.pointet, I have similar goals w.r.t. an academic reading/note-taking workflow. Having atomic notes that are self-contained is key, so that these knowledge elements can be filtered and gathered independently, reused, and arranged in new ways. I’m currently trying to develop a Mac app that supports this workflow. I plan to deeply integrate it with DEVONthink so that users can use DT’s unique features to further work with their notes.

Here’s a short demo screencast that shows a new note being created from a PDF highlight which is then tagged, rated, labeled, commented on and cross-linked:

More info is available in these threads:


That preview is very, very interesting - I will definitely want to try this once DT3 integration is added; this could be a terrific piece of software.