DEVONthink for academic research - ADVICE

Since version 3, smart rules can automate a lot of things (and the next release even more), so scripting is actually required less often these days.

4 Likes

Thank you for such a detailed response. This is a brilliant insight - we don’t talk about these everyday working practices enough!

I’m going to dive straight in and try a few things out, using your points as a guide. I think I’ll continue indexing for the time being, as it seems pretty straightforward to fully import indexed material without losing any of the metadata (still a bit wary, although DT does seem pretty robust on that front. Have you ever experienced any problems with retrieving your data?).

Thank you again!

1 Like

You are welcome! Agreed – workflows can be mystifying, though this forum is excellent (there are several humanities and social sciences academics lurking and in plain sight, I think).

Never. You might know that the DBs themselves are so-called ‘packages’ (right-click and ‘Show Package Contents’ will reveal the structure, but don’t meddle with any of it!), so data retrieval in a worst-case scenario is never an issue. In daily use you can simply export a single file or everything at once from a DB using the ‘export’ commands, or by dragging onto the Finder or an external folder. It’s always been rock-solid for me.

This is obvious, but do invest in a sensible backup strategy (there are good posts on this forum) – DT won’t eat your data, but if you don’t keep backups you could still lose it all when a drive fails, if that is your single site of storage. My research data has been with me since the first year of my PhD, and some of it still comes in useful many (many!) years later.

Good luck with the PhD!

4 Likes

Excellent! Thanks again!

Richard

Hi there,

I’m new to DT but have a similar outlook for using the software. I’m right in the middle of collecting way too much data as scanned PDFs, books, diaries, photographs, and eventually video files for my dissertation, and a friend recommended DT and DT2GO. I’m sold, but have been waiting to upgrade my MacBook Pro, since my aging one just has no storage room or sufficient RAM to handle it.

So, all that to ask: what do you all suggest, having set up your own databases? Would 16GB of RAM suffice to run this kind of humanities dissertation database, or, since I’m upgrading anyway, should I invest now in 32? That feels like too much, but that’s coming from a failed foray at setting up a database elsewhere and having it be a memory hog that crashed my 2013 MBP with 4GB of RAM. That was my bad, since I didn’t know that some of the graphics features were just too much and would send my computer into a tailspin.

I’m stuck between investing in more RAM or more storage. Maybe 2TB of storage would be overkill because I could set up external backups, but I’ve been having a rough time for the last six months and keep wondering what’s the best move going forward.

I appreciate any feedback!

2 Likes

I actually asked this question some months back, and got some engaged responses: DT3 large database performance versus new hardware choices.

I’ve ended up with the new Mac mini with 16GB of RAM, which flies along. I also got a refurbished 2015 MBP with 16GB, and this has unexpectedly become my main machine since the lockdown and working from home. I’ve never found this golden oldie limiting for my work, and DT uses comparatively moderate system resources.

(I’m also recording and encoding video for my teaching, and there it’s another matter: the MBP fans sound like the whole machine is taking flight, with the processor pushed to the max – all just to compress a quick .mov to MP4.)

Your mileage may vary (and more is always better if you can afford it) – but I’d say 16GB of RAM is more than sufficient. If you are planning on editing video files rather than just storing them, perhaps err on the safe side. Others will be better equipped than I am to advise on using video in DT, as my DBs are all text and static images.

3 Likes

16GB of RAM would be sufficient, but 16GB is not more than sufficient. My wife just bought a 21.5” iMac at Costco, and I told her the 8GB model was a good choice, as I intended to upgrade it to 32GB. My mistake: I knew the 27” was user-upgradable, but I didn’t do the research to learn that the 21.5” is not. The darn thing will hardly boot up in less than 3 minutes, and just launching Mail.app is painful. I tried to exchange it for the 27”, and Costco is out of them. Apple should be embarrassed to sell even an entry level Mac with 8GB of RAM.

4 Likes

Apple should be embarrassed to sell even an entry level Mac with 8GB of RAM.

Or a 128GB hard drive :roll_eyes:

6 Likes

Thank you for the input. I wouldn’t mind going the 32GB RAM route; I just can’t comfortably afford both that upgrade and the 2TB of storage, because Apple and its damn accessory needs mean that I’d go way over budget. But I suspect that an external drive will be a good addition in time. Or at least I hope so.

This echoes a lot of mine. I would also endorse the use of replicants, and importing over indexing. I used to index but always worried that the links would break.

DT3 works best when you have separate databases for separate projects, but those projects are in themselves comprehensive. The more detailed, thorough, and structured the database, the more useful it is as an adjunct brain. I’ve found that the structure evolved over time, and my initial forays were eventually remapped as the research progressed – paralleling my understanding, in a way. But don’t worry about that; get in there and stick with it. It can take a while, even years, but it’s really worth it.

There are other tools, like the supercharged mind-mapping tool Tinderbox, which can link to items in DT3 and set up other views and ways of interacting. The rather marvellous Beck Tench demonstrates a detailed use of this over a series of videos: https://m.youtube.com/watch?v=IOWLOMGFAEw

5 Likes

Just a curious question – isn’t the above the reason we need an indexed folder (of literature) if projects share the same or a similar pool of literature? Importing articles into different databases may (or will) duplicate the same material. Perhaps the plus side of importing is that the highlighted points in the same PDFs can be different for different projects?

My second curious question is for all experienced academics, regarding the recall of knowledge.
I assume that we all take notes after reviewing each journal (or other) article, and by different methods. Some may use text highlights, annotations, or the comment field directly in the PDF; some may take one main note for each article; some may take many snippets for each article (me); or, as mentioned above, some may take handwritten notes on paper or use an Apple Pencil in a different notes app.
I’m curious to know what methods different people use to recall their knowledge through these notes. For example, searching by keywords, tags, or grouping?

Thanks for sharing.

1 Like

There are as many ways to use DT3 as there are users, so just consider this food for thought as you experiment.

Should you have a separate database for each project? Maybe yes, if you have a half-dozen or so “projects” to work on long-term. But if a “project” is a publication you are writing or a client you are working with, and you have hundreds of such “projects”, then you might well want to put all projects of a given type in one database, such as “active publications” vs “completed publications” vs “active consulting cases” vs “completed consulting cases”.

As for a database of academic literature, for me at least, I have one database with all the academic literature I want to read or keep on file. It is fairly large because it includes RSS feeds which automatically import new articles regularly. I keep one database of academic articles for all disciplines, because, as you say, you never can predict when a project will cross disciplines. When I start a new project that will require referencing academic literature, I create a Group in my academic literature database and then insert an Item Link to that group in my new project’s database. That way the project database is “complete”, but might achieve that completeness through a link to a group in a different database.

4 Likes

This is a wonderful question. I’ve just been rereading an essay on historical research method that I love: https://www.lrb.co.uk/the-paper/v32/n11/keith-thomas/diary.

What really speaks to me here is something I’ve never managed to get going: a system of two related sets of notes – one mapping onto the source and respecting the chronology of argument and flow of ideas, and another that organises these smaller sets of ideas into themed/headed snippets (in line with the early modern practice of commonplacing under themed heads that Thomas describes earlier in the article). As Thomas says, he wishes he had kept duplicate notes, but many have been cut up and now fill an entire basement. Enter DT as our basement!

At this point in my academic life I regret not having invested much in the second, snippet style of note-taking (I’m still thinking through how useful it would have been, but looking at it now is also a reflection of new projects and new research questions). I’ve tried getting a Zettelkasten going but didn’t follow it through. Yet Thomas seems absolutely right that in order to generate ideas, it’s critical that you are able to organise your snippets by theme/subject, and reorganise them, until patterns start to emerge and a narrative presents itself. To him the hard intellectual labour lies in the process of arrangement, and the writing follows more naturally.

I’m currently overhauling some of my own note-taking practice in DT. I will keep producing the reading notes per source/publication/work (book, article, chapter, primary source) – but I’m looking at the best way to extract themed snippets. Adding many subject tags to the notes per source doesn’t appeal to me, since there would be way too much irrelevant noise in long documents. If I come back to 1,000 words two years later, I just want what is relevant, without having to re-engage with the rest – unless of course I’m reminding myself of the arc of an argument or the evolution of ideas, in which case I do want the first kind of note.

Happy to report back once I’ve landed on a good solution!

9 Likes

I have the same needs for both chronological perspective and thematic perspective.

DT is great for the thematic perspective; I make heavy use of replication and group tags.

All my reading highlights end up exported as Markdown or PDF excerpts, which I can categorize more precisely than the document itself.

I also categorize my thoughts, open questions, observations, notes, etc. in the same way, which makes every topic group a very rich bundle of sources – or, in other words, the sum of my knowledge and known sources on the topic.

What I miss is a better chronological view – a way to scroll down through all my notes/thoughts and scan their content visually.

3 Likes

@SebMacV & @benoit.pointet, I have similar goals w.r.t. an academic reading / note-taking workflow. Having atomic notes that are self-contained is key, so that these knowledge elements can be filtered and gathered independently, reused, and arranged in new ways. With https://keypoints.app I’m currently trying to develop a Mac app that supports this workflow. I plan to integrate it deeply with DEVONthink so that users can use DT’s unique features to further work with their notes.

Here’s a short demo screencast that shows a new note being created from a PDF highlight, which is then tagged, rated, labeled, commented on, and cross-linked.

More info is available in these threads:

https://www.outlinersoftware.com/topics/viewt/8814/35

4 Likes

That preview is very, very interesting – I will definitely want to try this once DT3 integration is added; this could be a terrific piece of software.

2 Likes

Thanks @msteffens for pointing that out!
Also interested to see that integration come to life.
Though I would favor staying in one app as much as possible, that one being DT atm.

Great question, @ngan! It’s often taken as a given that we all develop an idiosyncratic process/workflow built around the requirements of our own fields/projects/methodologies. While this is true, by not talking about them (and asking these kinds of questions!), we’re missing out on important opportunities to make our own processes work more effectively. In fact, it was while searching for answers to a very similar question that I first stumbled across DT.

@SebMacV, I had never come across Zettelkasten before you mentioned it earlier in this thread – and, of course, a simple Google search took me down a very deep rabbit hole. It seems like a pretty solid technique, and one that DT seems suited to. What was your experience of it?

1 Like

Does the iMac have an SSD?

Thanks to @rkaplan, @SebMacV, @benoit.pointet, and @richard.d for sharing your experiences.

It seems that the search for a suitable method to recall knowledge is an ongoing process/challenge. I have been experimenting with a custom script for taking note snippets on a journal article or book. Using the script, those snippets are consolidated under their original source (it is almost as if each article/book has its own shoebox of index cards), and all snippets from different sources can be filtered and recombined into a consolidated summary by way of tag selection or keyword searching. I have found this method quite effective for me.
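In case it helps to make the idea concrete, here is a minimal sketch of the filter-and-recombine step outside of DT. This is only an illustration, not my actual DEVONthink script: it assumes the snippets have been exported as small Markdown files that each begin with hypothetical “Source:” and “Tags:” lines (those field names and the folder layout are made up for the example).

```python
# Illustrative sketch only, not the actual DT script.
# Assumed snippet file layout:
#
#   Source: Thomas 2010, "Diary", LRB
#   Tags: note-taking, commonplacing
#
#   The snippet text follows after a blank line ...
from pathlib import Path
from collections import defaultdict

def load_snippets(folder):
    """Read every exported snippet file and return a list of dicts."""
    snippets = []
    for path in Path(folder).glob("*.md"):
        lines = path.read_text(encoding="utf-8").splitlines()
        note = {"source": "", "tags": set(), "body": ""}
        body_start = 0
        for i, line in enumerate(lines):
            if line.startswith("Source:"):
                note["source"] = line[len("Source:"):].strip()
            elif line.startswith("Tags:"):
                note["tags"] = {t.strip() for t in line[len("Tags:"):].split(",") if t.strip()}
            elif not line.strip():
                body_start = i + 1   # blank line ends the header
                break
        note["body"] = "\n".join(lines[body_start:]).strip()
        snippets.append(note)
    return snippets

def summary_for_tags(snippets, wanted):
    """Recombine snippets carrying all wanted tags, grouped under their source."""
    by_source = defaultdict(list)
    for s in snippets:
        if set(wanted) <= s["tags"]:
            by_source[s["source"]].append(s["body"])
    parts = []
    for source, bodies in sorted(by_source.items()):
        parts.append(f"## {source}\n" + "\n\n".join(bodies))
    return "\n\n".join(parts)

if __name__ == "__main__":
    notes = load_snippets("snippets")            # hypothetical export folder
    print(summary_for_tags(notes, ["method"]))   # consolidated Markdown summary
```

Running something like this over an export folder produces one consolidated Markdown summary, grouped by source, for whichever tag combination you ask for – which is essentially what the script does for me inside the database.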

However, while I believe that I have largely solved the puzzle from a technical aspect, I have also found that the real challenge is methodological. Even when I have a highly organised superset of shoeboxes, and the index cards in each shoebox can be recalled effortlessly by any combination of tags and search, the challenge is how the tags and keywords should be designed. In other words, the system of knowledge retrieval is only as good as the way a researcher categorises knowledge in their mind. Here, I think we are potentially in a catch-22 situation: we cannot have established a well-thought-out categorisation of knowledge in our minds early in our academic careers, and it would be a rather impossible task to rearrange/redo our knowledge/notes once we have established such a categorisation. Perhaps AI will take over the task of knowledge retrieval for us in the near future, while humans can keep focusing on seeking the right question and the methodology for answering it.

Just sharing my 5 cents.

4 Likes