Interested in your scenarios / experiences around timeline related documents / knowledge (audio / video)

I will take a stab at this based on my own experiences and current perspective (which may evolve). Everything that is said about video below also pertains to audio.

DevonThink as the backend of the knowledge ecosystem

As a starting point I think we need to appreciate that DevonThink does a large number of things incredibly well. Beyond that, it offers additional functionality that can address many more specific use cases adequately, although there may be some third-party app that could address a given use case even better. A simple example here would be PDFs, where most editing functionality is available in DT but e.g. bookmarks require a third-party application. Finally, there are things it just can’t do, like editing a MindManager map.

The good thing is that we can simultaneously

  1. leverage DevonThink for its many strengths in database management and automation
  2. rely on it for those more (file-)specific use cases it does support
  3. use third-party applications for certain specialized / niche use cases that it doesn’t cover (via opening externally or indexing)

Quick side note: Whether officially supported or not, I just want to point out that in my experience there are no issues with importing and/or indexing large media libraries in DevonThink. My personal main database is upwards of 2.5TB now and running smoothly on an M1 MacBook Pro with 32GB. Even on a first-generation M1 with 16GB (at that time with a 1.5TB database) the performance was absolutely adequate. Everything is shallow synced to iPad / iPhone via Bonjour without issues.

What DT can and can’t do with video-based contents

From my perspective, working with video in DT sits somewhere between the second and third category, i.e. basic workflows are possible, which is great, but there are limits. For example, being able to create links to specific sections in a video and then add related notes is a great foundation. I typically use this when I need to jot down some quick ad-hoc notes for videos that are “standalone” and relatively short.

There are other cases in which the individual video is part of a collection where the whole is greater than the sum of its parts (e.g. a set of academic lectures that is part of an online course). Here, the question is how to really consolidate all this information in order to integrate it as actual understanding, ensure it’s possible to refer back to relevant sections quickly in the future, and make it practically useful.

When it comes to this second use case, there are certain limits to what DevonThink can do. Some examples:

  1. There is no way to excerpt a part of a video, i.e. to create shorter “clips” from the full video
  2. There is no way to consolidate excerpts from a video in video-based form for later review
  3. There isn’t any transcription functionality for video contents, so they aren’t searchable

Basically, I use MarginNote to address 1 & 2 and Otter.ai to address 3.
Otter.ai is an online service that transcribes videos accurately at a relatively low cost. I add transcribed text into the video item’s annotations in DevonThink so that it can be found in searches. Not much more to say on that, so let’s focus on MarginNote from here on.
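As an aside, the same idea can be scripted outside of DevonThink’s own annotation feature: a minimal Python sketch (the function name and workflow are my own illustration, not anything from DT or Otter.ai) that saves an exported transcript as a same-named Markdown sidecar file next to the video, so that an indexed folder makes the transcript text searchable alongside the video item.

```python
from pathlib import Path


def write_transcript_sidecar(video_path: str, transcript_text: str) -> Path:
    """Save a transcript as a Markdown file next to its video.

    The sidecar shares the video's base name (lecture-01.mp4 ->
    lecture-01.md), so the pairing stays obvious when both files
    show up in an indexed group.
    """
    video = Path(video_path)
    sidecar = video.with_suffix(".md")
    header = f"# Transcript: {video.name}\n\n"
    sidecar.write_text(header + transcript_text, encoding="utf-8")
    return sidecar


# Hypothetical usage with a transcript exported from Otter.ai:
# write_transcript_sidecar("lecture-01.mp4", exported_text)
```

Whether you prefer the sidecar-file route or pasting into the item’s annotations is mostly a matter of taste; both end up searchable.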


Using MarginNote for advanced analysis and consolidation of video-based contents

If you really need to slice and dice and consolidate complex, content-dense videos I cannot recommend MarginNote enough. As a warning, it does have a unique logic, an initially confusing/overwhelming user interface and a very steep learning curve. But enduring the early frustrations has been absolutely worth it for me, especially for its benefits working with video-based contents.

The principle in MarginNote is that you have mindmap notebooks, each of which can be linked to as many documents and media files as you like.

I press play on a given video opened in the right-hand document view and when there is a particularly relevant section, I just mark that section with the handle bars. Now a new node in the mindmap is created instantly, which contains the excerpted video section. Clicking this node will play back the excerpted section. More video excerpts can be added to this node, as well as text-based comments. Video excerpts combined in a mindmap node can now be played back in sequence by selecting the node. Also, whenever a node in the mindmap is clicked, the main document view (imagine a split screen) automatically jumps back to the respective video section or paragraph in the source document or media file, so the context for the excerpt is always accessible.

There are all the typical benefits of structuring content snippets in mindmaps (including video-based excerpts without limitations), such as connecting nodes through visual cross-links and bringing them into a tree structure with nodes and subnodes.

Another extraordinarily useful feature is the ability to create so-called “reference nodes”. To lead into this, imagine you’ve created a complete map of a course lecture’s contents based on video excerpts, each under a main node for lecture 1, lecture 2, lecture 3 etc. Now comes the time to consolidate clusters of related excerpts from all the different lectures, which are dispersed across the mindmap. One great way is to select the respective nodes (containing the video excerpts), copy them all as references, scroll to a free part of the infinite canvas, and paste them there. Now they can again be organized and connected as needed, but at the level of topic clusters (or whatever other grouping logic you wish to use that is different from the main mindmap). Changes made to the reference nodes are synced back to the originals automatically, while the structure and organization of the original map remain untouched. The reference nodes also still link back to the original source from which they were excerpted, which remains instantly accessible.

While this is just a glimpse into how I use MarginNote to work with video, I hope this can inspire some further discussion.

Tips on the general setup and using MarginNote in combination with DevonThink

In general, I would recommend not using iCloud sync to synchronize videos managed in MarginNote across devices. You will encounter sync issues that are frustratingly difficult to diagnose / solve / reset. Instead, use the local sync (similar to Bonjour), which is far quicker and more reliable for large media files. Note that iCloud sync does work reliably for regular documents in MarginNote, just not videos (with the usual caveats that apply to iCloud in general).

Secondly, MarginNote saves all documents and media files in a folder under User/Documents/MarginNote 3 that can be indexed by DevonThink. However, after indexing there are some unique behaviors to be aware of:

  • MN doesn’t allow renaming or deleting files within the folder via the macOS Finder, and since DevonThink integrates with the Finder, deleting and renaming is also not possible from within DevonThink. More specifically, such changes get automatically reversed or lead to discrepancies. So, as a general rule, such changes must be made from within MarginNote’s own document management. If this rule is followed, there are no issues.
  • Importantly, (custom) metadata and tags added to an indexed MarginNote item in DevonThink are retained as expected.
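Before indexing a large MarginNote folder into DevonThink, it can help to know what’s actually in there. A small Python sketch (the folder path is the default mentioned above; adjust for your setup) that counts files by extension and totals their size:

```python
from collections import Counter
from pathlib import Path

# Default MarginNote 3 document folder; an assumption, adjust as needed.
MN_FOLDER = Path.home() / "Documents" / "MarginNote 3"


def audit_folder(root: Path) -> tuple[Counter, int]:
    """Count files by extension and sum their sizes in bytes."""
    counts: Counter = Counter()
    total_bytes = 0
    for f in root.rglob("*"):
        if f.is_file():
            counts[f.suffix.lower() or "(none)"] += 1
            total_bytes += f.stat().st_size
    return counts, total_bytes


# counts, size = audit_folder(MN_FOLDER)
# print(counts.most_common(), f"{size / 1e9:.1f} GB")
```

This gives a quick sanity check on how much media DevonThink will be indexing before you commit to it.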

Final Thoughts

Let me end by emphasizing that in my view this is not a question of application X versus DevonThink. It’s a matter of leveraging the strengths of each application for those use cases where they excel the most, based on individual needs. This type of optionality is made possible in the first place by DT’s endless flexibility, as well as advanced indexing and automation capabilities.
