I have lots of audio and video clips in my DEVONthink database. I’d like to add lengthy text metadata to the clips for searching as well as browsing. As an example, I might add a transcript of a TED talk or an outline with time stamps for a podcast.
Right now, I’m embedding the AV file in a rich text file, but I’m wondering if there is a better way to do this. I have three goals:
1. Be able to search the text.
2. Be able to read the text as easily as I can read text notes (so the metadata can’t be hidden away in a place that requires extra steps to view).
3. Easy access to the original AV file for viewing, dragging into another app, etc. This is where the RTF approach is weakest.
What’s challenging is that the database has two entries for what I think of as a single logical item (the AV file and its metadata). It’s a paper cut rather than a major issue, but it isn’t as streamlined as I might hope. If I give both the text file and the AV file meaningful names and do a title search, both entries show up in the results, doubling the number of hits I have to skim. If I give a meaningful name only to the text annotation, then I have a bunch of video files with random names, and it’s easy to accidentally delete the wrong file.
If you are using the Annotation template (see Data > New from Template > Annotation), the name of the annotation file is the same as that of the referred file, just with "(Annotation)" appended. A search therefore returns hits for both documents. The annotation is also created in the same location as the original, which makes for easy grouping. In addition, a smart group for all Annotations is created automatically.