Interested in your scenarios / experiences around timeline-related documents / knowledge (audio / video)

This is quite succinct, @AW2307 – thanks for that great serve!

I think it is a great initial post, as you really lay out the conditions of DT as a (central) part of a ‘knowledge ecosystem’ (a term I quite like and use myself) quite convincingly, and also provide a really relevant framing for any work around time-based media documents. It puts things into perspective and sets out the landscape well!

– just to add my first thoughts:

DEVONthink as the backend of the knowledge ecosystem

I am still thinking about whether I would also call DT my ‘backend’ of the knowledge ecosystem. Surely it does a lot, but it is rather my intelligent go-to keeper and sorter, set up to work well with my file system, blending my own stuff with found materials (collected from the web), and also allowing some original input (mainly MD notes, in my case). So my first associations are ‘clever bucket’, ‘flat database of sorts’, ‘trickster filing cabinet’, or ‘hyper-intelligent library’… I couldn’t yet say it’s my ‘universal backend’, as it does not really provide some things that I regard as central to my ‘knowledge ecosystem’, e.g. a good representation of my core knowledge structures: it misses schematic aspects (maps, graphs) and, more generally, all kinds of ways to order elements visually or creatively, or to lay them out spatially (in the end, I think knowledge lives in ‘cognitive spaces’). This might be nit-picky and just about the metaphorical meaning of ‘backend’; or maybe it speaks to different cognitive styles… And the great thing is that DT allows for myriad ways to make it a central ‘machine’ in one’s system… but I am sure we wouldn’t disagree here…

But to add to your thoughts about what DT actually provides – and also to start talking more about my interest in getting AV documents into the DT equation / ecosystem – what DT really provides to my knowledge ecosystem is:

  1. automatic relations to other relevant content / documents, which DT provides in several ways (proposing filing locations, showing linguistically / semantically similar documents, giving interactive concordances of document sets, etc.)

  2. intelligent ‘links’ between different filetypes and, in some ways, across different modes of media / documents (e.g. set up the right way, one can bring in image metadata and DT will find similar texts, graphic files etc.; turning highlights into separate text files would be another example of this mode transition, as would be the different ways in which tags, metadata, and inline text (e.g. hashtags) can be set to relate to each other, and further be built into special mechanisms of ‘intelligent relationing’ at my fingertips; this also goes for filtering, where tags, for example, allow drilling down semantically within a set of very heterogeneous filetypes and media modes…)

  – this also includes allowing some kind of ‘translation’ (conversion) between these different modes / filetypes, via DT’s strong conversion capabilities; currently it allows producing personal indexes of videos (via time-based references) as text files, living alongside and relating to the video files (see the sketch after this list); and it allows previewing very heterogeneous sets, and doing so in very different arrangements (via replicants)

  3. a very own kind of ‘hyper-index’: where Pony Notebooks had automatic indexes, DT has: a most powerful tag system (hierarchical; easily administrable; allowing for drill-down; working across DBs; transparent to macOS tags; convertible to other metadata systems like IPTC, folder structures etc.); a powerful concordance; mentions; annotations that can reference back (this also goes for timestamps in videos!); and TOCs (in the case of video, that is the YT time-marker TOC). On top of that, parts of this are intelligent to some degree (linguistic analysis) and intermodal (so they work to a large degree across filetypes and ‘modes’), and there is a very fine-grained search language that can leverage (mostly) all of that… and probably some other trick ponies, all of them amazing in themselves… and potentially of interest for working with AV media (especially in a mix with other kinds of documents)
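To make the ‘personal video index’ idea tangible, here is a minimal sketch of such an index as a Markdown note living next to the video in DT. The UUID is a placeholder, and the `?time=` parameter on the item links reflects my understanding of DT’s timecoded references – treat the exact link syntax as an assumption:

```markdown
# Index: Interview_2021-03-14.mp4

- [00:00:12 – intro, framing of the project](x-devonthink-item://REPLACE-WITH-UUID?time=12)
- [00:04:55 – key argument on metadata standards](x-devonthink-item://REPLACE-WITH-UUID?time=295)
- [00:17:30 – chapter logic vs. marker logic](x-devonthink-item://REPLACE-WITH-UUID?time=1050)
```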

… So, this is my account of the unique leverage points DT provides, from my view of a ‘knowledge ecosystem’, and also of what / why I think it’s so convincing for the (potential) intelligent use / leveraging / digesting of video and audio (with specific ‘metadata’).

What DT can and can’t do with video-based contents

First – to understand: what do you mean here?

– is this related to text-like excerpts, or to video itself (the second of which, I think, asks a little too much of DT)?

… But then, where I think your thoughts are most helpful is in turning your description into a kind of conceptual inventory / typology of what kinds of informational extensions / augmentations exist for video (audio) in principle. This also helps to sort out what can (and can’t) be done in DT and what might be of interest, but also what 3rd-party tools – and even more, what kinds of technical protocols – are out there to really work on these, and to clarify what kinds of frictions exist, for DT and in principle.

Positively and very roughly speaking, I see in this typology:

  1. time markers (equivalent to notes; highlights; indexes)

  2. sub-clips (think chapters and sections of text; like the stuff that normally goes under a headline…)

  3. text-stream augmentations (subtitles; transcriptions; voice-over or note streams)

All these are hybrids of text-mode information and specific position(s) in the timeline of a media document.
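Put schematically – a purely illustrative sketch, with type names of my own making, not anything DT exposes – the three types couple text to the timeline like this:

```python
from dataclasses import dataclass

@dataclass
class TimeMarker:
    """1. A single point on the timeline carrying a text label (note, highlight, index entry)."""
    seconds: float
    label: str

@dataclass
class Subclip:
    """2. A span on the timeline carrying a title (chapter, section 'under a headline')."""
    start_seconds: float
    end_seconds: float
    title: str

@dataclass
class TextStreamSegment:
    """3. One piece of a continuous text stream aligned to a span (subtitle cue, transcript line)."""
    start_seconds: float
    end_seconds: float
    text: str
```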

What I see here is that DT allows for:

  1. in a certain form (via timecoded references internal to the DT system, and by picking up markers in the specific cases of QuickTime and Final Cut annotated exports, but also YT) – big caveat with that: the markers appear in a TOC (great!), but are not searchable (bummer)

  2. nope (only as an emulation via the marker system) – not really as ‘subclips’, i.e. selected parts of a video / audio that start and stop at certain moments (sometimes, as with chapters, the subclip logic really is transferred into the marker logic, allowing one to jump to the starting point of a chapter (subclip))

  3. these text-stream augments can easily be brought into DT as text – that is, once they exist (e.g. from SRT files); the problem here: I do not see a way in which they would automatically relate back to the time index of the actual AV files they belong to… (the sketch below shows one way to at least carry the timecodes along)
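As a small illustration of that gap, here is a minimal sketch (assuming standard SRT input; `VIDEO_ITEM_LINK` and the `?time=` link syntax are my assumptions, not confirmed DT behavior) that converts a subtitle file into a Markdown note whose lines keep their timecodes:

```python
import re
from pathlib import Path

# Hypothetical: the DEVONthink item link of the video the SRT belongs to.
# The ?time= parameter appended below is an assumption about DT's timecoded links.
VIDEO_ITEM_LINK = "x-devonthink-item://REPLACE-WITH-VIDEO-UUID"

# One SRT cue: an index line, a "HH:MM:SS,mmm --> HH:MM:SS,mmm" line, then the text.
CUE = re.compile(
    r"\d+\s*\n(\d{2}):(\d{2}):(\d{2}),\d{3} --> [\d:,]+\s*\n(.*?)(?:\n\n|\Z)",
    re.DOTALL,
)

def srt_to_markdown(srt_path: str) -> str:
    """Convert an SRT file into Markdown list items that keep their timecodes."""
    lines = []
    for h, m, s, text in CUE.findall(Path(srt_path).read_text(encoding="utf-8")):
        seconds = int(h) * 3600 + int(m) * 60 + int(s)
        text = " ".join(text.splitlines())  # join multi-line cues into one line
        lines.append(f"- [{h}:{m}:{s}]({VIDEO_ITEM_LINK}?time={seconds}) {text}")
    return "\n".join(lines)

if __name__ == "__main__":
    print(srt_to_markdown("Interview_2021-03-14.srt"))
```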

So, bringing this into the frame of thinking raised by @AW2307 – thinking about the possible interactions between DT, its ‘file-specific use cases’ (on-board editing capabilities), and other specialized / 3rd-party apps – the big question for me is: what kinds of exchange between DT and 3rd-party tools are possible, and what would be desirable? (e.g. what about Vimeo chapters? What about subclips made in Final Cut or other apps?) And what are the options on the ‘backend’, i.e. given the technical preconditions (formats, standards)?

As I also try to understand the actual technical preconditions in play, it might be noted that @cgrunenberg pointed out that what DT can do is related to what Apple’s AVKit provides. But then I can’t really tell what else that would make possible or prohibit in leveraging timeline-related information (metadata) in DT…

  • Then there is the obvious fact that DT loads YT time-markers and presents them in the TOC. The technical background here is not clear to me; in this regard it would be interesting to know what technical format / protocol that is based on, and whether this taps into some standard that could also be leveraged for other media contexts / platforms / apps (e.g. Vimeo).
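For what it’s worth: as far as I know, YouTube itself generates those chapters from plainly formatted timestamp lines in the video description (the first one has to be 0:00), e.g.:

```text
0:00 Intro
2:15 Why a 'knowledge ecosystem'
10:40 Timecodes and markers
```

So my guess – and it is only a guess – would be that DT picks this up from YouTube’s page / player data rather than from any general timed-metadata standard.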

Using 3rd-party apps in unison with DT for advanced analysis and consolidation of video-based contents

@AW2307: I wonder what concrete uses in DT can be made out of your comment:

What MarginNote is for @AW2307, Kyno probably is for me (especially since the once industry-leading CatDV has now been dissolved into a company SaaS). The good: Kyno produces flat DBs (via sidecar files), allowing for versatile video annotation and keywording, and also for (annotated) subclips. The downside: the only ways I see this can be exported / linked to the DT ‘ecosystem’ are a) via an Excel / Numbers listing of the subclips, b) exporting thumbnails of markers, c) exporting subclips as separate video files, or d) exporting Final Cut XML.

Now, a–c obviously do not keep references to points on the timeline of the referenced video / audio. Option d), the export of Final Cut XML, would seem to hold the promise of somehow keeping / creating a link between text-based info (markers, subclips etc.) and the actual timeline of the video file. But I do not yet see how that would work in DT, or how it might be connected / connectible to Apple’s AVKit…
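For reference, this is roughly what such timeline references look like inside an FCPXML export – a simplified, hand-trimmed sketch; real files wrap this in more structure and often use rational time values like `3600/25s`:

```xml
<asset-clip name="Interview_2021-03-14" offset="0s" duration="3600s">
    <!-- a marker: one point on the timeline plus a text label -->
    <marker start="295s" duration="1/25s" value="key argument on metadata standards"/>
    <!-- a keyword range, i.e. the subclip logic: start plus duration plus a label -->
    <keyword start="1050s" duration="120s" value="FCPXML export discussion"/>
</asset-clip>
```

So the text-to-timeline links are all there in the XML; the open question is whether anything on the DT side can read them.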

But going via Kyno, I couldn’t even produce a TOC like one can for Final Cut exports.

Then Final Cut Pro X itself is an interesting case. Related to Christian Grünenberg’s reference to AVKit: it is possible to bring subclips that are referenced from within FCPX into DT. They appear just as the YT markers do (but are likewise not searchable). Then, irritatingly, markers from within FCPX do not appear in DT. Hm.

The latter behavior rules out going via another dedicated MAM-like app for video and annotation, KeyFlow Pro. The thing is, it allows for markers (and ‘keywords’), but not for subclips or chapters. I do not own it, so I cannot test it. But it seems unhelpful for bringing timeline-related info into DT.

There are some more apps that work with annotation / markup of timelines (like ANVIL or ELAN) – and of course I’d appreciate others sharing their experiences regarding their use / integration with DT…

Also, I haven’t yet tested the whole sub-scenario of podcasts and their chapters (notes etc.) here, but I might be able to do that later. Of course, comments / chip-ins from people who work with podcasts, especially chapterized ones, are welcome…
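For anyone who wants to test this: as far as I know, podcast chapters typically live either embedded in the audio file (ID3v2 chapter frames in MP3s, chapter tracks in M4As) or as an external JSON file in the Podcasting 2.0 ‘podcast namespace’ style, roughly like this (a sketch from memory – double-check against the spec):

```json
{
  "version": "1.2.0",
  "chapters": [
    { "startTime": 0,   "title": "Intro" },
    { "startTime": 135, "title": "Why a knowledge ecosystem" },
    { "startTime": 640, "title": "Timecodes and markers" }
  ]
}
```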