First off, I have been using DEVONthink for several years and have been enjoying it greatly. I appreciate the versatility and stability of the software.
I am a researcher (life science) running a lab and use DTPO mainly for two tasks:
- topical databases for things that are central to my research. I just dump everything in there, knowing that this will be the place to look it up, be it a PDF research article, a hurriedly scribbled note of an idea I had, etc.
- I index all my research literature that I keep in my reference management software (currently in Sente and Papers, but I will finally make up my mind once Papers 2 is out).
While I used to work on two different computers (office/home) and synchronized both to a portable FireWire hard drive, I have now put everything on a 15'' MBP, which has become my main computer. This has its disadvantages, such as lugging the computer around all the time, but at least one big worry (keeping everything in sync) is off my mind. Besides, I am not sure the Sorter would take kindly to being synced between two computers.
Since upgrading to DTPO 2, I have been asking myself whether I make the best use of the software. Most notably, I keep all my files in folders in the Finder; only some items, mostly information snippets clipped from webpages, actually reside inside my databases.
One thing I am particularly unsure about is whether I should depart from the filesystem-based storage concept and make DTPO my one and only information hub. Before you point me elsewhere: yes, I did read the online help and search the forums, and I am roughly familiar with the pros and cons of either approach. Feel free to point out particularly relevant points, though. What I don't know is whether all that information is up to date, and how much of an advantage (or disadvantage) some things represent in practice. Hence the longish introduction, to give you an idea of my work.
In no particular order:
- sync and portability: Working on a single portable computer already provides some portability of my data. Does it still make sense to import everything, or would an indexed approach have its advantages, since portability is already provided by the laptop (even though the database would not be self-contained)?
- file types: If some file types are not understood by DTPO, would moving away from a Finder-based approach toward a DTPO hub create a bias against finding and using the information contained in those particular file types? Right now I am painfully aware that I need to look outside DTPO, but this feeling might vanish.
- Sente: Version 6 of Sente no longer stores PDFs in the file system but inside a cryptically named subfolder structure hidden in a UNIX package. Its note-taking abilities are awesome, though, and the comments can be exported to the file system as text files, complete with quotes, my own comments, etc. So I am rather reluctant to give up Sente for making (not maintaining) my notes about research literature, even for this technique: http://www.devon-technologies.com/scripts/userforum/viewtopic.php?f=2&t=9434.
One advantage of these annotation files is that they are searchable. Perhaps I'll find a way to share the PDFs between Sente, Papers and DTPO like in the old days.
- documents inside DTPO: What is currently (!) the best way to work with documents that reside inside DTPO? I have experimented with the template-based approach described here:
http://www.devon-technologies.com/scripts/userforum/viewtopic.php?f=2&t=9506&p=44140&hilit=template#p44140
and
http://www.devon-technologies.com/scripts/userforum/viewtopic.php?f=2&t=7595&p=35731
Does this still apply, and how do you deal with file types that you use less often or that DTPO does not understand? Either way there would be no templates to make DTPO aware of the new file. Or can this be automated?
- importing: Is importing a file and then deleting the duplicate outside DTPO after a successful import creating friction, or is it still worth it? Again, can or should this be automated?
- size: DTPO is incredibly fast and stable, but my current databases are already measured in gigabytes (though that includes their internal backups). Would I not hit a reasonable size limit if I tried to import everything? Even a product in the top league will have its limits somewhere. As a consequence, I can foresee splitting my current databases into more numerous but smaller ones. Would this entail consequences elsewhere in my workflow that I may not be aware of, such as the inability to create replicants across databases? What else?
- longevity: Generally I try to keep things simple, because every shiny gadget can break, and since we are talking about possibly decades of work ahead, even a rare error is a threat. So I am thinking about relying on simple lookup functionality (instead of, or at least in addition to, a link): the fancy search would take place in real time, and the only thing that needs to be stored is a little string of text. Moving or renaming the file that contains the target passage is then no problem. Paranoid? Lately some people lost all their notes because Skim (a very nice PDF reader) stored them in an unsafe place, so in some cases all notes, sometimes representing years of note-taking, were gone. (Yes, I know, backups and everything, but you get my point.)
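To make the lookup-string idea concrete, here is a rough Python sketch of what I have in mind; the folder layout and the restriction to plain-text files are my own assumptions, nothing DTPO-specific:

```python
import os

def find_passage(root, snippet):
    """Search plain-text files under `root` for a stored lookup string.

    Only the snippet itself is persisted; the search runs fresh each time,
    so moving or renaming files never breaks the reference.
    """
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith((".txt", ".md")):
                continue  # restrict to plain text for this sketch
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as fh:
                    if snippet in fh.read():
                        hits.append(path)
            except OSError:
                pass  # unreadable file; just skip it
    return hits
```

Nothing fancy, but that is the point: as long as the quoted string survives, any tool that can search text can find the passage again.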
Am I trying to over-engineer my work, or are some of these points relevant? How did you solve these problems yourselves? I am particularly worried that in solving one problem I am creating another (does this workflow really work, and how do I make sure nothing disappears in the cracks?). If the new problem is similar in dimension to my old one, then I ought to stick to the file system approach (keeping things simple, remember?).
Any and all comments, particularly real-life experience from similar fields, are most welcome.
Prion
PS: Sorry for the long post.