Seems like ratings might be useful, but I can’t for the life of me think of how they might be used. DevonThink isn’t great for storing music, video, or photos, and it’s not a great ebook reader–areas where we generally think of ratings being used.
DevonThink isn’t great for storing music, video, or photos, and it’s not a great ebook reader
We have people who do all these things every day.
Also, a rating is as arbitrary as flagging. You make of it whatever sense you like and impart any meaning you want.
Sure. And I’m wondering how people are using ratings.
As for flagging, I only occasionally use that but I find it very useful when I do: When I have a stack of documents to review, I flag them all and then unflag each one as I’m done processing it. Did that just this morning.
I occasionally rate support documents based on the severity of the issue they relate to.
Related to ebook area.
I read a lot and annotate a lot more. Before I knew DT/DTTG, I annotated the paper book itself, and Kindle ebooks.
Since I’m a hardcore user of DT/DTTG (I was ready to switch back to Windows but couldn’t, because I wasn’t able to find a replacement for those programs), paper-book annotations are photographed and sent to DTTG, and ebook pages are annotated, captured, and put into… well, I’ll let you guess where.
For technical reading, I use DTTG itself: I annotate the PDFs and then export the annotations from DT… I even convert some literature books to PDF to read them inside DTTG.
And with the new OCR capabilities in DTTG, I’m skipping one step: instead of annotating the paper book and then photographing it, I photograph it, OCR it, and annotate it inside DTTG.
However, I don’t use ratings. But once on my podcast I asked a female techie about a phone model, expecting she would critique it; she praised it, and all of us were very surprised, because she was completely right (and we were also right to critique the same phone’s characteristics). So I never say an option in a program is not useful; I prefer to say “I don’t use it”, even if I cannot find any way that option could be useful for anyone. Surely it is.
I hadn’t really thought to read ebooks in DT. I use Kindle, and though I do think about switching, it would be to another ebook platform. I don’t annotate ebooks.
I do make heavy use of DT as a PDF reader for work-related documents, in my job writing marketing articles for a major technology company. I love the highlighting feature.
I have tried on occasion to use DT as a read-it-later app for Web articles, but have never quite made it work. I use GoodLinks for that.
I used to rely on ratings but have moved on to labels. Labels are faster, and it’s much easier to see an article’s “rating” at a glance from the label’s colour. Colours also work well on the smaller screens of iPads with DTTG, though I would love the Mac’s option of colouring the article title rather than the small colour dots in DTTG.
I meant on DTTG. I’ve been using the modern option on the Mac, as the dot is too small to notice straight away. DTTG doesn’t have that option.
I have occasional bowel (IBD) problems, so I have to keep temporary food logs when I have flares. Logging what I ate is one thing, but getting health trends out of that log is where I use ratings on my food log entries.
This way, I can export the data I collected in DT and create graphs of how well I’m eating via Numbers, or use the Shortcuts integration in DTTG with Charty to create a widget of daily rating averages.
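As a minimal sketch of that daily-average step (the entry fields here are assumptions for illustration, not DEVONthink’s actual export format):

```python
# Compute daily rating averages from a (hypothetical) export of
# food-log entries, each with a "date" and a "rating" from 0 to 5.
from collections import defaultdict

def daily_averages(rows):
    """rows: iterable of dicts with 'date' and 'rating' keys."""
    totals = defaultdict(lambda: [0, 0])  # date -> [sum, count]
    for row in rows:
        t = totals[row["date"]]
        t[0] += int(row["rating"])
        t[1] += 1
    return {d: s / n for d, (s, n) in totals.items()}

log = [
    {"date": "2023-05-01", "rating": "4"},
    {"date": "2023-05-01", "rating": "2"},
    {"date": "2023-05-02", "rating": "5"},
]
print(daily_averages(log))  # {'2023-05-01': 3.0, '2023-05-02': 5.0}
```

The per-day averages are exactly what a charting widget like Charty would plot.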
That’s a very interesting approach, especially with the integration for charting. Nice!
I use ratings in two different ways in different databases:
Maturity of project ideas
For documents in my “interesting / project ideas” database, I have adopted a rating workflow from digital photography. The aim of the workflow is to very quickly reduce a huge amount of “incoming” data into piles that are “great”, “need work”, and “can be kept as personal memories but are not fit to show others”. The original idea was published in an ebook called “1 hour, 1000 pics”, which aimed at giving photographers a way to provide clients with a selection of the “best” pictures of a shoot very quickly.
You take your input data and, quickly and without too much thought, assign star ratings to every item. Since I’m dealing with web clippings and PDFs, I use the following meanings for n stars:
- 1 star: Personal sentimental value only, low quality
- 2 stars: Personal sentimental value only, okay quality
- 3 stars: Can perhaps be published / implemented, but needs quite a bit of work and promises medium rewards
- 4 stars: Can perhaps be published / implemented, with little additional work and probably high reward
- 5 stars: Great resource, publish / do ASAP
Once I work on an idea and/or find more background material for a topic, it gets additional stars until its implementation starts, at which point I move it to its own project folder and discard the star rating.
I have another database with historical documents, in which I simply use star ratings as an intuitive measure of how closely the document relates to my topics of interest. If it’s an essential paper that only deals with my topic matter, it gets 5 stars, down to papers tangentially interesting, which get a single star.
Nice! Reminds me of my days with Bridge and Lightroom
I’ve been using ratings as a sort of prioritization structure. As I do research, I rate each item on a weighted importance/relevance/urgency level. In my system, 5 stars means most relevant/most important, 1 the least. This helps me triage work as I research, write, and edit against a deadline. Color labels do this in a way too, but I use those slightly differently.
For a recent photo curation project, I followed a similar process in Lightroom. I had hundreds more source images than I would ever be able to fit into the final project. When I was done, I could eliminate any 1- and 2-star items, focus my attention on the 3s, then narrow further and further within the same project. Those materials are still available if I need, or am able, to go back and add more; when I did, I started from the higher end and moved down. Back in DT, I’m now doing the same thing with subtasks in research: 5-star items get my immediate attention, then the 4s, and so on as I am able.
Prioritizing sounds like a good use of ratings!
Here is a crazy one: I set up a spaced repetition system within DT using ratings. I’m too old for cramming exam materials (in fact, I’m the one who now makes exams :-), but I still like to review some information periodically, and spaced repetition seems an efficient way of testing/reminding yourself. Of course, there is Anki and a host of other systems. But I was not keen on any of them. In particular, I don’t want to just review flashcard-type documents, but also periodic reviews of other documents such as scientific articles. My information is all in DT, so what better place to do spaced repetition practice than within DT, which is synced across all my devices? Setup and maintenance on the Mac, testing and reviewing on the iPad and iPhone.
What does this have to do with ratings? I use them as a simple way to carry forward an item’s progress in the spaced repetition. I’ve set up a system with 3 groups and hourly smart rules that manage the repetition.
In short: a new item enters with rating 0 and is scheduled for a review on the next day using the DT due date mechanism. Upon review of an item, I label the item green (passed review) or red (failed review). In the former case, the rating is then upped by one, and a new due date established. The intervals between reviews are spaced out by a formula that is based on the current rating. If the review failed, the rating gets reset to zero.
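As a sketch of that update rule (the actual spacing formula isn’t given above, so the doubling of intervals here is my assumption, as are the function names):

```python
# Minimal sketch of the review step: a 0-5 rating carries progress,
# and the interval until the next review grows with the rating
# (assumed: doubling, i.e. 1, 2, 4, 8, 16, 32 days).
from datetime import date, timedelta

def interval_days(rating):
    return 2 ** rating  # assumed spacing formula

def review(rating, passed, today):
    """Return (new_rating, next_due) after a pass/fail review."""
    new_rating = min(rating + 1, 5) if passed else 0
    return new_rating, today + timedelta(days=interval_days(new_rating))

r, due = review(rating=2, passed=True, today=date(2023, 5, 1))
print(r, due)  # 3 2023-05-09
```

A new item with rating 0 falls out of the same formula: its first review lands on the next day.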
This works quite well and creates a flexible SRP system within DT. Any item in the database can be replicated into this system for study and removed if no longer needed. The “flashcard” can be any document.
The only thing that’s a bit cumbersome is assigning the labels on iOS. Obviously, it would be much nicer to just have a correct/wrong pair of buttons on the screen instead of having to dive into the Info menu.
Another drawback: the rating scale only allows six levels of spacing (0 to 5 stars). It would be easy to move the encoding of the progress into a custom metadata field, which would also allow recording the review history of each item. The rating stars could then be used for a more differentiated evaluation, as Anki et al. allow, to fine-tune the spacing to the difficulty of the card at hand. Instead of judging cards pass/fail with labels, wrong would then be zero stars, 1 star “very hard”, and 5 stars “very easy”.
Very interesting use of ratings! Thanks for sharing.