How do you use new DT3 features?

No, because:

  1. you can have words that are similar in form but different in meaning;
  2. in both cases you cannot group these words together and get an accurate picture of how prevalent a given theme is relative to others (at least as far as the Concordance panel goes).

The Concordance doesn’t show you contextual relationships or themes. That’s up to the interpretation of the individual.

Smart rules and placeholders? A marriage made in heaven! :grinning:

I’m a new user (as from the launch of DT3) so this is probably not the most sophisticated of examples but, for what it’s worth, here is a smart rule incorporating a placeholder:

DT3

Stephen

6 Likes

Yes, indeed. That is why I was thinking about what other ways it could perhaps help the individual in this at times arduous task :wink:

Nice example!

this is probably not the most sophisticated of examples

sophistication is fortunately NOT a requirement. :slight_smile:

If a script or smart rule works as expected and helps you in your daily use of DEVONthink, it’s a success, no matter how unsophisticated or complex.

1 Like

Disclaimer: my comment/discussion may be incredibly naive due to my lack of knowledge of concordance, the typical applications of word clouds/concept schemata, and qualitative methods such as discourse analysis.

Serious utilisation of word <-> concept <-> schema mapping probably falls more into the scope of more specialised software such as NVivo. NVivo is more or less the de facto standard for qualitative and mixed-methods researchers (I think?), but it is extremely “slow” and requires laborious initial set-up and ongoing fine-tuning to make the connections work (I tried testing that app a few years ago). I guess there is no easy way when it comes to connecting words and concepts.

The core mechanism of concordance may be essential for some of DT’s core functions (classification?), but I rarely see DT’s Concordance discussed in any forum/blog. Very personal opinion: for the Concordance to reach the point of being approachable (easy to use, with lots of examples of practical application) and usable (flexible in customisation), a significant amount of further development, and reference to the design of other niche apps, may be required. The problem is, perhaps only very few DT users will ever need that sort of ability “directly”. IMHO, concordance may have begun as one of the core competences in DT’s blueprint, but now it seems more like a mechanism working hard behind the scenes. Just to be clear, DT is an amazing, incredible, and unique app in my eyes! Smart rules and smart groups already deliver everything I need!

2 Likes

Thank you for sharing this! I created not one, but two smart rules based on this.

Speaking of which, @BLUEFROG there appears to be a small error in date detection in DT3. Dates outside the US are usually written DD/MM/YY, not MM/DD/YY. When the day is < 12 it is understood correctly, but when it is > 12 the day is misidentified as the month.

I agree with you and yes, NVivo, MAXQDA, and Atlas.ti are all software packages for quantitative and qualitative data analysis. My suggestion would bring DT3 closer to them, but I can see why this would be difficult: these are all big companies that charge thousands of dollars for their software.

As a matter of fact, I have a temporary student licence for MAXQDA and while I do find it useful from time to time, DT3 is much closer to what I really need. Most of the time I use it in a similar way to what a Windows program called ConnectedText does very well; perhaps you’ve heard of it. Still, DT3 is obviously richer in features and more open-ended than ConnectedText, and from time to time I do take advantage of that.

3 Likes

Thanks for the report! I know there is some investigation into this, perhaps needing to use a value with the Locale.
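For illustration only (this is not DT3’s actual parser), a minimal Python sketch shows why the bug is only visible for days greater than 12: when the day is ≤ 12 both readings are valid dates, so a month-first parser silently picks the wrong one, while a day > 12 makes the month-first reading fail outright.

```python
from datetime import datetime

def parse_date(text, day_first=False):
    """Parse a numeric date as DD/MM/YY or MM/DD/YY; None if invalid."""
    fmt = "%d/%m/%y" if day_first else "%m/%d/%y"
    try:
        return datetime.strptime(text, fmt)
    except ValueError:
        return None

# Day <= 12: both readings are valid, so a month-first parser
# silently misreads non-US input.
print(parse_date("05/03/21"))                  # May 3, 2021 (month-first)
print(parse_date("05/03/21", day_first=True))  # March 5, 2021 (day-first)

# Day > 12: the month-first reading fails outright.
print(parse_date("25/03/21"))                  # None: 25 is not a valid month
```

This is also why a locale setting (as mentioned above) is the usual fix: the string alone cannot disambiguate dates with a day of 12 or less.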

I read some/most of the threads about concordance in this forum. I feel that I was totally wrong to compare NVivo/MAXQDA with the Concordance. NVivo/MAXQDA are specialised apps that treat each individual piece of text (word, phrase, sentence, paragraph) as the basic unit of analysis, and that’s why they are slow; the Concordance operates at the document level.

My interpretation: the salient goal of the Concordance is to identify the word usage of similar clusters of documents in DT. Very broadly speaking, the most crucial element in the Concordance is the differential weighting of words in the word cloud, not the frequency rank. A cluster of similar documents is differentiated by how different its top-ranked words are compared to ALL other clusters. This means that the primary/sole purpose of the Concordance is implementing “classify and see also”, and its efficacy is limited by:
(1) The quality of the text/OCRed files. If many PDF files have bad-quality OCR (words stuck together, incomplete recognition, random mixtures of symbols and words in RTF and mark-up files, etc.), the differential weights of the top-ranked words will be meaningless.

(2) The quality and characteristics of the existing groups set up by users. If users can (i) put “really” similar documents in each group, (ii) make the topics/subjects of the groups significantly different from each other, and (iii) exclude many irrelevant groups from classification, then the Concordance will be able to learn and identify the unique pattern in each group and do a good job.

This also means that:
(1) We can’t expect DT/the Concordance to do anything close to the functionality of NVivo/MAXQDA, other than “wishing” DT may have the resources to extend its core competence by improving the functionality of the word cloud. The current form and properties of the word cloud in DT are already an efficient and sufficient means for “classify and see also”. EDITED: It may mean that (i) the concordance becomes a back-end engine and the word cloud/analysis is separated into a function that allows a higher level of user customisation, or (ii) the concordance becomes customisable and allows users to have their own ways of “classify and see also”.

(2) That’s also why it is much more challenging to perform meaningful auto-grouping: how accurately could DT cluster a bunch of new items into new groups without a pre-existing reference of uniqueness? Perhaps all DT can do is group the obvious items into existing groups, compare the rest with a plain-vanilla frequency-ranked word cloud, and try its best to cluster them. If I were a very demanding developer and knew the limitations of the current methodology, I probably wouldn’t be happy with this incomplete solution and would rather not include the function in DT3 (pure speculation + imagination). Kind of like what a good chef would do. EDITED: otherwise, if users just let the auto-created/assigned groups stay there, there will be negative learning because those groups will become increasingly full of “garbage”.

(3) Smart rules/groups + tagging are better answers for targeted grouping, according to the design philosophy of DT (another pure speculation + imagination).
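For what it’s worth, the “differential weighting” described above is similar in spirit to classic TF-IDF weighting. DT doesn’t document its actual algorithm, so this is just a minimal sketch of the general idea: a word scores high in a document (or group) when it is frequent there but rare elsewhere.

```python
import math
from collections import Counter

def tf_idf(docs):
    """Weight each word per document: frequent here, rare elsewhere => high."""
    n = len(docs)
    tokenised = [doc.lower().split() for doc in docs]
    df = Counter()                      # document frequency of each word
    for words in tokenised:
        df.update(set(words))
    weights = []
    for words in tokenised:
        tf = Counter(words)
        weights.append({w: (c / len(words)) * math.log(n / df[w])
                        for w, c in tf.items()})
    return weights

docs = ["invoice payment due invoice",
        "meeting notes agenda",
        "invoice overdue payment reminder"]
weights = tf_idf(docs)
# "due" appears only in the first document, so it outweighs the
# shared word "payment" there, even though both occur once.
```

The same logic explains the OCR caveat in (1): garbage tokens from bad OCR are each “rare across the corpus”, so they get inflated weights and drown out the genuinely distinctive words.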

Just my 5 cents.

2 Likes

When creating smart rules, is there a way to simply use a field not being empty as a condition?
E.g. If “authors” field is not empty, then do…

I have tried in different ways but nothing did the trick.

I figured it out. It was obvious.

If “authors” matches [A-Za-z] then…
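The trick works because an unanchored regex match succeeds if the pattern appears anywhere in the value, so `[A-Za-z]` effectively tests “contains at least one letter”. A hedged Python analogue of the same condition (not DT3’s internal implementation):

```python
import re

def field_has_author(value):
    """True when the field contains at least one basic Latin letter,
    i.e. it is neither empty nor just whitespace/punctuation."""
    return re.search(r"[A-Za-z]", value or "") is not None

print(field_has_author("Jane Doe"))  # True
print(field_has_author(""))          # False
print(field_has_author("   "))       # False
```

One caveat: a value written entirely in a non-Latin script (e.g. “Лев Толстой”) contains no `A–Z` characters and would not match, so a broader pattern may be needed if your metadata isn’t all in Latin script.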

@Stephen_C I have not been following the DT3 discussions, but your example alone justifies the cost of the upgrade!

@paulwaldo thanks for the kind comment.

Stephen

Wouldn’t the “file name contains download” condition lead to unnecessary hits? I’m not sure what including that adds to the functionality of the rule, or am I trippin’?

For me the rule works perfectly—because of the combination of the content match (which I have obviously obscured in my screen shot) and the file name. Effectively that combination is unique.

Stephen

Hey, that’s the part that actually matters. If it ain’t broke, don’t fix it, right? I was just curious based on what I saw in there, and wasn’t sure whether you had tried it without that part included or whether it would change the results at all.

Then again, that’s probably why I’m not much good when it comes to that kind of stuff.


My workflow is a slight variant. I use Hazel as a first pass over all files in my Downloads folder in the Finder. It inspects the content and/or name of all PDF files and renames each file to something unique before moving it into my DT inbox. There, a smart rule matches on the file name and moves the file to the correct destination. I like this workflow because it has significantly reduced the amount of time required to clear my Downloads folder.

A screenshot of your Hazel rule would be nice - I’m wondering right now whether a smart rule could handle all steps.

That, in fact, is exactly what I’m doing. I attach a screen shot of the Hazel rule. The reason I tend to rely on Hazel for automating renaming and moving downloads is that I find it’s rather more accurate than DT3 in identifying the relevant date in the pdf which I need to use to rename the file. The renaming options are (I believe) rather more versatile than those in DT3 (although that may just be a beginner’s lack of knowledge of DT3!).

Stephen

1 Like