Both databases are new, but they were constructed from documents created over many years, some of them in V2.
Over the years I’ve reconfigured the original databases a couple of times (i.e. moved their contents to new databases or merged them), and I’ve experimented with various grouping and tagging strategies, probably not always consistently.
I’m trying to bring some order to the chaos, but the chaos itself is almost certainly my fault, not that of either V2 or V3.
Taking @pete31’s helpful comment into account, I suspect what’s happened is a combination of:
- having a lot of tags which ‘replicate’ (in the ordinary sense of the word; they’re not actual ‘replicants’) groups, some of which are themselves ‘group tags’;
- not realising (until after I’d posted) that you no longer risk losing ungrouped documents when the last tag is removed.

Some of these ‘redundant’ tags contain files which aren’t in the corresponding group, so I’ve been dragging them all across, thinking that, as you can’t have two replicants of the same item in one group, this would just move the ‘extra’ documents across and ignore the others.
Clearly, there are some incorrect assumptions in there…
I’m not sure how many groups this affects, but there’s at least one with 526 documents, many of which are ‘additional replicants’.
So, rather than sifting through the group and highlighting each additional replicant by hand, I wondered whether there was a way of identifying them and deleting them all at once from that group (leaving the genuine replicants in other groups in place). As I said in the previous post, they don’t show up as duplicates.
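In case a script route turns out to be easier than the menu command, something along these lines might do it. This is only a sketch and makes assumptions about DEVONthink 3’s AppleScript dictionary — that records expose a `parents` property and that `delete record … in …` removes just the instance in that group — so I’d test it on a copy of the database first.

```applescript
-- Sketch only: remove, from the selected group, every child that is
-- replicated elsewhere (i.e. has more than one parent), leaving the
-- instances in other groups untouched.
tell application id "DNtp"
	-- assumes the target group is the current selection
	set theGroup to first item of (selected records)
	repeat with theChild in (children of theGroup)
		if (count of parents of theChild) > 1 then
			-- should delete only this group's instance, not the other replicants
			delete record theChild in theGroup
		end if
	end repeat
end tell
```

One caveat: this would strip *every* replicated item out of that group, so it only fits if none of the replicants are meant to stay there.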
I had a thought while typing the above and went away to test something:
It looks as though selecting the entire contents of the group and running the script `Data > Group with Replicants` moves the additional replicants into a subgroup called ‘Grouped Replicants’. Deleting that subgroup reduces each document’s replicant count by one, but doesn’t affect any replicants elsewhere in the database.
So, this looks like it will work, but before I try it on a ‘real’ database rather than the testing one, can you see any problems with this approach?