Thank you very much, Bluefrog from DT, and also the real users - speaking from the user side, if I may say so.
I’ll have a thorough look into the linked material (handbook pdf, etc).
What I am looking for, quite desperately - some users have identified this perfectly indeed - is something like my current SQLite front-end, just with a much more powerful database as back-end. The user who doubts that front-ends like the one I currently use are used for “production work” is right: they all become unwieldy beyond a certain number of (formatted-text) records. They are not meant for number crunching or for standardized SQL data; they just use SQLite as their back-end. But - and that’s why they are used by me and many other users - they provide “outlining”: within their SQLite tables they store the necessary data, most of the time in the form recordID - title - links to other tables for various content (a content field, a blob here; OPML export is possible, which is then a not very standardized, but human-readable and thus scriptable, XML format; then perhaps a “comment” field and perhaps some other fields, not many), plus an indexed field (or “column”, as it is called in SQL) holding the “recordID of the immediate parent” - with possibly multiple entries IF the front-end allows for cloning, i.e. items (which may be parent items to whole sub-trees, too) appearing in more than one position of the global “tree” (so it is not really a tree then, but a sort of “graph” in simili-tree form).
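To make the above concrete, here is a minimal sketch of the kind of schema such an outliner front-end might keep in SQLite. All table and column names are hypothetical illustrations of the described structure, not any specific tool’s actual schema; note the separate link table, which is what makes “cloning” (multiple parents per item) possible:

```python
import sqlite3

# Hypothetical sketch of an outliner's SQLite schema: items plus a
# parent-link table. One item may appear under several parents
# ("cloning"), so the structure is a graph displayed in tree form.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE items (
    recordID INTEGER PRIMARY KEY,
    title    TEXT,
    content  BLOB,   -- formatted text stored as a blob
    comment  TEXT
);
CREATE TABLE parents (
    recordID INTEGER REFERENCES items(recordID),
    parentID INTEGER REFERENCES items(recordID)
);
CREATE INDEX idx_parent ON parents(parentID);
""")

# A tiny outline: a root with one child.
conn.execute("INSERT INTO items (recordID, title) VALUES (1, 'Root')")
conn.execute("INSERT INTO items (recordID, title) VALUES (2, 'Child')")
conn.execute("INSERT INTO parents VALUES (2, 1)")

# "Who are the children of item 1?" - the only question the indexed
# parentID column can answer directly.
rows = conn.execute(
    "SELECT i.title FROM items i JOIN parents p ON i.recordID = p.recordID "
    "WHERE p.parentID = 1").fetchall()
print(rows)
```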
From the above, it appears that the “tree”-building (i.e. what you then see as the “outline”, in a special pane) is done in a rather primitive way, with extreme recursion to gather it all, again and again: since there is no stored information of the form “this item has got the following child items”, all of this has to be rebuilt over and over, at run-time. (There are other ways of “tree-building” in SQL, but they are not implemented in these, regularly 1-developer, tools.) So from a conceptual / developer point of view, it is assumed that the regular user of these tools will NOT grow their material, i.e. their item count, beyond certain “limits”. These limits are not “hard”, but beyond a certain number of recursions needed for “finding” the child items from the sole (if indexed) “parentID” info, it simply becomes totally unrealistic. So it is much more the “design” that limits the use of these tools than any fact of SQLite not being a “serious” SQL DB, as (free and extremely powerful, including, just like SQLite, built-in text search) Postgres would be (MySQL is beyond reach, financially, and not needed either; same for Microsoft).
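One of those “other ways” worth noting: recursive common table expressions, which both Postgres and SQLite itself (since version 3.8.3) support, fetch a whole sub-tree in a single query instead of issuing one “children of X?” query per level. A minimal sketch, again with hypothetical table and column names (here parentID is kept directly on the item, i.e. no cloning):

```python
import sqlite3

# Sketch: one recursive CTE fetches an entire sub-tree at once,
# instead of repeated per-level child lookups. Schema is hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE items (recordID INTEGER PRIMARY KEY,
                    parentID INTEGER,
                    title    TEXT);
CREATE INDEX idx_parent ON items(parentID);
INSERT INTO items VALUES (1, NULL, 'Root'),
                         (2, 1,    'A'),
                         (3, 1,    'B'),
                         (4, 2,    'A.1');
""")

subtree = conn.execute("""
WITH RECURSIVE tree(recordID, title, depth) AS (
    SELECT recordID, title, 0 FROM items WHERE recordID = 1
    UNION ALL
    SELECT i.recordID, i.title, t.depth + 1
    FROM items i JOIN tree t ON i.parentID = t.recordID
)
SELECT title, depth FROM tree ORDER BY depth, title
""").fetchall()
print(subtree)
```

The recursion still happens, but inside the database engine, in one round trip, which is a very different thing from the front-end re-querying for every single node.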
This being said, Postgres could of course store millions of items, and I would assume that somebody who uses it as the back-end of their “outlining” front-end would know that they have to implement the “tree-building” in a more sophisticated way (as said, those ways are known and can thus be implemented, by a highly qualified developer), so as not to slow down access to the documents by inappropriate tree-building design. Better access implementation would come “automatically” there, with the powerful DB, whilst for “consumer” stuff like I currently use, the developers assume it is “good enough as it is”, access to documents in a higher 4-digit, or a not-so-big 5-digit, range being “fast enough”. Selling a Postgres DB with the same, “sub-standard” access would be ridiculous, since user expectations then are much higher.
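One example of such a “more sophisticated way” (one option among several; the names below are my own illustration, not any product’s design) is a closure table: every ancestor/descendant pair is stored explicitly, so fetching a whole sub-tree becomes a single indexed lookup with no recursion at read time - the cost is paid once, at insert time:

```python
import sqlite3

# Sketch of a "closure table": all ancestor/descendant pairs are stored,
# so a whole sub-tree is one indexed SELECT. Inserting an item copies
# its parent's ancestor rows. All names here are hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE items (recordID INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE closure (ancestor INTEGER, descendant INTEGER, depth INTEGER);
CREATE INDEX idx_anc ON closure(ancestor);
""")

def insert_item(record_id, title, parent_id=None):
    conn.execute("INSERT INTO items VALUES (?, ?)", (record_id, title))
    # Every item is its own ancestor at depth 0.
    conn.execute("INSERT INTO closure VALUES (?, ?, 0)",
                 (record_id, record_id))
    if parent_id is not None:
        # Inherit all of the parent's ancestors, one level deeper.
        conn.execute("""
            INSERT INTO closure
            SELECT ancestor, ?, depth + 1 FROM closure WHERE descendant = ?
        """, (record_id, parent_id))

insert_item(1, "Root")
insert_item(2, "A", parent_id=1)
insert_item(3, "A.1", parent_id=2)

# Whole sub-tree of Root: one indexed SELECT, no recursion at all.
subtree = conn.execute("""
    SELECT i.title, c.depth FROM closure c
    JOIN items i ON i.recordID = c.descendant
    WHERE c.ancestor = 1 ORDER BY c.depth
""").fetchall()
print(subtree)
```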
When I speak about “access”, I always mean the “tree-building” or sub-tree-building, i.e. access to complete “compounds” of data, not so much single (or just some) records. Also, moves of whole sub-trees in such a “consumer” tool may become slow, or even not entirely reliable, depending on the tool. It is not that they are “good for nothing”, though: some of them remain stable, without data loss, but with wait times of up to several minutes, due to the above-mentioned poor “design”.
The thing with a trial is: you would really need all of your stuff imported, then trial, in order to get a realistic perception of how your data will then really be handled; with sample data that does not even represent 5 p.c. of your total data set, no such realistic appreciation is possible. In Europe, you can send back bought (mail-order) hardware within 14 days, but that would become quite an incredible race against time then: one would need to have prepared all the necessary import scripts (i.e. here: the script for adjusting the export OPML to the import OPML, so both formats would need to be known beforehand) before delivery of the hardware.
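Such an OPML adjustment script can indeed be quite small, since OPML keeps the outline in nested `outline` elements and tools differ mainly in which attributes they use. A hedged sketch with Python’s standard library; the attribute mapping below (`_note` to `note`) is a made-up example, not any real tool’s actual export or import format:

```python
import xml.etree.ElementTree as ET

# Sketch: rewrite an exported OPML so another tool can import it.
# The "_note" -> "note" attribute rename is a purely hypothetical
# example of the kind of per-tool adjustment that is usually needed.
SOURCE_OPML = """<opml version="2.0">
  <head><title>Export</title></head>
  <body>
    <outline text="Root" _note="a comment">
      <outline text="Child" _note="another comment"/>
    </outline>
  </body>
</opml>"""

tree = ET.ElementTree(ET.fromstring(SOURCE_OPML))
for outline in tree.iter("outline"):
    if "_note" in outline.attrib:
        outline.set("note", outline.attrib.pop("_note"))

adjusted = ET.tostring(tree.getroot(), encoding="unicode")
print(adjusted)
```

The nesting itself, i.e. the tree structure, survives untouched; only the attribute vocabulary is translated, which is exactly why the OPML route is so much more tractable than blob-level fiddling.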
So this seems to be a way indeed: XML adjustment is absolutely doable, and this is WAY more realistic than fiddling with blobs, between Windows and Mac.
I currently do not know of any Windows office tool or the like in, let’s say, the 1,000€ range instead of the 100€ range - all the aforementioned consumer tools cost 100€ or less, so they are incredibly cheap, but as explained above, you get what you pay for, and you then have to divide your stuff into multiple databases, which is really very “unprofessional” then.
There might be some “groupware”, on a rental basis, but I’m clearly not willing to pay probably 80€ per month, or more, for “1 seat” - and yet they would probably want to force me to rent at least 2 or 3 seats, since it’s groupware after all, and they don’t address the “personal” market…
I’ll try to better understand DT’s architecture from the links you kindly gave me; I’m not really “fixed” on what to do in my situation. I have to say I would be much more confident in my W>Mac voyage if DT used some standard, powerful SQL DB (Postgres being the only candidate here indeed; you couldn’t sell the same with MySQL or MS, both would cost much too much) where it would be stated: “7-digit number of documents, no problem”. On the other hand, I know that SQL, for such work situations, is considered far from “ideal” nowadays to begin with, so, objectively, DT using its own, proprietary (as I understand it) RTF - not XML - document DB is NOT to be considered a downside. It’s just that psychologically, going PC>Mac and then not even preserving the SQL format (which I know and can handle, for queries, updates, etc.), I might be in for quite unpleasant surprises - “this is not possible, that isn’t either” - where with SQL it would be, from my particular previous “experience”.
And I “got” the hint, from the above, that with 32 GB of memory, I shouldn’t run much else concurrently, or would need even more - all that at Apple prices, since they don’t allow you to buy your memory from other sellers, and even upgrading the memory might be impossible: they are notorious for locking the user into any given hardware, bought “as is”. So buying a Mac with “just” 32 GB would be another real risk for me: if afterwards I see I need more, I’m stuck again…
I’m not so fond of the Mac; I just hear brilliant, very tempting news about DT…
Thank you very much for your very kind help, in the meantime!