Looking for suggestions to help my project work-flow.
How can I “connect” info across databases?
Simple example is that I have one database with “Base” material which I create, and a second “Project database” for ongoing work. Many times I want to use a document from the “Base material” database in a project located in a “Project database”.
Assuming no changes to the document, how could I have some type of entry in the Project database that connects to the document in the “Base material” database? Do I need to create a text file that includes a DEVONthink item link to the document in the “Base material” database?
I appreciate any and all suggestions for how others accomplish something like this.
Thanks for the suggestion DTLow - I have considered that.
Typically I have five or six current client projects and scores of closed client projects. I’m trying to have the closed projects archived in yet another DB so they don’t clutter up my navigation, searches, etc.
Generally, the docs in the Base materials are static but sometimes I may revise them independently or as a result of project work. For now, working with static docs will be a big help.
> Would you be making notes about the document that only pertain to the project you’re working on?
I’m not.
I want to be able to quickly see all materials I am using and/or providing on a project. In my mind, it would be like having replicants from another DB in the project DB (but I know DT doesn’t do that). The only way I have found is to create one RTF doc in the Project DB that contains DT item links to the docs in the Base DB. That kind of gets the job done, but then I need to open the RTF to see the list of docs and then follow a link to find my way to the doc in the Base DB …
DT is so rich, I assume there is an approach to accomplish this but I haven’t found it yet.
As another suggestion, I guess you could also copy the item links and paste them into the project database as bookmarks. This might come pretty close to having replicants, if that’s what you’re looking for.
It’s possible to point to the same file across different databases if the file is indexed.
It won’t have the same UUID in each database, but changes made to the file’s content through one record will be reflected in every other record that points to the same file, across databases.
But if they’re static files, indexing them just to have their content available in another database is probably not what you’re looking for.
Thanks all for the suggestions. My takeaway: as I suspected, DT does not do what I was trying to accomplish, and probably with good reason. The option that gets me closest is just keeping copies of the Base files in the Project DB.
You’re welcome.
And given your description, I personally believe this is the optimal solution. Thirty-two years in graphic arts & printing taught me to create discrete structures that encapsulated an entire project, regardless of whether there was duplicate information. I could hand off any folder, CD, or hard drive knowing everything needed to reproduce that product was there.
I’m worried. Assuming you went to school before that, and adding the apparent lifetime you have worked for DT, that scribbles out to 16 + 32 + lifetime = ancient! I hope you’re not considering bunking off into retirement any time soon?
Haha!
I started in graphic arts professionally at 15 (yes, 15) and have no formal schooling in it. I’m a quick study and mostly self-taught. Also, I’ve been with DEVONtech 9 years this July, but my time here overlapped with my previous job by several years.
Indeed, I am. I had one semester of art school and got kicked out of a 2-year graphic arts program after 1 year (partly because I had learned the two years’ worth of knowledge in the first semester, got a job running a printing press, and ended up teaching the other kids). The tech stuff I taught myself to become more efficient in the printing companies and service bureaus I worked at. (I was sometimes running three or four Macs simultaneously, i.e., multiple print jobs - usually catalogs - at once.)
I am a rather new user of DT3 and I am trying to learn how to use it most efficiently for my purpose. I am a historical researcher and I have several hundred GBs of archive and reference documents that I am using in my research. Due to the large amount of data, I have decided to split the static reference documents into several databases based on topic and archival provenance.
Basically I would have liked to replicate documents across these databases and my project and research databases; however, I understand this is impossible. You advise users to duplicate the files into the project or research databases, which of course is a sound and valid solution.
However, I have found that I can create a pseudo-replicant by adding a reference document’s item link as a bookmark in the project database. This seems to work fine, but are there hidden problems with this approach that are going to hit me later down the line?
I have not tried putting it all in one database. The first batch of data that I am looking at putting into DT3 takes up 168 GB of disk space for a total of 130,611 files. I have no idea how many words and unique words there are in those files, so I was worried that this amount of data would exceed the DT3 recommended limits of 200,000,000 words and 4,000,000 unique words per database. I have no experience in judging this.
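If it helps to get a rough feel for those numbers before importing, here is a minimal Python sketch that samples files in a folder, counts words in the sample, and extrapolates by file count. It assumes plain-text files; PDFs, images, and other binary formats would need text extraction (or OCR) first, so treat the result as a ballpark only.

```python
import random
from pathlib import Path

def estimate_word_counts(root, sample_size=200, seed=42):
    """Roughly estimate total and unique word counts for a folder
    of plain-text files by sampling and extrapolating by file count."""
    files = [p for p in Path(root).rglob("*") if p.is_file()]
    random.seed(seed)
    sample = random.sample(files, min(sample_size, len(files)))

    total_words = 0
    unique_words = set()
    for path in sample:
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # skip unreadable files
        words = text.lower().split()
        total_words += len(words)
        unique_words.update(words)

    # Total words scale roughly linearly with file count; unique words
    # grow sublinearly, so the unique count here is only a lower bound
    # for the sample, not a reliable projection for the whole set.
    scale = len(files) / max(len(sample), 1)
    return int(total_words * scale), len(unique_words)
```

Comparing the first number against the suggested 200,000,000-word limit gives at least an order-of-magnitude sanity check before committing 168 GB to one database.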
Well, I guess you are not reporting anything bad happening so far!
You can look in File → Database Properties to get an idea of how many words you are dealing with now.
I don’t know how “hard-core” the limits you quote are. The experts can weigh in …
Given the linking/replicants that you seek, I still think you are better off working in one database. If you are concerned about the risk you perceive, maybe some of your files are not candidates for linking/replicants and can reside in another location/database until you find a need for them.
In the meantime, search this forum for places where this question of “big databases” has been discussed already, e.g.
I am no expert, but I can tell you my main database remains quite performant at way more than those limits. I have found that the number of tags is the main factor that can slow down my system; most of the tags are created by automation or imports of some sort - I really should put an end to that so I don’t have to delete them manually.
I suspect but cannot confirm that the amount of RAM you have installed (or unified memory in Apple silicon) influences the size database you can manage.