I am looking for a way to import data into my custom metadata fields from a CSV or other transfer file, perhaps even automated in the long run.
I am keeping data in sync between an Excel file and my DEVONthink workspace and I’d like to automate this somehow.
I have an Excel file where I track all my accounting transactions, with payment dates, categories, and other information. Each individual transfer gets a specific ID code in Excel. In DEVONthink I have created a custom metadata field for this code.
My idea is: when I enter this ID code in DT, I want a script to look up the code in an external file exported from Excel and fill the other metadata fields with the corresponding information from that file.
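The lookup step itself is simple. Here is a minimal Python sketch, assuming the Excel export is a CSV with a header row; the column names (`ID`, `Date`, `Category`, `Amount`) and the file name `transfers.csv` are made up for illustration, and writing the result into DEVONthink's custom metadata would still need a separate scripting step (e.g. AppleScript/JXA):

```python
import csv

def lookup_transfer(csv_path, transfer_id, id_column="ID"):
    """Return the first CSV row whose id_column matches transfer_id, or None."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row.get(id_column) == transfer_id:
                return row
    return None

# Demo with a small CSV standing in for the real Excel export.
with open("transfers.csv", "w", newline="", encoding="utf-8") as f:
    f.write("ID,Date,Category,Amount\n")
    f.write("TX-0042,2024-03-01,Rent,950.00\n")

row = lookup_transfer("transfers.csv", "TX-0042")
# Everything except the ID would go into the record's metadata fields.
fields = {key: value for key, value in row.items() if key != "ID"}
print(fields)
```

Reading the whole CSV on every lookup is fine for a few thousand rows; for larger exports you could load it once into a dict keyed by ID.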
Is there something like this already available, or would it be necessary to write a script from scratch? Can anyone help me with this task?
Many thanks in advance.
I can’t remember any such scripts, but of course I can’t remember everything buried in this forum. To automate this process it’s probably better to import the CSV file into DEVONthink and use the imported file instead of the original Excel file.
However, my first question would be what’s the goal of this workflow? Initially it sounds like unnecessary redundancy.
what’s the goal of this workflow? Initially it sounds like unnecessary redundancy.
Funny, this is what my wife always says about new workflows, including when I first brought DT into our house.
I don’t like going to different places for different information when I can have one place for everything. In DT I can keep the documents, but not in Excel. So I’d like to find a way to merge the information from those two places into one place (DEVONthink). Since Excel only serves as an overview of all my accounting, and I get almost all information from outside sources (banking, transfer details, etc.) into Excel more or less automatically, I’d like to keep using it that way: as a transfer station to gather everything and give me a nice overview, while pushing that information on into DT and into the metadata of the actual documents. This way, I have everything in one place.
For some people, this will probably sound like “unnecessary redundancy”.
This may not be the situation for the original poster, but I can describe a similar situation I am currently working through.
I have a very large SQL database with about 1.5 TB of data which has been the mainstay of my practice for over a decade. I plan to move it all to DT3 - partly because of better features in DT3 and partly to avoid the cost and effort of operating a dedicated server. The SQL software vendor offers the option to export the files along with XML files describing the folder structure of the files as well as the associated metadata.
Either way it will certainly require a custom script - but I don’t think that keeping the metadata from the prior database is “unnecessary.”
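The vendor’s XML format isn’t specified here, but the general shape of such a script is the same regardless: walk the XML descriptors and build a map from each exported file path to its metadata. A hedged Python sketch, using an entirely hypothetical XML layout:

```python
import xml.etree.ElementTree as ET

# Hypothetical layout -- the real vendor export will differ, but the
# parsing pattern (iterate file entries, collect their metadata) is the same.
sample = """\
<export>
  <file path="cases/2013/smith.pdf">
    <meta name="matter">Smith v. Jones</meta>
    <meta name="opened">2013-05-14</meta>
  </file>
</export>"""

def parse_export(xml_text):
    """Map each exported file path to a dict of its metadata entries."""
    result = {}
    for file_el in ET.fromstring(xml_text).iter("file"):
        result[file_el.get("path")] = {
            m.get("name"): m.text for m in file_el.iter("meta")
        }
    return result

print(parse_export(sample))
```

The resulting dict could then drive the import: recreate the folder structure, add each file, and set its custom metadata fields in DT3.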
Does the 1.5 TB of data include binary data (like files), or is it more or less text-only? In the latter case it’s unlikely that this will work unless the data is split across multiple databases and not all databases are opened concurrently. Otherwise the search and AI index would require far too much RAM, exceeding the capabilities of any Mac out there.
Almost all of the data are files.
And I do plan to divide it among about 2 dozen databases, with only a small number of those ever opened concurrently.
Only (plain) text files or any kind of files?
About 70% PDF files, 15% Word files, and 15% audio files
Any ideas on how I could start with this?
Hold the Option key and choose Help > Report bug to start a support ticket. Include any real world data, screencaps, etc. that may be useful.