Does using the File > Export > Database Archive command make a zip file that contains the actual content of linked/indexed files? It seems to! This looks like a very cool way to turn indexed databases into completely portable ones with one command.
Am I wrong in this assumption? It seems to be working.
Is there a command, script, Automator action, anything, that would accomplish this? I have a few databases, all of which include indexed files, which is how I want them set up. But I'd like to take one of them, for instance, and make it fully portable. Is there a way to accomplish this without changing the database itself, by making a .zip copy of the database that is fully portable and can be moved to my work machine?
Would you be satisfied if that copy was never synced back with the original? I could imagine getting easily confused if exported indexed files in the copy were modified and those changes were expected to be returned to the originals.
Yes, easily! I'd like to take a database that has a lot of indexed content and somehow create a fully portable version. I do not want to change the import vs. index scheme of my original database, and I never need to synchronize anything back. My desire is to create a fully self-contained backup copy, or a quick reference I can copy to portable devices and take with me.
I understand the logistical nightmare of taking an indexed database, making it portable and self-contained, and then somehow trying to sync it back to the original. I don't need that, only the first step: indexed to fully portable.
Is there anything that will do this? A script or Automator action, I'm hoping?
Indexes are aliases. They have to connect to the same folder structure on the machine or server you move the archive to as on the original machine. It's possible to index folders/files on iDisk or some other WebDAV or server target. If that's a possibility for you, then the archived database should work on another machine. (I tested this; it works.)
Of course, both machines will need connectivity to that server for the database to work. I think any solution to your requirement is a workaround at best, and not a native feature of DT.
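To illustrate the point that an index is just a pointer to a path, here is a minimal sketch using a plain symlink as a stand-in for a DT index reference. DT actually stores aliases, not symlinks, so this is only an analogy; the file names are made up for the demo.

```python
import os
import shutil
import tempfile

# Hypothetical layout: an "indexed" file and a database-side alias to it.
root = tempfile.mkdtemp()
target = os.path.join(root, "docs", "report.txt")
os.makedirs(os.path.dirname(target))
with open(target, "w") as f:
    f.write("indexed content")

link = os.path.join(root, "db", "report-alias")
os.makedirs(os.path.dirname(link))
os.symlink(target, link)          # the "index" stores a path, not content

print(os.path.exists(link))       # True: the target is where the link expects
shutil.rmtree(os.path.dirname(target))  # simulate moving to another machine
print(os.path.exists(link))       # False: the alias no longer resolves...
print(os.path.lexists(link))      # True: ...though the alias itself survives
```

This is why an archive of an indexed database only works on another machine if the identical folder structure (or a shared server path) exists there too.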
For now, maybe only with something like what korm suggested.
What's desirable is an option, when creating a backup/archive, for copying DT-indexed files, similar to this one for the tar command:
    -L      (c and r mode only) All symbolic links will be followed.
            Normally, symbolic links are archived as such. With this
            option, the target of the link will be archived instead.
Replace “symbolic links” with “index references”.
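As a sketch of what such an option could do, assuming the indexed items behave like symlinks (DT indexes are not literally symlinks, so this is only an illustration of the tar -L idea), one could walk the folder and zip it, letting each write read through the links so the targets' content lands in the archive:

```python
import os
import zipfile

def archive_following_links(folder, zip_path):
    # Analogue of tar's -L/--dereference: archive the *target* of each
    # link rather than the link itself. ZipFile.write opens the file,
    # so it reads through symlinks and stores real content.
    with zipfile.ZipFile(zip_path, "w") as zf:
        # followlinks=True also descends into symlinked directories
        for dirpath, _dirs, files in os.walk(folder, followlinks=True):
            for name in files:
                full = os.path.join(dirpath, name)
                zf.write(full, os.path.relpath(full, folder))
```

Given a folder of "indexed" links as input, the resulting zip contains self-contained copies of the targets and is fully portable.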
I don't know if it's currently possible to do that (via AppleScript) or if something new would have to be implemented to support it.
Personally, I've never been comfortable with indexing files in my databases except for a few special cases. There are still too many special considerations I'd rather not be constantly attentive to. One particular change in v2 is that renaming an indexed document also renames the target file, and the possibility of temporarily forgetting that, with irrecoverable negative consequences, makes me uneasy. Or how indexing can make it challenging to create a totally portable backup/archive like the one being discussed in this thread. My hunch is that some (even many?) users aren't aware of these issues when choosing to use indexing, since they're not explicitly documented (AFAIK).
Maybe I have an easy solution for this!
Why not keep a disk image on an external hard drive, say 50 GB in size, and create all your databases inside it?
After this, move/copy the files you want into a folder in the disk image, and then create as many DT databases as you want so you have some structure for the files.
Then index all your files from the folder(s) in the disk image into the DT databases you created before, grouping different content into different databases.
Voila, now you can take your external hard drive with you and always have all your content wherever you go.
I must say I really love indexing all my files into my DT databases. Many of my files may not have much value for me at the moment, so rather than waste space in a DT database by importing them, I'd rather index them. Then, if I find something among the indexed files that I really want to edit, or want the real file instead of an alias, I simply select the file(s), choose Export > Files and Folders, and save them into a folder in the disk image, from where I can later import them into a DT database.
Sure, a drive’s a drive. A thumb drive could do the same.
Copy, not move. Unless you don’t care about losing your stuff.
Here's a twist. Say you have data and a database on a thumb drive, and you index the data on that drive. In the same database you can also index the master folder(s) on your home drive - the place(s) the copied data came from. When the thumb drive database is connected to that machine, you will have access to those files; when it isn't, you'll get an error. Why do this? So you can have a subset of files in the portable database without having to carry around the whole thing. But the database structure stays constant.