database structure, DT 2.0, and online backup

No, this isn’t a feature-bloat request to add online back-ups to DT. Relax. :wink:

With my digital research materials increasing exponentially, I’ve been well advised to start a healthy backup regimen. What I’ve personally decided upon is:

  1. Full clone backup to external hard-drive (using SuperDuper!, Carbon Copy Cloner, Synk, or the like)

  2. Burn irreplaceable files & large files to DVD (research, music, etc.)

  3. Upload my most important project-sensitive research files to an online backup server (probably will use Amazon’s S3 + Interarchy)

So, what does this have to do with DT?

DT’s current database structure makes it very hard to incrementally back up only those files that have changed since the last backup, no matter which form of backup is used. While this isn’t a big deal when backing up to an external hard drive, it can make a big difference in time when uploading to an online backup service, given upload speeds.

For example, if I back up/upload a normal Finder folder with 10 files and then modify one of those files, only that one file will be uploaded on the next backup/upload, which saves time and bandwidth (and thus $). See the sketch below for the idea.
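Here is a minimal sketch of how an incremental uploader might decide what to re-send, just to make the comparison concrete. The folder path, manifest file, and `upload_to_s3` call are hypothetical stand-ins; real tools (Interarchy with S3, rsync, and so on) keep similar bookkeeping internally.

```python
import hashlib
import json
import os

MANIFEST = "backup_manifest.json"    # hypothetical record of the previous upload
BACKUP_DIR = "/Users/me/Research"    # hypothetical folder being backed up

def file_hash(path):
    """Checksum a file so we can tell whether it changed since the last run."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def changed_files():
    """Return only the files whose contents differ from the previous manifest."""
    try:
        with open(MANIFEST) as f:
            previous = json.load(f)
    except FileNotFoundError:
        previous = {}

    current, to_upload = {}, []
    for root, _, names in os.walk(BACKUP_DIR):
        for name in names:
            path = os.path.join(root, name)
            digest = file_hash(path)
            current[path] = digest
            if previous.get(path) != digest:
                to_upload.append(path)   # new or modified since the last backup

    with open(MANIFEST, "w") as f:
        json.dump(current, f)
    return to_upload

# Only the one modified file out of the ten gets re-sent:
# for path in changed_files():
#     upload_to_s3(path)   # hypothetical upload call
```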

Due to DT’s current package & database structure, if I make even the slightest modification to even a text file within the database, all ten of the .database files inside my DT database package need to be re-uploaded. True, it doesn’t re-upload all the files in the “Files” folder within the database, but it still takes significantly longer. And when your database is 1 GB (as a couple of mine are), this could be a problem, since my research virtually LIVES in DT.
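If you want to see this for yourself, a rough sketch like the one below lists every file inside the database package that was touched since a given point in time; the package path and the backup timestamp are hypothetical placeholders, and the file names inside the package will vary.

```python
import os
import time

# Hypothetical path to a DT database package; adjust to your own setup.
DB_PACKAGE = "/Users/me/Research.dtBase"
LAST_BACKUP = time.time() - 24 * 3600   # e.g. the time of yesterday's upload

# Print every file inside the package modified since the last backup.
# In my case, touching one text file marks all ten .database files as changed,
# so an incremental uploader has to re-send them all.
for root, _, names in os.walk(DB_PACKAGE):
    for name in names:
        path = os.path.join(root, name)
        if os.path.getmtime(path) > LAST_BACKUP:
            print(path)
```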

I’m getting the sense that, as online storage prices continue to drop (and online storage of up to 5 MB even becomes free), and as more and more people start to back up online (as seems to be the trend), how efficiently a piece of software allows its data to be backed up will become more and more crucial.

So, this is all just a very long-winded way of asking: will the (long-awaited) upcoming changes to the database structure in DT 2.0 change this? Will the promised open folder system mean that changes made within DT can be backed up/synchronized incrementally, more quickly and efficiently?

Depending on the synchronization process, V2 will improve things a lot (if the synchronization copies only new/modified contents inside the package) or not at all (if the synchronization copies the complete package).

The synchronization/backup process only updates those files which have changed; it actually looks inside the package. However, as DT currently stands, the slightest change to a DT database requires that all ten of the .db files within DT’s package structure get updated every time, whether I just add a single word to a single text file or do something much more substantial. I didn’t notice this problem with other programs’ packages that use a more transparent packaging system.

Well, continuing to look forward to 2.0!

Hi,
moreover, the database structure will become problematic when using Time Machine.
IMHO the database file system should be redesigned for use under 10.5.
Regards