Running DTPO w/remote storage (vs. sync)

Has anyone tried running DTPO with its data files (both the DTbase itself and the document library) stored remotely but accessed through a mounted network filesystem? I’m thinking of something like WebDAV, SMB, or NFS (either through a VPN/SSH tunnel or directly over the net).

Is it sane to consider doing this? I know concurrent access is a major no-no (and is hopefully prevented by the lock file), but it occurs to me that there could be filesystem-related problems with using remote storage actively (vs. cold storage of files for backup and/or sync): the DTbase and files would probably sit on an ext3 partition, DTPO may assume they’re on an HFS partition of some kind, and meanwhile there’s a network filesystem abstracting things in the middle.
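One concrete mismatch between ext3 and a default HFS+ volume is case sensitivity, and that’s easy to probe before trusting a mount with a real database. Below is a minimal sketch (the mount point path is just a placeholder — point it at your actual share) that creates two files differing only in case and counts what the filesystem actually kept:

```shell
# Probe whether a volume is case-sensitive (ext3-style) or
# case-insensitive (default HFS+-style). Pass the mount point as $1;
# defaults to /tmp purely for illustration.
probe_dir="${1:-/tmp}/dtpo-fs-probe.$$"
mkdir -p "$probe_dir"
touch "$probe_dir/Test" "$probe_dir/test"
count=$(ls -1 "$probe_dir" | wc -l)
if [ "$count" -eq 2 ]; then
    echo "case-sensitive (ext3-style)"
else
    echo "case-insensitive (HFS+-style)"
fi
rm -rf "$probe_dir"
```

On a case-sensitive server filesystem, two documents whose names differ only in case would survive, which a client app expecting HFS+ semantics might not anticipate — worth checking on a throwaway directory before committing a database to the share.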

I would certainly play with a test database before doing anything very important this way, and I know database operations may be crippled by latency and limited effective I/O rates over the network FS, but it seems like an interesting idea.

Any thoughts?

I’ve got a few less frequently used databases on an AFP share here at home. It’s pretty slow over wifi from my laptop, but just fine on my desktop connected via gigabit ethernet. I haven’t experienced any problems.

Just a warning due to a recent support enquiry: Ensure that the server won’t sleep/restart/shut down while you’re using databases located on the server.