I’m concerned about data loss and ghost files. I know they’re rare, but that doesn’t help the few who get hit with the problem.
First off, what are the circumstances where people are seeing lost-data errors? My impression is that the problem comes up with indexed files, not files stored inside the database, and that it possibly only happens when using DT for Mac in conjunction with DTTG. Although I have seen a report of one instance where DTTG was not being used, that person did use it in the past, so perhaps the problem is a legacy one?
If I understand correctly, running “Verify & Repair Database” and “Check File Integrity” regularly can help prevent, mitigate, or at least identify some of these problems. So I did so this morning on all my databases. However, on one of my databases I got the following error message, and wouldn’t you know it, it’s the most important one I have: “File doesn’t have a checksum yet.”
I assume this error means DT can’t check the integrity of the file yet, but expects to be able to do so once it generates a checksum. When can we expect that to happen? I have run the integrity check several times, and the message persists.
The files themselves seem to be fine.
Should I be concerned here? How do I make this error stop occurring?