I have a recurring error that appears to be caused by a memory leak. I have indexed a directory of 9,500 PDF files, of which roughly 950 have no text layer, and I use a smart group to select those files. When I select a small number (say 5) and OCR them to searchable PDFs, everything works the first time. When I repeat the process, however, it eventually fails with the error shown above. Freeing memory between batches lets it run further, but the same error eventually recurs until every OCR job fails. Quitting and relaunching DT3 just brings me back to the beginning: the first conversion batches work, but eventually DT3 reports the same error again. I would like to select all the documents and process them as one batch.
Hardware: 2019 MBP, 16 GB RAM, 400 GB free of a 2 TB SSD.
Software: DT 3.8 Pro Edition; deleted and re-installed the ABBYY engine as suggested in this forum (for the "licenses:0" error).
Multiple small batches also trigger the error. If I monitor memory, free memory decreases with the number of documents processed, whether they are run in small batches or one large one.
The bug appears to be in the FineReader engine, and support has opened a ticket with them. In the meantime, OCR works if you run single OCR jobs, no batches. Hardly fun with 950 documents. It does give me a chance to add tags, though.
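For anyone who doesn't want to click through 950 single jobs by hand, here is a rough sketch of the same "one job at a time" workaround done outside DT3, using the open-source ocrmypdf tool. This is not DEVONthink's OCR; the idea is just that each document runs in its own process, so any memory the engine leaks is reclaimed by the OS when that process exits. The function name and directory path are my own placeholders.

```shell
#!/bin/sh
# Sketch of a per-file OCR loop (assumes `ocrmypdf` is installed, e.g. via
# Homebrew). Each file is OCRed by a separate process, so a leak inside the
# OCR engine cannot accumulate across documents.
ocr_one_by_one() {
    dir="$1"                 # directory containing the PDFs without text
    cmd="${2:-ocrmypdf}"     # OCR command; second arg lets you substitute one
    for f in "$dir"/*.pdf; do
        # --skip-text leaves pages that already have a text layer untouched
        "$cmd" --skip-text "$f" "${f%.pdf}.ocr.pdf"
    done
}

# Example (placeholder path):
# ocr_one_by_one "$HOME/Desktop/needs-ocr"
```

You would then re-import or re-index the `.ocr.pdf` results into DT3. Not elegant, but it keeps each OCR run isolated the way single jobs inside DT3 do.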