Do you store full eBooks in DTP?

I’ve got a bunch of PDF eBooks that are ~400 pages long. I loaded them into DTP, and when one comes up in a search, clicking the item takes several seconds to load. For example, a 414-page, 3.3 MB PDF takes 6 seconds to open on my new MBP with 8 GB of RAM.

Also, the search results don’t seem very useful when the document is a book. DTP tells me that the document matches my query, but doesn’t point me to where in the book the match actually occurs.

So…should I not be storing full books in DEVONthink? Should I cut out snippets instead, or maybe split each book along chapter lines and load those in?

I’ve got some PDFs exceeding 500 pages in length. These open within a second from a view window. From a Search window, it may take several seconds for a PDF to open when selected. The reason is that occurrences of the search term(s) are being identified and highlighted throughout the document, and when it opens it will have scrolled to the first occurrence of a search term. To move to the next highlighted term, press Control-Command-Right Arrow.

There have been a number of discussions on the forum related to your question about splitting large documents. Some users split PDFs, e.g., by chapter or even into individual pages.
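If you’d rather do the splitting outside of DEVONthink, a small script can handle it. Here’s a minimal sketch using the third-party pypdf Python library; the file name and the 50-page chunk size are just placeholders:

```python
# Split a large PDF into fixed-size chunks using pypdf.
from pypdf import PdfReader, PdfWriter

def split_pdf(path, pages_per_chunk=50):
    reader = PdfReader(path)
    total = len(reader.pages)
    for start in range(0, total, pages_per_chunk):
        end = min(start + pages_per_chunk, total)
        writer = PdfWriter()
        # Copy one chunk of pages into a new document.
        for i in range(start, end):
            writer.add_page(reader.pages[i])
        with open(f"{path[:-4]}_pages_{start + 1}-{end}.pdf", "wb") as out:
            writer.write(out)

split_pdf("ebook.pdf")
```

Splitting by chapter works the same way; you’d just pass the chapter page ranges instead of a fixed chunk size.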

I don’t split large documents myself, but will often make searchable rich text notes about sections of special interest, which are linked to a specific page of the PDF using the Page Link.
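For what it’s worth, a Page Link is just the document’s item link with a page parameter appended, so the note stays clickable wherever rich text is rendered. The UUID below is made up for illustration:

```
x-devonthink-item://1A2B3C4D-5E6F-7890-ABCD-EF1234567890?page=41
```

Clicking such a link opens the PDF scrolled to that page.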

If you have used several beta versions of our DEVONthink software, it may make sense to remove the application’s preferences. This fixed a problem I had with displaying large PDF files; the PDFKit we use for rendering may have left some cruft behind from an older version.
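If you try that, quit DEVONthink first and move the preference file aside rather than deleting it, so you can restore it if needed. A minimal sketch, assuming the standard ~/Library/Preferences location; the bundle identifier below is my guess for DEVONthink Pro 2, so check that folder for the exact file name before running:

```python
# Move the DEVONthink Pro preference file aside (keep a backup).
import pathlib

plist = pathlib.Path(
    "~/Library/Preferences/com.devon-technologies.thinkpro2.plist"
).expanduser()
if plist.exists():
    # Rename rather than delete, so the old settings can be restored.
    plist.rename(str(plist) + ".bak")
```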

I have many ebooks, all in PDF format. I reduce their size & OCR them (Adobe Acrobat X Pro) and then dump them into my DT databases, which are split alphabetically (A-D, E-L, etc.). I keep a copy of these PDFs on an external HD, just in case. Some questions/issues:

(1) On one database, the “Properties” panel reports 1.9 GB, 480K unique words, and 37 million total words. I’m guessing that “unique” counts each distinct word once, while the 37M counts every occurrence (for example, ‘history’ would be 1 unique word even if it appears 400 more times). So…which of those numbers should get me thinking about splitting a database? I used to have a single A-M database, but became alarmed about whether DT would crash.

(2) Now, I open up the database and read an ebook in DT. I use Bean as my RTF processor because I find it easy to use, low maintenance, etc. Here’s where it gets a bit tricky/unclear for me. I want to keep my notes with the ebook: should this be one long RTF note, or separate ones? In any event, is there a way to connect/link Bean with DT? If not, no big deal, but if so, I would hate to think I’m making more work for myself.
In fact, I would love a feature where I could use DT’s text editor and have my notes created automatically under the material I’m reading/highlighting (I guess I could create a folder with the book’s name and keep the book & notes in it).

My system: Mac, 2.66 GHz Intel Core 2 Duo / 4 GB memory / 200 GB available
DT Pro 2.1.1

I’m working on a PhD in history and so especially invite the historians to pitch in with ideas (as for nonhistorians…ok, you can pitch in as well! LOL)

I don’t store my eBooks in DTP because, as far as I can tell, they aren’t searchable within it. When I add one, all I get is the blank document-page icon.

Instead, I’ve been converting them to PDF with Calibre before putting them into DTP, which is, shall we say, less than optimal. :confused:
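If it helps, those conversions can at least be batched with Calibre’s ebook-convert command-line tool. A minimal sketch, assuming Calibre is installed, ebook-convert is on your PATH, and ~/eBooks is a placeholder folder:

```python
# Batch-convert ePub files to PDF via Calibre's ebook-convert tool.
import pathlib
import subprocess

for epub in pathlib.Path("~/eBooks").expanduser().glob("*.epub"):
    pdf = epub.with_suffix(".pdf")
    subprocess.run(["ebook-convert", str(epub), str(pdf)], check=True)
```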

I wish DTP made it possible to view, search, & maybe even convert various eBook formats (ePub and Mobi are what I have, anyway) to PDF.

:bulb: I’ll put this request in the “Feedback, Requests & Suggestions” forum.