DT3 JSON Parser

The BibTeX import is very interesting. Is there a way for it not to be read-only? Among other reasons, I cannot change a column to be a URL column and thus activate hyperlinks.

And again - sorry to repeat this over and over - we really need a “wrap text” feature, because it is impossible to read a long text field such as an abstract. Since this is intended for bibliographic entries, that truly is an essential feature.

No, DEVONthink doesn’t support editing or exporting of *.bib files.

Can it simply be converted to a regular Sheet?

Select all rows of the sheet (Cmd-A doesn’t seem to work due to a bug), copy them to the clipboard and press Cmd-N. This should create a new sheet.

Thanks that works.

Also, FWIW, I realized that it is possible to open a link in a “non-URL” column of a read-only BibTeX sheet using Alfred - if installed, it recognizes the text as a link and presents the option to open it:

I am not sure how to reply. Indeed it is neither CSV nor JSON, it is LaTeX. All BibTeX files that you’ll come across on sites like JSTOR, Google Scholar, Google Books and so on are just plain text files written in this way. It is remarkably similar to JSON, which makes me think it shouldn’t be too hard to do the same thing with it that DEVONthink is already doing with the JSON metadata files.
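To make the resemblance concrete, here is a deliberately naive Python sketch that turns a single BibTeX entry into a JSON object. The sample entry and the regexes are mine, and real BibTeX (nested braces, quoted values, @string macros, cross-references) needs a proper parser - this only illustrates why the mapping feels so natural:

```python
import json
import re

# A sample entry of the kind JSTOR or Google Scholar exports.
entry = """@article{smith2020,
  author = {Smith, Jane},
  title = {An Example},
  year = {2020}
}"""

# Naive parse: grab the entry type and key, then each "field = {value}" pair.
m = re.match(r"@(\w+)\{([^,]+),", entry)
fields = dict(re.findall(r"(\w+)\s*=\s*\{([^{}]*)\}", entry))
record = {"type": m.group(1), "key": m.group(2), **fields}

print(json.dumps(record, indent=2))
```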

Could you perhaps outline the steps that would be involved in this? It would help me understand how to attempt it.

Having recently gone through this process moving BibTeX into FileMaker, I’d advise you not to reinvent the wheel. There are lots of BibTeX parsers in different languages out there with mature codebases (10+ years), but by far the easiest route is to use BibDesk to create a CSV export template, then parse that.
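For the "parse that" step, a short sketch of what reading such an export could look like in Python - the column names here ("author", "title", "year") are placeholders for whatever your BibDesk template actually defines:

```python
import csv
import io

# Stand-in for the file BibDesk would export;
# use open("export.csv", newline="") in real use.
export = io.StringIO(
    'author,title,year\n'
    '"Knuth, Donald E.",The TeXbook,1984\n'
)

# DictReader maps each row to the header names, so columns can be
# addressed by name rather than by position.
for row in csv.DictReader(export):
    print(row["author"], "-", row["title"], f'({row["year"]})')
```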


It is remarkably similar to JSON, which makes me think it shouldn’t be too hard to do the same thing with it that DEVONthink is already doing with the JSON metadata files.

True, and here’s a caution:
Don’t let that distract you from the fact that it’s not JSON data. Parsing JSON - or any format - is a specific thing. So a parser isn’t trained (unless you’re writing your own ecumenical parsing engine). It just parses what it is written to parse.

Depending on what you’re trying to accomplish, this may or may not be the case. Also, parsing CSV can be less intuitive than it appears: quoting, embedded delimiters, and line breaks inside fields don’t match the way the data looks on screen.
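A small Python illustration of that last point, with a made-up row: a quoted comma and an embedded newline are perfectly legal CSV, but they break naive line-and-comma splitting, so a real CSV reader is worth using even for "simple" files:

```python
import csv
import io

# A row whose title field contains a comma and a newline - legal CSV.
data = 'author,title,year\n"Smith, Jane","Maps: Theory,\nPractice",2020\n'

# Naive splitting miscounts the fields...
naive = data.splitlines()[1].split(",")
print(len(naive))  # 4 pieces - the quoted comma broke the row

# ...while the csv module respects quoting and embedded newlines.
rows = list(csv.reader(io.StringIO(data)))
print(rows[1])  # ['Smith, Jane', 'Maps: Theory,\nPractice', '2020']
```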

Whether for a *.bib file or for a CSV file exported e.g. by BibDesk, the necessary steps would look like this:

tell application id "DNtp"
	-- Import the chosen sheet, read its column names and cell rows,
	-- then look up the "author" column by position.
	set theFile to choose file of type {"bib", "csv"}
	set theItem to import (POSIX path of theFile) to current group
	set theColumns to columns of theItem
	set theCells to cells of theItem
	set theAuthorColumn to my list_position("author", theColumns)
	repeat with theRow in theCells
		-- Guard against a missing "author" column or rows that are too short.
		if theAuthorColumn > 0 and (count of theRow) ≥ theAuthorColumn then
			set theAuthor to item theAuthorColumn of theRow
			display dialog theAuthor
		end if
	end repeat
	-- Remove the temporary import again.
	delete record theItem
end tell

-- Returns the 1-based position of this_item in this_list, or 0 if it is absent.
on list_position(this_item, this_list)
	repeat with i from 1 to the count of this_list
		if item i of this_list is this_item then return i
	end repeat
	return 0
end list_position

True. I haven’t experimented with CSV import into DT3, as I’m happy with the combo of BibDesk on Mac and FileMaker Go on iOS for the time being.

My point, though, was that it will almost certainly be easier to use an existing parser to get the data into a form DT3 can read natively than to try to modify a JSON parser to work with BibTeX.

I don’t use BibDesk but I just ran into this: https://bibdesk.sourceforge.io/manual/BibDeskHelp_80.html#SEC144


Yeah, BibDesk’s AppleScript support is pretty thorough and well-documented. It also supports script hooks tied to particular actions, so e.g. it could export a corresponding CSV automatically every time a new publication is added.

See also https://sourceforge.net/p/bibdesk/wiki/BibDesk_Applescripts/

Instead of exporting CSV, the values @Bernardo_V was referring to could be retrieved and used directly.

True. Here’s an example I cooked up. What a pleasure it is to use two programs which both have robust AppleScript support.

What a pleasure it is to use two programs which both have robust AppleScript support.

Indeed! I’ve always felt that Mac apps with robust AppleScript support were just that extra cut above the others. :slight_smile:

The downside of using AppleScript is that the script imports maybe 15 records per minute, whereas exporting from BibDesk to CSV and importing that CSV into FileMaker is done in a few seconds, even with a 7,000-entry .bib file.

The downside of using AppleScript is that the script imports maybe 15 records per minute… importing that CSV to FileMaker is done in a few seconds,

This is not a 1:1 comparison. Not only are they different apps, they are doing two very different operations.

Importing a 5.4MB CSV file into DEVONthink is no slower than FileMaker. In fact, I just imported a CSV file with 36,634 entries and 18 columns in two seconds. That’s 659,412 cells of data in two seconds.

However, that is also a pointless comparison as the topic at the moment was using AppleScript to create individual records, not just importing a file.

Yes, of course; I didn’t intend it to be, nor did I mean to slight DT in any way. I’m sure if I used AppleScript to pull the data into FM directly from BD, the performance would be similarly slow. Unfortunately it’s a downside of AppleScript - I’ve had this experience before of working to get a script to do exactly what I need, then realizing it’s going to be too slow to be a viable solution.

No slight detected :slight_smile:

And I was actually quite proud to post the numbers I did. 36,000+ entries isn’t enormous, but that’s still a good amount of data and I was pleased with DEVONthink’s performance :smiley: