OpenAI ChatGPT for automatic generation of matching filenames

Follow-up to Experimenting with OpenAI API for automatic classification and renaming

I refined the AppleScript to get rid of the classification server and just use the openai CLI directly. With a bit of prompt tuning I got amazing results. For example, turning:

into

Smart rule available here:

To install, first set up the openai CLI:

  1. Run `pip3 install openai`
  2. Run `which openai` to get the full path of the executable
  3. Copy that path and replace `/opt/homebrew/bin/openai` in the script if yours is different
  4. Update `set OPENAI_API_KEY to "xxx"` with your own OpenAI key

To use GPT-3.5-turbo instead of GPT-4, change `-m gpt-4` in `/opt/homebrew/bin/openai api chat_completions.create -m gpt-4` to `-m gpt-3.5-turbo`, or to another model with a larger context window.

You can also instruct it to extract additional information such as city names, dates, and personal names, and put them into the filename as well. For example, one prompt adjustment that gave me very good results will correctly create filenames like `2023-07 XXX Hotel Booking Confirmation 07/22 -> 07/24`:

    set currentDate to text 1 thru 7 of (do shell script "date +'%Y-%m'")

    set theCommand to "OPENAI_API_KEY='" & OPENAI_API_KEY & "' /opt/homebrew/bin/openai api chat_completions.create -m gpt-4 -g system \"You are a program designed to generate filenames that could match the given text. Output exactly 1 filename that could fit the content and nothing else. Include a date in the format yyyy-mm at the beginning of the filename if present in the content, otherwise use the current date, which is " & currentDate & ". Don't output a file extension, separate words by space. No preamble, no extra output, only output the filename. Keep it concise. If the file is a booking confirmation, include relevant city information and booking dates (mm/dd format, no year) (with -> arrow). For bus/flight tickets, include departure -> destination. For airports, only airport code, NO city name.\" -g user \"$(cat " & posixtmpfile & ")\""
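Once the AppleScript concatenation is expanded, the shell command it runs looks roughly like this. This is just a sketch: the system prompt is abbreviated with `...`, and `$TMPFILE` stands in for the file behind `posixtmpfile`.

```shell
# currentDate is yyyy-mm; `date +'%Y-%m'` already yields exactly 7 characters,
# so AppleScript's `text 1 thru 7` mainly guards against stray trailing output
currentDate=$(date +'%Y-%m')

# Sketch of the assembled command (system prompt abbreviated; not executed here)
cmd="OPENAI_API_KEY='xxx' /opt/homebrew/bin/openai api chat_completions.create \
 -m gpt-4 -g system \"... current date, which is ${currentDate}. ...\" \
 -g user \"\$(cat \$TMPFILE)\""
echo "$cmd"
```

Note that the whole user message is just the extracted document text read back from the temp file.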

Prompt only for better readability:

You are a program designed to generate filenames that could match the given text. Output exactly 1 filename that could fit the content and nothing else. Include a date in the format yyyy-mm at the beginning of the filename if present in the content, otherwise use the current date, which is " & currentDate & ". Don’t output a file extension, separate words by space. No preamble, no extra output, only output the filename. Keep it concise. If the file is a booking confirmation, include relevant city information and booking dates (mm/dd format, no year) (with → arrow). For bus/flight tickets, include departure → destination. For airports, only airport code, NO city name.

(Obviously, the more complex the prompt, the more GPT-3.5 will choke on it; use GPT-4 for complex instructions.)

I’ve also experimented heavily with using Llama 2 for this task so it doesn’t need to go to OpenAI’s servers, but the results were pretty poor for the 7b and 13b models. I’ll create a separate thread about it so we can tinker together and maybe turn it into something useful :slight_smile:

Better would be `set c to characters 1 thru 8000 of c`. Anyway, one frequent issue with ChatGPT is that the results get translated into English (even when instructed to use the original language). Not ideal for international customers.
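A shell-side sketch of the same truncation idea, for anyone piping the extracted text through a temp file instead (note that `head -c` counts bytes, not characters, so it can split a multibyte UTF-8 sequence; fine for ASCII):

```shell
# Build a 10,000-character sample file, then keep only the first 8,000 bytes
# so the text fits the model's context window
printf 'a%.0s' $(seq 1 10000) > /tmp/sample.txt
truncated=$(head -c 8000 /tmp/sample.txt)
echo "${#truncated}"
```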

Same poor and inconsistent results over here. Not to mention that performance is painfully slow, even on an M1 Ultra with Metal support enabled.

With GPT-4 at least, when instructed in the system prompt with something like “use the language of the content for the language of the filename”, it handles this correctly. I didn’t do too many tests, though, because I like to have all my stuff in English (and Japanese).

Have you tinkered with different quantizations? I think the Llama 2 chat models are always going to give very poor results for this; it’s better to use the plain text (or instruct) models, but even then I just couldn’t get it to reliably do what I wanted…

Anyway, I’ll create a new thread with the scripts that I used.

But just using ChatGPT instead of Llama gave a million times better results with so little prompt tinkering that I kinda just wanna roll with it for now haha

Another case I would looove to try sometime is automatic grouping and classification: find something common in all the documents and then sort them into logical groups, similar to the auto-group feature DT had in the past.

I was thinking of multiple passes for this, like:

  1. Generate a short summary, or a bunch of keywords and store it as annotation
  2. Chain all the short summaries/keywords together
  3. Send to ChatGPT for analysis
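The passes above could be wired together roughly like this. `summarize` is just a placeholder here (a real version would call the openai CLI the same way as the renaming script), and the file names are made up for illustration:

```shell
#!/bin/sh
# Pass 1: one short annotation per document. summarize() is a stand-in;
# swap in an openai CLI call (as in the renaming script) for real use.
summarize() { head -c 200 "$1"; }

mkdir -p docs annotations
printf 'Hotel booking confirmation, Tokyo, 07/22 -> 07/24\n' > docs/a.txt
printf 'Bus ticket, Kyoto -> Osaka\n' > docs/b.txt

for f in docs/*.txt; do
  summarize "$f" > "annotations/$(basename "$f" .txt).summary"
done

# Pass 2: chain all summaries into one compact blob
cat annotations/*.summary > all_summaries.txt

# Pass 3: all_summaries.txt would now be sent to ChatGPT with a prompt
# like "group these documents into logical groups"
cat all_summaries.txt
```

Keeping the per-document annotations around also means pass 1 only has to run once per file, even if the grouping prompt changes later.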

The problem is that the context window is just way too small for bigger text blobs, and especially on GPT-4 that gets very expensive. The 3.5-16k variant is better but still expensive just for grouping some files, so I’d need a couple of passes to make the input as compact as possible first :thinking:
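To make the cost concern concrete, here is a back-of-the-envelope sketch. Both numbers are my assumptions: the ~4 characters per token rule of thumb, and the $0.03 per 1K prompt tokens figure for GPT-4 based on 2023 list prices.

```shell
chars=4000000                       # e.g. 1,000 documents of ~4,000 characters each
tokens=$((chars / 4))               # rough rule of thumb: ~4 characters per token
cost_cents=$((tokens * 3 / 1000))   # $0.03 per 1K prompt tokens = 3 cents per 1K
echo "~${tokens} prompt tokens, roughly \$$((cost_cents / 100)) before completion tokens"
```

And that is per pass over the corpus, which is why compacting the input first matters so much.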

If Llama 2 can be fine-tuned for document naming and grouping (which I’m sure it can), that would be perfect: no more worrying about cost.

Well, size isn’t everything, but Llama 2 has 7–70 billion parameters whereas GPT-4 reportedly has 1.8 trillion and GPT-3.5-turbo still 175 billion. So no major surprise here.

That’s what I actually do, and it usually works as expected, but not always. Maybe the input is too short, or the system is confused by the English instructions in these cases.

That’s an often-underestimated issue with GPT-4 when people have batch processing in mind. E.g. just tagging or renaming all the documents in my databases would cost thousands of dollars using GPT-4, and even more using GPT-4-32k. And there are users with hundreds of times more data. GPT-3.5-Turbo is much more affordable (and faster).

Great work! Thank you very much for this very useful script!
