Poor tag quality using AI

I have an HTML text document that I want to add tags to.

I used the Data > Tags > Add Chat Suggestions to Document option to do so.

Initially I got no response whatsoever, but through trial and error I figured out that the document has to be highlighted in the document selector; it isn’t enough that the document is merely visible.

Both the title and the opening header of the document are “How to Use Affinity Photo for Focus Stacking”. So the document is about focus stacking, and when I search for that term, it occurs 17 times in the document.

The tags suggested by the above-mentioned command are:

  • Landscape photography
  • Macro photography
  • photography
  • Programming
  • Real estate photography

This is not what I would expect. It lacks the major topic, Focus Stacking. The document references landscape photography and macro photography only as use cases that benefit from focus stacking, and Programming is not even mentioned in the document.

In contrast, I have a Keyboard Maestro macro that sends the following instruction to gpt-4o-mini:

Please analyze the following text and provide 6 tags separated by commas. Output only the tags and nothing else. If there are any errors, start with "!!!"
The tags should reflect the major topics.

Using this approach I get the following tags:

  • Affinity Photo
  • Focus Stacking
  • Landscape Photography
  • Macro Photography
  • Photography Techniques
  • Post-Processing

For me, that is a better result.
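For anyone who wants to try the same approach outside Keyboard Maestro, here is a minimal sketch of sending that prompt to gpt-4o-mini with the official OpenAI Python client. Only the model name and the prompt come from my macro; the file name and the rest of the code are purely illustrative.

```python
# Hypothetical sketch: the same tagging prompt sent to gpt-4o-mini
# via the official OpenAI Python client (pip install openai).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def suggest_tags(text: str) -> str:
    prompt = (
        "Please analyze the following text and provide 6 tags separated "
        "by commas. Output only the tags and nothing else. If there are "
        'any errors, start with "!!!"\n'
        "The tags should reflect the major topics.\n\n" + text
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


# Illustrative file name; substitute your own document.
tags = suggest_tags(open("focus-stacking.html").read())
print(tags)  # e.g. "Affinity Photo, Focus Stacking, ..."
```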

So I checked my settings in DT, and noticed that the prompt used by DT is the following:

Make a summary of %@ with the first subtitle pointing out the main points addressed, the second subtitle explaining the main terms used, the third subtitle explaining complex concepts in simple terms and the fourth subtitle giving an example in everyday life of how the subject can be applied.

I changed that to the prompt I used in Keyboard Maestro and, voilà, the output of DT is much improved, i.e. identical to my Keyboard Maestro output.

I don’t remember where the original prompt used by DT came from, but I think it is the out-of-the-box prompt.

I wanted to share my experience: if you have the same prompt as I did, you could use this to your benefit.

Which model did you use? This can make a huge difference.

This prompt is only used for custom summaries, e.g. by Edit > Summarize via Chat…, but not for tagging.

I have GPT-5 (mini) selected under Summarization and ChatGPT under Chat.

Strange. I just changed it back and indeed it doesn’t influence the result. It is still good. I am puzzled.

The results of small or inexpensive models, especially, can be more random.

Also, you should not expect to get the same results each time you run the command… or ask AI a question. It’s not a static library of information; it’s dynamically building a response to the query on the fly. While the results here vary only slightly (and this is also a simple document), they do vary.

Three runs of GPT-5 Nano: [screenshots]

Gemini Flash Lite: [screenshot]

Claude 3.5 Haiku: [screenshot]

Llama 3.2 3B: [screenshot]

Logically, the variation would be more dramatic with longer, more complex documents, e.g., legal briefs, medical papers, etc. But it’s something to be aware of, especially if you are very specific about your tagging strategy.
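To make the variability concrete, here is a hypothetical sketch that sends the same tagging prompt three times. With default sampling settings, the returned tags will usually differ slightly between runs; the temperature parameter controls how random the sampling is (lower values reduce, but don’t fully eliminate, run-to-run variation).

```python
# Hypothetical sketch of the run-to-run variation described above:
# the same prompt, sent three times, will usually not return
# identical tags with default sampling settings.
from openai import OpenAI

client = OpenAI()
prompt = "Provide 6 tags for the following text, separated by commas: ..."

for run in range(3):
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.2,  # lower = less variation between runs
    )
    print(f"Run {run + 1}: {response.choices[0].message.content}")
```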


Thanks. I’m still learning the basics of AI together with DT.

You’re welcome. Do note this is just how AI works, in general. Yes, ChatGPT lets you store “memories”, if you feel like giving them more of your data, but out of the box, AI is just making things up every time you ask it a question.
