Why doesn’t OCR use AI to choose the most likely words?

Anyone happen to know why OCR will very often choose impossible spellings of non-existent words over more likely spellings? Shouldn’t it use AI to help it figure out what a piece of text is likely to be?

I.e., instead of “reading” a piece of text as “.eo Stein, Ihe Art in Painting,” why doesn’t it use AI, and the vast store of previously written English texts, to read it as “Leo Stein, The Art in Painting”?
OCR is already making “guesses” at which letterforms to read from the patterns on the page, so you’d think it would use AI to inform those guesses about which letterforms are more likely to be correct, given the surrounding context of the word and sentence… and the nature of previously encountered English texts!
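
To make concrete the sort of thing I mean, here is a toy sketch (purely illustrative; no claim that ABBYY or any real OCR engine works this way) that re-ranks an OCR’d word using a made-up letterform confusion table and word-frequency list:

```python
# Toy illustration of contextual OCR correction (not any real engine's method):
# generate variants of a word from common letterform confusions and keep
# whichever variant is the most frequent real English word.

# Hypothetical mini frequency table; a real one would come from a large corpus.
WORD_FREQ = {"the": 1_000_000, "in": 900_000, "art": 50_000,
             "painting": 20_000, "leo": 8_000, "stein": 3_000}

# Visually similar characters that OCR often confuses (illustrative, not exhaustive).
CONFUSIONS = {"I": ["T", "l", "1"], "l": ["I", "1"], "0": ["O"], "5": ["S"]}

def candidates(word):
    """Yield the word itself plus single-character substitutions from CONFUSIONS."""
    yield word
    for i, ch in enumerate(word):
        for repl in CONFUSIONS.get(ch, []):
            yield word[:i] + repl + word[i + 1:]

def correct(word):
    """Return the candidate with the highest corpus frequency, or the original word."""
    best = max(candidates(word), key=lambda w: WORD_FREQ.get(w.lower(), 0))
    return best if WORD_FREQ.get(best.lower(), 0) > 0 else word

if __name__ == "__main__":
    print([correct(w) for w in "Ihe Art in Painting".split()])
    # -> ['The', 'Art', 'in', 'Painting']
```

A real engine would of course work with character-level confidence scores and a proper language model rather than a hand-made table, but the principle is the same: let the statistics of existing English text break ties between visually similar readings.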

Sorry if this sounds like idle curiosity on my part, but I am always deeply frustrated at the many errors that are STILL present in OCR’d docs, a decade after I started OCR’ing, so I’d like to at least understand why those errors “have” to be there.

  1. We don’t control the OCR. It is written by a well-known third-party developer, and its results are considered very accurate in this space.
  2. OCR is not, and never has been, a 100% accurate process. If you’re in the mid-90s, accuracy-wise, that’s very good.
  3. The variables involved in OCR are many, including different fonts, lighting of the original, contrast of the image, etc.

As to “Why not AI?”, that would be a question for the OCR developer. The level of accuracy you’re expecting would likely add more overhead to the process and increase the cost to the consumer.

That’s basically a question for Abbyy, which develops the OCR engine. As far as I know, the engine does use AI, but the results can vary, of course; poor scans in particular can reduce the quality of the output.