With an LLM integrated into the IDE, like Cursor or Copilot, the model oftentimes autocompletes the correct code faster than I can think about what needs to be done next. I’ve been coding for 15 years.
Huh, I tried with the version from pip (instead of my package manager) and it completes in 22s. Output on the only page I tested is considerably worse than Tesseract's, particularly with punctuation. The paragraph detection seemed not to work at all, rendering the entire thing on a single line.
Even worse for my uses, Tesseract had two mistakes on this page (part of why I picked it), and neither of them was read correctly by EasyOCR.
Partial list of mistakes:
1. Missed several full-stops at the end of sentences
2. Rendered two full-stops as colons
3. Rendered two commas as semicolons
4. Misrendered every single em-dash in various ways (e.g. "_~")
5. Missed 4 double-quotes
6. Missed 3 apostrophes, including rendering "I'll" as "Il"
7. All 5 exclamation points were rendered as a lowercase-ell ("l"). Tesseract got 4 correct and missed one.
Yes, I imagine it's using the same OCR model as the iPhone, which is really incredibly good. In fact, it's so good that I made a little app for fun just to be able to use it for OCRing whole PDF books:
Interesting! I’ll give it a try; I have a couple of large books to OCR (to be honest, the name in all caps with underscores is not really encouraging).
In your experience, how does the OCR engine handle multi-column documents?
The iOS app would likely not handle two-column text very well. I really made the iOS app on a lark for personal use; the whole thing took like 2 hours, and I'd never even made a Swift or iOS app before. It actually took longer to submit it to the App Store than it did to create it from scratch, because all the hard stuff in the app uses built-in iOS APIs for file loading, PDF reading, screenshot extraction, OCR, NLP for sentence splitting, and sharing the output.
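For anyone curious what that looks like with the built-in APIs, here's a minimal sketch using PDFKit plus Vision's VNRecognizeTextRequest: a plausible reconstruction rather than the app's exact code, and the function name and 2x scale factor are made up.

```swift
import PDFKit
import UIKit
import Vision

/// Render each PDF page to an image and run Apple's on-device OCR over it.
/// Sketch only: error handling, threading, and progress reporting are omitted.
func recognizeText(inPDFAt url: URL) -> [String] {
    guard let document = PDFDocument(url: url) else { return [] }
    var pages: [String] = []

    for index in 0..<document.pageCount {
        guard let page = document.page(at: index) else { continue }

        // Rasterize the page; 2x the media box is an arbitrary quality/speed tradeoff.
        let bounds = page.bounds(for: .mediaBox)
        let size = CGSize(width: bounds.width * 2, height: bounds.height * 2)
        guard let cgImage = page.thumbnail(of: size, for: .mediaBox).cgImage else { continue }

        // .accurate is the slower, higher-quality recognition path.
        let request = VNRecognizeTextRequest()
        request.recognitionLevel = .accurate
        request.usesLanguageCorrection = true

        let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
        try? handler.perform([request])

        let lines = (request.results ?? []).compactMap { $0.topCandidates(1).first?.string }
        pages.append(lines.joined(separator: "\n"))
    }
    return pages
}
```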
I think the project I submitted here would do that better, particularly if you revised the first prompt to include an instruction about handling two-column text (like "Attempt to determine if the extracted text actually came from two columns of original text; if so, reformat accordingly.")
The beauty of this kind of prompt engineering code is that you can literally change how the program works just by editing the text in the prompt templates!
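As a made-up illustration of what I mean (this is not the project's actual prompt; the wording and the {ocr_text} placeholder are invented), the two-column handling really is just one more sentence in a template string:

```swift
// Hypothetical prompt template, not the submitted project's actual wording.
// {ocr_text} is assumed to be substituted with the raw OCR output before sending.
let extractionPrompt = """
You are cleaning up raw OCR output. Fix obvious recognition errors, restore
paragraph breaks, and otherwise preserve the original wording exactly.
Attempt to determine if the extracted text actually came from two columns of
original text; if so, reformat accordingly.

OCR output:
{ocr_text}
"""
```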
They're just a new Swift-only interface to the same underlying behaviors, with no apparent improvement. I was hoping for more given the visionOS launch, but alas.
What I'm trying now is combining ML Kit v2 with Live Text: Apple's engine for the accurate paragraphs of text, then custom indexing of that against the ML Kit v2 output to add bounding rects, plus guessed corrections for the parts ML Kit missed or misidentified (I'm using ML Kit only for its bounding rects and expecting it to make mistakes on the text recognition itself).
I also investigated private APIs for extracting rects from Live Text. It looks possible; the APIs are there (there are methods or properties that expose bounding rects, as is obviously required for Live Text to function), but I can't wrap my head around accessing them yet.
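To make the matching step concrete, it's roughly the shape below. MLKitLine is a stand-in struct rather than ML Kit's real API; I'm only assuming each recognized line exposes a string and a frame, and the 0.4 similarity threshold is arbitrary.

```swift
import CoreGraphics
import Foundation

// Stand-in for an ML Kit text line: assumes only that it exposes a string and a frame.
struct MLKitLine {
    let text: String
    let frame: CGRect
}

struct PositionedLine {
    let text: String   // trusted text, from Live Text / Vision
    let frame: CGRect  // trusted geometry, from ML Kit
}

/// Crude character-trigram similarity, just enough to pair up lines that the
/// two recognizers read slightly differently.
func similarity(_ a: String, _ b: String) -> Double {
    func trigrams(_ s: String) -> Set<String> {
        let chars = Array(s.lowercased().filter { !$0.isWhitespace })
        guard chars.count >= 3 else { return [String(chars)] }
        return Set((0...(chars.count - 3)).map { String(chars[$0..<$0 + 3]) })
    }
    let (ta, tb) = (trigrams(a), trigrams(b))
    let union = ta.union(tb)
    guard !union.isEmpty else { return 0 }
    return Double(ta.intersection(tb).count) / Double(union.count)
}

/// For each line Vision recognized, borrow the frame of the closest-matching
/// ML Kit line, ignoring ML Kit's (less trusted) reading of the text itself.
func mergeLines(visionLines: [String], mlKitLines: [MLKitLine]) -> [PositionedLine] {
    visionLines.compactMap { visionText in
        let best = mlKitLines.max { similarity($0.text, visionText) < similarity($1.text, visionText) }
        guard let match = best, similarity(match.text, visionText) > 0.4 else { return nil }
        return PositionedLine(text: visionText, frame: match.frame)
    }
}
```

Trigram matching is crude, but it only has to survive the kind of character-level mistakes ML Kit makes on a line; position could be used as a tiebreaker if two lines read too similarly.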
I feel like text detection is much better covered by the various ML models discussed elsewhere in the comments. Maybe you can combine those with Live Text. I found Tesseract pretty OK for text detection as well, but I don’t know if any of the models are good for vertical text.
Do you think Meta (Instagram) are pushing GIL removal and Cinder for no reason? They clearly have that scale and still benefit from faster single-machine performance.