
It’s just $20; that’s almost free compared to the cost of human labor.


With an LLM integrated into your IDE, like Cursor or Copilot, the LLM often autocompletes the correct code faster than I can think about what needs to be done next. I’ve been coding for 15 years.


It seems to struggle a lot with German text (umlauts, etc.)


Azure Vision OCR is supposed to be the best commercial OCR model right now, and it's really cheap (same price as Google's)
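
If you want to try it, here's a minimal sketch of calling its asynchronous Read endpoint (v3.2 REST; the endpoint and key are placeholders for your own resource):

    import time
    import requests

    # Placeholders: use your own Azure resource endpoint and key.
    AZURE_ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
    AZURE_KEY = "<your-key>"

    def azure_read(image_path: str) -> str:
        with open(image_path, "rb") as f:
            resp = requests.post(
                f"{AZURE_ENDPOINT}/vision/v3.2/read/analyze",
                headers={
                    "Ocp-Apim-Subscription-Key": AZURE_KEY,
                    "Content-Type": "application/octet-stream",
                },
                data=f.read(),
            )
        resp.raise_for_status()
        # The Read API is asynchronous: poll the Operation-Location URL.
        op_url = resp.headers["Operation-Location"]
        while True:
            result = requests.get(
                op_url, headers={"Ocp-Apim-Subscription-Key": AZURE_KEY}
            ).json()
            if result["status"] in ("succeeded", "failed"):
                break
            time.sleep(1)
        lines = []
        for page in result["analyzeResult"]["readResults"]:
            lines.extend(line["text"] for line in page["lines"])
        return "\n".join(lines)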


Note that the tool uploads to and downloads from Google Drive using GCP service account credentials to perform OCR for free.
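
Roughly, the trick looks like this (a sketch assuming a service-account key file and google-api-python-client; not necessarily the tool's exact code):

    from google.oauth2 import service_account
    from googleapiclient.discovery import build
    from googleapiclient.http import MediaFileUpload

    creds = service_account.Credentials.from_service_account_file(
        "service-account.json",  # your GCP service account key
        scopes=["https://www.googleapis.com/auth/drive"],
    )
    drive = build("drive", "v3", credentials=creds)

    def drive_ocr(image_path: str, lang: str = "en") -> str:
        # Uploading an image while requesting a Google Docs MIME type
        # makes Drive run OCR on it during the conversion.
        doc = drive.files().create(
            body={"name": "ocr-tmp",
                  "mimeType": "application/vnd.google-apps.document"},
            media_body=MediaFileUpload(image_path, mimetype="image/png"),
            ocrLanguage=lang,
            fields="id",
        ).execute()
        # Export the converted Doc as plain text, then clean up.
        text = drive.files().export(
            fileId=doc["id"], mimeType="text/plain"
        ).execute()
        drive.files().delete(fileId=doc["id"]).execute()
        return text.decode("utf-8")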


IIRC Tesseract is trained on 300 DPI images
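
So upscaling lower-resolution scans toward 300 DPI before OCR can help. A quick sketch with Pillow and pytesseract (source_dpi is whatever your scanner actually produced):

    from PIL import Image
    import pytesseract

    def ocr_at_300dpi(path: str, source_dpi: int = 150) -> str:
        img = Image.open(path)
        scale = 300 / source_dpi
        if scale > 1:
            # Upscale so the text is at roughly the density
            # Tesseract was trained on.
            img = img.resize(
                (round(img.width * scale), round(img.height * scale)),
                Image.LANCZOS,
            )
        return pytesseract.image_to_string(img)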


Something's wrong with your setup; it should take less than 30 s per page on your hardware.


Huh, I tried with the version from pip (instead of my package manager) and it completes in 22 s. Output on the only page I tested is considerably worse than Tesseract's, particularly with punctuation. Paragraph detection seemed not to work at all, rendering the entire thing on a single line.

Even worse for my uses, Tesseract made two mistakes on this page (part of why I picked it), and neither of them was read correctly by EasyOCR (rough comparison sketch after the list below).

Partial list of mistakes:

1. Missed several full-stops at the end of sentences

2. Rendered two full-stops as colons

3. Rendered two commas as semicolons

4. Misrendered every single em-dash in various ways (e.g. "\_~")

5. Missed 4 double-quotes

6. Missed 3 apostrophes, including rendering "I'll" as "Il"

7. All 5 exclamation points were rendered as a lowercase-ell ("l"). Tesseract got 4 correct and missed one.
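
If anyone wants to reproduce this kind of side-by-side test, it's roughly the following (a sketch; "page.png" is a placeholder for your own scan):

    import time
    import easyocr
    import pytesseract

    reader = easyocr.Reader(["en"])  # first run downloads the model

    def timed(name, fn):
        start = time.perf_counter()
        text = fn()
        print(f"{name}: {time.perf_counter() - start:.1f}s")
        return text

    tess = timed("tesseract",
                 lambda: pytesseract.image_to_string("page.png"))
    # detail=0 returns plain strings; paragraph=True asks EasyOCR to
    # merge lines into paragraphs (the part that fell flat for me).
    easy = timed("easyocr", lambda: "\n".join(
        reader.readtext("page.png", detail=0, paragraph=True)))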


macOS Live Text is incredible. Mac only, though.


Yes, I imagine it's using the same OCR model as the iPhone, which is incredibly good. In fact, it's so good that I made a little app for fun just to be able to use it for OCRing whole PDF books:

https://apps.apple.com/us/app/super-pdf-ocr/id6479674248


Interesting! I’ll give it a try; I have a couple of large books to OCR (to be honest, the name in all caps with underscores is not really encouraging).

From your experience, how does the OCR engine handle multi-column documents?


The iOS app would likely not handle two-column text very well. I really made it on a lark for personal use; the whole thing took about 2 hours, and I'd never even made a Swift or iOS app before. It actually took longer to submit it to the App Store than to build it from scratch, because all the hard parts use built-in iOS APIs for file loading, PDF reading, screenshot extraction, OCR, NLP for sentence splitting, and sharing the output.

I think the project I submitted here would do that better, particularly if you revised the first prompt to include an instruction about handling two-column text (like "Attempt to determine if the extracted text actually came from two columns of original text; if so, reformat accordingly.")

The beauty of this kind of prompt engineering code is that you can literally change how the program works just by editing the text in the prompt templates!
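
For example, something like this (the template name and wording here are hypothetical, not the project's actual prompt):

    # Hypothetical template and name -- not the project's actual prompt.
    FIRST_PASS_PROMPT = """\
    You are an OCR post-processor. Clean up the raw extracted text below.
    Attempt to determine if the extracted text actually came from two
    columns of original text; if so, reorder the lines so each column
    reads top to bottom before merging them into a single flow.

    Raw text:
    {raw_text}
    """

    def build_prompt(raw_text: str) -> str:
        return FIRST_PASS_PROMPT.format(raw_text=raw_text)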


Thanks, I’ll try to play with this. Thanks also for keeping us updated, your work is very interesting!


Sadly no bounding rects


You can get them through the Vision API (Swift/Objective-C/AppleScript)


You’re forgetting about Python and TypeScript/JavaScript: PyObjC for Python, and whatever the equivalent is for TypeScript.
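
A sketch of the PyObjC route (pip install pyobjc-framework-Vision); I haven't verified every detail, but the pattern is:

    import Vision
    from Foundation import NSURL

    def ocr_with_rects(image_path: str):
        handler = Vision.VNImageRequestHandler.alloc().initWithURL_options_(
            NSURL.fileURLWithPath_(image_path), None
        )
        request = Vision.VNRecognizeTextRequest.alloc().init()
        request.setRecognitionLevel_(
            Vision.VNRequestTextRecognitionLevelAccurate
        )
        success, error = handler.performRequests_error_([request], None)
        if not success:
            raise RuntimeError(error)
        out = []
        for obs in request.results():
            candidate = obs.topCandidates_(1)[0]
            # boundingBox() is normalized 0-1, origin at bottom-left.
            out.append((candidate.string(), obs.boundingBox()))
        return out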


Yes, but it's relatively shit.

The Vision API can't even read vertical Japanese text


Fair enough. There are some new OCR APIs in the next macOS release. I wonder if the model has been improved.


They're just a new Swift-only interface to the same underlying behavior, with no apparent improvement. I was hoping for more given the visionOS launch, but alas.

What I'm trying now is combining ML Kit v2 with Live Text: Apple's engine for the accurate paragraphs of text, then custom-indexing that against the ML Kit v2 output to add bounding rects and to guess corrections for the parts ML Kit misses or misidentifies (using it only for bounding rects and expecting it to make mistakes in text recognition).
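
The indexing step is roughly this idea (a toy sketch that assumes both engines' outputs have been flattened to simple lists, which glosses over the real geometry):

    from difflib import SequenceMatcher

    def attach_rects(live_lines, mlkit_lines, threshold=0.6):
        """live_lines: list[str] from Live Text (accurate text, no rects).
        mlkit_lines: list[(str, rect)] from ML Kit (rects, shakier text)."""
        matched = []
        for line in live_lines:
            best = max(mlkit_lines,
                       key=lambda m: SequenceMatcher(None, line, m[0]).ratio())
            score = SequenceMatcher(None, line, best[0]).ratio()
            # Keep the Live Text string; trust ML Kit only for the rect.
            matched.append((line, best[1] if score >= threshold else None))
        return matched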

I also investigated private APIs for extracting rects from Live Text. It looks possible; the APIs are there (there are methods or properties that expose bounding rects, as the Live Text functionality obviously requires), but I can't wrap my head around accessing them yet.


I feel like text detection is much better covered by the various ML models discussed elsewhere in the comments; maybe you can combine those with Live Text. I found Tesseract pretty OK for text detection as well, but I don't know whether any of those models handle vertical text.


ML Kit v2 handles vertical text better than Tesseract.



> sharing state between threads is such a narrow niche use case

It is the norm. "Kafka scale" problems are not the norm.


Do you think Meta (Instagram) is pushing GIL removal and Cinder for no reason? They clearly have that scale and still benefit from faster single-machine performance.
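
For the record, the workload in question is ordinary shared-state threading. A sketch of CPU-bound work that a free-threaded (no-GIL) build can actually run in parallel (sys._is_gil_enabled() exists on 3.13+):

    import sys
    import threading
    from collections import Counter

    counts = Counter()          # state shared by all worker threads
    lock = threading.Lock()

    def count_words(chunk):
        local = Counter(chunk)
        with lock:  # no GIL != no races; you still need the lock
            counts.update(local)

    words = ["spam", "eggs", "spam"] * 1_000_000
    chunks = [words[i::8] for i in range(8)]
    threads = [threading.Thread(target=count_words, args=(c,))
               for c in chunks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # False on a free-threaded (3.13t) build, where the eight workers
    # actually run in parallel instead of time-slicing one core.
    print(getattr(sys, "_is_gil_enabled", lambda: True)())
    print(counts.most_common(2))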


Yes, they will benefit from this. Unfortunately, almost everyone else will suffer.

