And the LLM can take total garbage in and understand the intent of the writer? I know that when I'm vague with an LLM, I get junk or inappropriate output.
As an optimist, I would say that it could be better at teasing out your intent interactively, then producing something along those lines. People aren't ashamed to answer questions from an AI.
That might drift in the future. I've actually found myself sometimes leaving small errors in, since they suggest that I actually wrote it. I don't use literal em-dashes -- but I often use the manual version, and have been doing so much longer than mainstream LLMs have been around. I also use a lot of bulleted lists -- both of which imply LLM usage. I take my writing seriously, even when it's just an internet comment. The idea that people might think I wrote with an LLM would be insulting.
More to the point, spelling and grammar errors might become a boutique sign of authenticity, much like fake "hand-made" goods with intentional errors or aging added in the factory.
Unless you are using a proprietary, dedicated grammar checker, automatic grammar checking is far from perfect and will miss some subject-verb agreement errors, incorrect idioms, and choppy flow. Particularly in professional environments where you are being evaluated, this can tank an otherwise solid piece of written work. Even online in HN comments, people will poke fun at grammar, and (while I don't have objective evidence for this) I have noticed that posts with poor grammar or misspellings tend to get lower engagement overall. In a perfect world, this wouldn't matter, but it's a huge driving factor in why people use LLMs to touch up their writing.