I personally find that text written by a human, even someone without a strong grasp of the language, is always preferable to read simply because each word (for better or worse) was chosen by a human to represent their ideas.
If you use an LLM because you think you can't write and communicate well, and if that's true, then you're feeding content you already believe isn't worthy of expressing your ideas into a machine that will drag your words even further from what you intended.
Yeah. It feels like the same amount of signal for a larger amount of noise, and I strongly prefer high SNR. Terse and accurate are what I strive for in my writing, so it's painful to read a lot of text only to realize that two sentences would've sufficed.
If I want to clean up, summarize, translate, formalize, or lighten some incoming text by sending it through an LLM, I can do that myself.
I would rather read succinct English written by a non-native speaker filled with broken grammar than overly verbose but well-spelled AI slop. Heck, just share the prompt itself!
If you can't be bothered to have a human write literally a handful of lines of text, what else can't you be bothered to do? Why should I trust that your CVE even exists at all - let alone is indeed "critical" and worth ruining Christmas over?
It's actually far more preferable to read broken English written by a human, because each language imposes its own unique "flavour" on the writer's English, making it preferable to AI slop.
> I prefer reading the LLM output for accessibility reasons.
And that's completely fine! If you prefer to read CVEs that way, nobody is going to stop you from piping all the CVE descriptions you're interested in through an LLM.
However, having it processed by an LLM is essentially a one-way operation. If some people prefer the original and others prefer the LLM output, the obvious move is to share the original with the world and have LLM-preferring readers do the processing on their end. That way everyone is happy with the format they get to read. Sounds like a win-win, no?
However, there will be cases where lacking the LLM output, there isn't any output at all.
Creating a stigma over a technology that is easily observed as being, in some form, accessible is to be expected in the world we live in. As it is on HN.
Not to accuse you of being anything in particular; I just don't believe anyone has given it all that much thought. I read the complaints and can't distinguish them from someone complaining that they need to make some space for a blind person using their accessibility tools.
> However, there will be cases where lacking the LLM output, there isn't any output at all.
Why would there be? You're using something to prompt the LLM, aren't you - what's stopping you from sharing the input?
The same logic applies to an even larger extent to foreign-language content. I'd 1000x rather have a "My english not good, this describe big LangChain bug, click <link> if want Google Translate" followed by a decent article written in someone's native Chinese than a poorly done machine-translation output. At least that way I have the option of putting the source text through different translation engines, or perhaps asking a bilingual friend to clarify certain sections. If all you have is the English machine-translation output, then you're stuck with that. Something was mistranslated? Good luck reverse-engineering the wrong translation back to its original Chinese and then into its proper English equivalent! Anyone who has had the joy of dealing with "English" datasheets for Chinese-made chips knows how well this works in practice.
You are definitely bringing up a good point concerning accessibility - but I fear using LLMs for this provides fake accessibility. Just because it results in well-formed sentences doesn't mean you are actually getting something comprehensible out of it! LLMs simply aren't good enough yet to rely on them not losing critical information and not introducing additional nonsense. Until they have reached that point, their users should always verify the output for accuracy - which on the author side means they were, by definition, also able to write it on their own, modulo some irrelevant formatting fluff. If you still want to use it for accessibility, do so on the reader side and make it fully optional: that way the reader is knowingly and willingly accepting its flaws.
The stigma on LLM-generated content exists for a reason: people are getting tired of starting to invest time into reading some article, only for it to become clear halfway through that it is completely meaningless drivel. If >99% of LLM-generated content I come across is an utter waste of my time, why should I give this one the benefit of the doubt? Content written in horribly-broken English at least shows that there is an actual human writer investing time and effort into trying to communicate, instead of it being yet another instance of fully-automated LLM-generated slop trying to DDoS our eyeballs.
I completely agree I prefer the original language as it offers more choice in how to try to consume it. I believe search engines segment content by source language though, so you would probably not ever see such content in search results for English language queries. It would be cool if you could somehow signal to search engines that you are interested in non-native language results. I don’t even tend to see results in the second language in my accept languages header unless the query is in that language.
I'm sorry, but I don't buy the argument that we should be accepting of AI slop because it's more accessible. That type of framing is devious because it paints dissenters as not caring about accessibility. It has nothing to do with accessibility and everything to do with simply not wanting to consume utterly worthless slop.
People generally don't actually care about accessibility, and it shows, everywhere. There are obvious and glaring accessibility gains from LLMs that are entirely lost to the stigma.
Because authors do two things typically when they use an LLM for editing:
- iterate multiple rounds
- approve the final edit as their message
I can’t do either of those things myself — and your post implicitly assumes there’s underlying content prior to the LLM process, but it’s likely that iterated interactions with an LLM are what produce any content at all — ie, there never exists a human-written rough draft or single prompt for you to read, either.
So your example is a lose-lose-lose: there never was a non-LLM text for you to read; I have no way to recreate the author’s ideas; and the author has been shamed into not publishing because it doesn’t match your aesthetics.
Your post is a classic example of demanding everyone lose out because something isn’t to your taste.
Unfortunately, the sheer amount of ChatGPT-processed texts being linked has for me become a reason not to want to read them, which is quite depressing.
You wouldn't complain as much if it were merely poorly written by a human. It gets the information across. The novelty of complaining about a new style of bad writing is being overdone by a lot of people, particularly on HN.
> You wouldn't complain as much if it were merely poorly written by a human.
Obviously.
> It gets the information across.
If it is poorly written by a human? Sure!
> The novelty of complaining about a new style of bad writing
But it's not a "new style of bad writing", is it?
The problem is that LLM-generated content is more often than not wrong. It is only worth reading if a human has invested time into post-processing it. However, LLMs make badly-written low-quality content look the same as badly-written high-quality content or decently-written high-quality content. It is impossible for the reader to quickly distinguish properly post-processed LLM output from time-wasting slop.
On the other hand, if it's written by a human, it is often quite easy to distinguish badly-written low-quality content from badly-written high-quality content. And the writing was never the important part: it has always been about the content. There are plenty of non-native English tech enthusiasts writing absolute gems in the most broken English you can imagine! Nobody has ever had trouble distinguishing those from low-quality garbage.
But the vast majority of LLM-generated content I come across on the internet is slop and a waste of my time. My eyeballs are being DDoSed. The only logical action upon noticing that something is LLM-generated content is to abort reading it and assume it is slop as well. Like it or not, LLMs have become a sign of poor quality.
By extension, the issue with using LLMs for important content is that you are making it look indistinguishable from slop. You are loudly signaling to the reader that it isn't worth their time. So yes, if you want people to read it, stick to bad human writing!
> There are plenty of non-native English tech enthusiasts writing absolute gems in the most broken English you can imagine! Nobody has ever had trouble distinguishing those from low-quality garbage.
Your entire theory about LLMs seems to rely on that… but it’s just not true, eg, plenty of high-quality writing with low technical merit is making a fortune while genuinely insightful broken English languishes in obscurity.
You’re giving a very passionate speech about how no dignified noble would be dressed in these machine-made fabrics, which, while some are surely woven as finely as any artisan’s, bear the unmistakable stain of association with the plebs dressed in machine-made fabrics.
I admire the commitment to aesthetics, but I think you’re fighting a losing war against the commoditization and industrialization of certain intellectual work.
It was impressive to see what you did there, and the harsh reality that it's true hits like a brick.
Don't forget that those forks of VS Code are gonna be bought by Nvidia or OpenAI (the maker of ChatGPT, which Nvidia invests in), and everything else.
It's all one large web connecting every investment and cross-investment. That bubble diagram that recently became infamous keeps popping up even more. It's crazy, basically.
You can just strap the watch to the handlebars, no?
There’s also a mode where you can extend the display from your watch to a bike computer, for instances where you’re doing a multisport activity (or just want to record on a single device).
I know all wrist watches experience this issue, but this was extreme: a drop from 145 to 80 bpm for 60+ minutes, then a rapid shot back up. Not a small couple-minute blip.
This was a near-top-end model at the time, and after I complained, Garmin support owned up that this was a firmware bug impacting all sensors of that generation and that it would take 2+ months to fix (it took about 5).
But they did send me an HRM strap for free, and I've been using it since; I'm grateful for that. But for short rides (90 minutes or less) I don't always remember to bring the HRM.
Prior to that I had two lower-end Garmin watches, and despite having theoretically lower-end HR sensors, they did not experience such bugs or dropouts (just an unexpected blip every once in a while).
But I think the main point still stands: their software/firmware/UX has not kept pace with the hardware. Next time I'm in the market I will consider all the options. Feels like Coros and others have come a long way.
Prob the biggest thing keeping me in their ecosystem is multi-sport support (variations of bike riding -- I do them all), hiking, strength training, erg, and winter sports. But even there, the list of strength exercises hasn't been updated in something like a decade.
Nobody in their right mind is keen for a war. Nobody would fight in one unless they believed they really had no other choice. I don't blame the people who would run away to relative safety if the option is available.
But. It's clearly a massive security issue.
> If you’re that keen, go join the reserves?
There is not currently a war, and if there was, there wouldn't be a choice but to join.
That's a valid statement that nobody in this comment chain was disputing. It is exactly why the person you're responding to is assuming anyone who can leave, will leave, in that event (and why "you should join the reserves if you're that keen" is an irrelevant comeback-- nobody was saying anyone's keen, only that people aren't keen and will leave to avoid it if able).
They literally had to invent new types of makeup because HD provided more skin detail than was previously available.
It’s why you’ll find a lot of foundation marketed as “HD cream”.