zakelfassi's comments

Everyone's chasing bigger context windows, but more tokens don't necessarily mean more intelligence. Once an agent fills its window with noise, reasoning quality drops fast. This piece breaks down why "convenience workflows" degrade model IQ and why architectural planning still beats automation.


I had a similar experience here on HN. One of my blog posts recently made it to the front page, and I ended up getting brigaded. I had to add a dozen AI-use disclaimers across the blog, not just on the piece itself. It was surreal.

I get the anxiety about authenticity, but sometimes the witch-hunt feels more automaton-like than the tools themselves. For what it's worth, I've written publicly about my process (how I use AI as scaffolding, not ghostwriting) in my standing note on AI use.

The discomfort around augmented intelligence is fascinating + telling. I sometimes wonder if the same people browse HN on typewriters or pray for the next X-class solar flare so we can all return to rock carving.


Honestly, the best response is to embrace our inner botness. It's just humans being humans — before, they might've said "this is too smart for you to say," and now they've found a new excuse. So be it.


Is it AI then if there’s a human author? lol. You are funny.


One problem is that, like on a recipe page, the core ideas are stretched into a longer narrative.

And then the reader has to consume the narrative to derive the core ideas themselves.

So it's off-putting that the reader has to sift out narrative chaff that you didn't even write or spend the time editing.

At some point it makes more sense to publish a minimal paragraph of your core ideas, and the reader can paste it into an LLM if they want a clanker to pad it out with extra content.


> Is it AI then if there’s a human author? lol. You are funny.

You have now updated the article to admit using AI to write it.

So why is it funny that I recognized it as AI?


Don't forget: a double-dash on the iOS keyboard gets automagically converted to an em-dash.


1/ So far, you've made five comments about this throughout the thread. 2/ I've added an update at the top; pasting it here as well:

"My high school teacher in 2004 accused me of plagiarizing from Wikipedia because my research paper looked "too polished" for something typed on a keyboard instead of handwritten. Twenty years later, HN commenters see clean prose and assume LLM slop. Same discomfort, different decade, identical pattern: people resist leverage they haven't internalized yet.

I use AI tools the way I used spell-check, grammar tools, and search engines before them—as cognitive leverage, not cognitive replacement. The ideas are mine. The arguments are mine. The cultural references, personal stories, and synthesis across domains—all mine. If the output reads coherently, maybe that says more about expectations than about authenticity.

You can call it slop. Or you can engage with the ideas. One takes effort. The other takes a glance at a header image and a decision that polish equals automation. Your choice reveals more about your relationship to technology than mine."


> 1/ So far, you've made five comments about this throughout the thread.

As someone who actually clicks the links and reads the articles, I'm growing frustrated with these AI-written articles wasting my time. The content is typical ChatGPT-style idea expansion, where someone puts their "ideas" into an LLM and then has the LLM generate filler content to expand them into a blog post.

I try to bring awareness of the AI generated content so others can avoid wasting their time on it as well. Content like this also gets flagged away from the front page as visitors realize what it is.

Your edited admission of using AI only confirms the accusations.


> I try to bring awareness of the AI generated content so others can avoid wasting their time on it as well.

Thanks. My own AI detection skills aren't always up to par, so I appreciate people calling it out.


It's not "too polished." That's not the criticism.


What ideas does this article contain, beyond the headline?


Meta-response to a lot of these comments: My high school teacher in 2004 accused me of plagiarizing from Wikipedia because my research paper looked "too polished" for something typed on a keyboard instead of handwritten. Twenty years later, HN commenters see clean prose and assume LLM slop. Same discomfort, different decade, identical pattern: people resist leverage they haven't internalized yet.

I use AI tools the way I used spell-check, grammar tools, and search engines before them: as cognitive leverage, not cognitive replacement. The ideas are mine. The arguments are mine. The cultural references, personal stories, and synthesis across domains—all mine. If the output reads coherently, maybe that says more about expectations than about authenticity.

You can call it slop. Or you can engage with the ideas. One takes effort. The other takes a glance at a header image and a decision that polish equals automation. Your choice reveals more about your relationship to technology than mine :)


Oh well. If you say so.


It's an interesting keyboard layout, though.

  ± 2 3 3 2 6 7 8 0 0 = * -
   M W C B T Y U F O P ] [ [ [
    A G G E R H J K L | / ├ examb
   ² X C V B N M / ꕯ └ ;
Your other keyboard has even more exotic glyphs: is that APL?


I'm sorry the GenAI image prevented you from engaging with the ideas. Layout removed if that makes you and others feel better.


It's not just that, it's that parts of the words are very hard to read. They've been smoothed over. Rather than being drawn to the information content, my attention skips over it like a stone over a lake. Some of the paragraphs are mostly yours: others clearly aren't.

Comparing the two images is a good analogy. You instructed the AI to remove the keyboards, and it completely changed the entire contents of both screens, as well as the hand holding the phone. I'm not sure what app has a modular plug as its "main screen" icon, but that distracts me from the whole rest of the image: even the cardboard surface of the bottom part of the laptop. It's less clear what you were trying to convey with the image than before.

Human-to-human communication is not something that benefits from inserting generative AI in the middle. This whole article is confusing: like a collaboration between a pointillist and an impressionist, except they didn't agree on what they were trying to say, so the picture can only be understood by working backwards and trying to model the production process in your head.

> But—and this matters—the sandbox remains someone else's. The app defines the possibility space. The platform determines what's possible. Users create within the system, never of the system.

I was going to use this as an example of a paragraph I understood, but then I looked closer: I have no idea what the distinction between "within" and "of" that you're trying to draw actually is. Sure, I know what you're trying to say, but which one is meant to be "within", and which one is "of"? The slop header image is a symptom of the broken authorial process that led to this article, not the primary issue with it: the main problem is that you started out with something to say, and ended up with confusing, verbose, and semi-meaningless output.

Most people can write better than this. You can write better than this.


Fair pushback, thank you. Let me clarify where I'm coming from.

Re: Figma vs Illustrator: totally agree Figma doesn't cover Illustrator's full surface area. But in a lot of product/UI/marketing work, "good enough + collaborative + extensible" beats "feature-complete." The fact that students default to Figma signals where the center of gravity is drifting, even if print/illustration shops still anchor on Illustrator.

Re: Omarchy + Framework: not claiming market dominance; it's tiny and new. My point is more about signal: indie dev culture experimenting with modular laptops + DHH's OS is the kind of early-edge activity that tends to foreshadow bigger cultural/tooling shifts.

I write from a Silicon Valley x indie dev POV: watching sparks at the periphery because that's often where the next fire comes from.


You're right; I just have no first-hand experience with it, yet.

