
>Why?

Because I keep asking myself: if AI is here and our output is supercharged, why do I keep seeing more of the same products, just with an "AI" sticker slapped on top? From a group of technologists like HN and the startup world, who live on the edge of evolution and revolution, maybe my expectations were a bit too high.

All I see is the equivalent of "look how fast my new car got me to the supermarket, when I'm not too picky about which supermarket I end up at, and all I want is milk and eggs". Which is 100% fine, but at the end of the day I eat the same omelette as always. In this metaphor, I don't feel the slightest bit behind, nor any sense of FOMO, if I cook my omelette slowly. I guess I have more time for my kids if I see the culinary arts as just a job. And it's not like restaurants suddenly get all their tables booked faster just because everyone cooks omelettes faster.

>It's allowed me to do things that I simply would not have been able to do previously.

You're not the one doing them. Barking orders at John Carmack himself doesn't make me a Quake co-creator, and even if I micromanage his output like the world's most toxic know-it-all manager, I'm still not Carmack.

On top of that, you would have been able to do them previously, if you had cared enough to upskill to the point where token feeding isn't needed for you to feel productive. Tons of programmers broke barriers and solved problems that hadn't been solved by anyone in their companies before.

I don't see why claiming that you previously couldn't do something is a bragging point. The LLMs you're using were trained on the same Google results you could have gotten if you had searched yourself.


>Then everyone who wants AI can have it and those that don't .... don't.

The current trajectory of products with integrated AI worries me, because the average computer/phone user isn't as tech-savvy as the average HN reader, to the point where they are unable to toggle off features they genuinely never asked for, but begrudgingly accept them because they're... there.

My mother complained about AI mode in Google Chrome, and the "press tab" prompt in the address bar, but she's old and doesn't even know how to connect to Wi-Fi. Are we safe to assume that she belongs to the percentage of Chrome users who embrace AI, based on the fact that she doesn't know how to turn it off, and there's no easy way to do so?

I'm willing to bet that Google's reports will assume so, demonstrating wide adoption of AI by Chrome users to stakeholders, which will then be leveraged as proof that everyone loves it.


My counterpoint to this is: if someone cannot verify the validity of a summary, is it truly a summary? And what would the end result be if the vast majority of people opted to adopt or reject a position based on a summary written by a third party?

This isn't strictly a case against AI, just a case that we have a contradiction in our definition of "well informed". We value over-consumption, to the point where we see learning 3 things in 5 minutes as better than learning 1 thing in 5 minutes, even if that means being completely unable to defend or counter what we just read.

I'm specifically referring to what you said: "the speaker used some obscure technical terminology I didn't know". That comes down to a lack of the assumed background knowledge, which makes it hard to verify a summary on your own.


At least with pre-AI search, the info comes with a source, so there is some level of reputation that can be considered. With AI, it's a black box where someone else decides what to train it on, and as someone said elsewhere, there's no way to police its sources. To get the best results, you have to turn it loose on everything.

So someone who wants a war, or wants Tweedledum to get more votes than Tweedledee, has an incentive to poison the well and disseminate fake content that makes it into the training set. Then there's a whole "safety" department that has to manually untrain it so it isn't politically incorrect, racist, etc. Because the whole thesis is: don't think for yourself, let the AI think for you.


If I needed something verifiable, or wanted to learn the material in any depth, I would certainly not rely on an AI summary. However, the summary contained links to source material by known experts, and I would cheerfully rely on those.

The same would be true if I suspected there was misleading bullshit out there. In this case, it's hard to imagine that any nonexpert would bother writing about the topic. ("Universal torsor method", in case you're curious.)

I skimmed the AI summary in ten seconds, gained a rough idea of what the speaker was referring to, and then went back to following the lecture.


A lot of the time, the definitions peculiar to a subfield of science _don't_ require much or any additional technical background to understand. They're just abbreviations for special cases that frequently occur in the subfield.

Looking this sort of thing up on the fly in lecture is a great use for LLMs. You'll lose track of the lecture if you go off to find the definition in a reference text. And you can check your understanding against the material discussed in the lecture.


The issue is even deeper - the 1 thing in 5 minutes was probably already surface knowledge. We don’t usually really ‘know’ the thing that quickly. But we might have a chance.

The 3 things in 5 minutes is even worse - it’s like taking Google Maps everywhere without even thinking about how to get from point A to point B - the odds of knowing anything at all from that are near zero.

And since it summarizes the original content, it’s an even bigger issue - we never even have contact with the thing we’re putatively learning from, so it’s even harder to tell bullshit from reality.

It’s as if we never even drove the route Google Maps was giving us.

We’re going to end up with a huge number of extremely disconnected and useless people, who all absolutely insist they know things and can do stuff. :s


>2. Internalized speed: be a great individual contributor, build a deep, precise mental model, build correct guardrails and convention (because you understand the problem) and protect those boundaries ruthlessly, optimize for future change, move fast because there are fewer surprises

I think the issue here is that, to become a great individual contributor, one needs to spend time in the saddle, polishing their skills. And with mandatory AI delegation, this polishing stage will take more time than ever before.


Have we really reached the point where a candidate gets outright rejected for not using AI tools, without taking personal aptitudes into consideration?


Whose personal aptitudes could possibly match those of Claude the Magnificent?


It wouldn’t surprise me if resume filters now look for how often AI buzzwords appear.


For the last person we recruited, the AI question made our choice.

"Are you using AI ?"

(his response, tldr: "yes but actually no, because it sucks")

Great collaborator


>Interesting how many people in a hacker forum

I learned to accept the fact that HN reached a critical mass point that made it fill up with people who market themselves as "product-oriented engineers", which is a way to say "I only build things when they lead to products".

People committed to the hacker ethos, which consists of, among many other things, resistance to established tools, embracing knowledge and code sharing, and exploration for its own sake, are the minority.

The fact that many commenters will claim they finally built something they weren't able to build before, and that it's all thanks to LLMs, is evidence that we have already sacrificed the pursuit of personal competence, softly reframing it as "LLM competence", without caring about the implications.

Because obviously, every kid who dreamt of becoming a software engineer thought about orchestrating multiple agentic models that talk to each other, and was excited about reviewing their output over and over again while editing markdown files.

The hackers are dead. Long live the hackers.


> I learned to accept the fact that HN reached a critical mass point that made it fill up with people who market themselves as "product-oriented engineers", which is a way to say "I only build things when they lead to products".

This is a mentality I am working extremely hard to get rid of, and I blame HN for indoctrinating me this way.

That said, these days I don't view this place as filled with "product-oriented engineers"; it's become like any other internet forum where naysayers and criticism always rise to the top. You could solve world hunger and the top comment would be someone going "well, actually..."

It's not HN that killed the hackers, it's the Internet snark that put the final nail in the coffin.


I consider myself a hacker, as I spend many evenings and weekends writing code for no commercial purpose but to create cool stuff, and sometimes even useful stuff, all in the open. I have no idea why I should be against using LLMs. Just like I use an IDE and wouldn't want to write code without one, sometimes an LLM can quickly write some drudgery that, if I had to write it all myself, would likely stop me from continuing. It's just another tool in the toolbox; stop regarding it as some sort of evil that replaces us! It doesn't and probably never will. We will always have more important things to do that still require a human, even if that doesn't include a whole lot of coding.


> I have no idea why I should be against using LLMs

It highly depends on your own perspective and goals, but one of the arguments I agree with is that habitually using it will effectively prevent you from building any skill or insight into the code you've produced. That in turn leads to unintended consequences, as implementation details become opaque and layers of abstraction build up. It's like hyper-accelerating tech debt for an immediate result; if it's a simple project with no security requirements, there would be little reason not to use the tool.


>It's the same with genetics. Getting lucky with looks is fine but working for the same goal (eg surgery) is somehow bad and people often hide it.

We also tend to hide how hard we work, to make our success look natural, but we reveal it at the extremes of success. For example, if someone works hard and scores 17 out of 20 on a test, they'll say "I barely studied last evening, phew", but if you're consistently scoring 19-20/20, people may even approach you to learn your studying methods and ask for tips, because they assume there are important takeaways they can adopt.

It's my pet peeve with how society recognizes that someone is talented, which is blatantly flawed because all you can do is see what they're capable of doing. Someone may be talented yet unable (or unwilling?) to tap into their talent, but since we recognize talent by its output, you can't really tell that talent exists unless it's at the extremes of success, like the 8-year-old who can solve math problems a grade or more above their own.

I see talent as a genetic predisposition that can be appropriately cultivated to attain success. It's not much different from my height: I didn't choose it, yet I can guess there are men out there who hate the fact that I have the height they want while I never hit the gym, cultivate my social skills, or take advantage of the fact that I look younger than I am. I'm willing to bet everything that I've met at least one person who thought of all these things within the first moments of looking at me.

But at least genetic predispositions like height are visible to the naked eye, and no one can dispute the differences. When it comes to differences in the brain, we ignorantly proclaim that because things are obscure, they can somehow violate the very facts of observable nature.

In short, not only do I fully agree with you, but I also agree about the obvious double standard society has around it. If I take ADHD medication and it helps my focus and improves my performance at school or work, then I deserve that success as much as someone who naturally has no problems with ADHD. Why this should be different for looks (like hair transplants, etc.) is beyond me.


What I find amusing about this argument is that no one ever brought up power savings when, for example, they used "let me google that for you" instead of giving someone the answer to their question, because we saw the utility of teaching others how to Google. But apparently we can't see the utility of measuring the oversold competence of current AI models, given a sufficiently large sample size.


Clippy only helped with very specific products, and was compensating for really odd UI/UX design decisions.

LLMs are products that want to collect data and be trained on a huge amount of input, with upvotes and downvotes to calibrate the quality of their output, in the hope that they will eventually become good enough to replace the very people who trained them.

The best part is, we're conditioned to treat those products as if they are forces of nature. An inevitability that, like a tornado, is approaching us. As if they're not the byproduct of humans.

If we consider that, then we the users get the short end of the stick, and we only keep moving forward with it because we've been sold on the idea that whatever lies at the peak is a net positive for everyone.

That, or we just don't care about the end result. Both are bad in their own way.


One can assume that, given the goal is money (it always has been), the best-case scenario for money is to make the problem also work as the most effective treatment. Money gets printed on both sides, and the company is happy.

