
> Sounds like you need to learn to search

Sounds like you need to not be condescending :)

Of course I've searched and tried countless avenues to pick this up. I'm not saying it's absolutely impossible without GPT, just that I found it the easiest way of learning.

And it's not "Write a function that does X" but more about employing the Socratic method to help me further understand a subject, which I can then dive deeper into myself.

But having a rubber duck is of infinite worth; if you happen to be a programmer, you can probably see the value in this.

> have you tried using it in an area you're an expert in? The rate of convincing bullshit vs. correct answers is astonishing. It gets better with Phind/Bing, but then it's a roulette whether it will hit valid answers in the index fast enough.

Yes, programming is my area of expertise, and I use it daily for programming; it's doing fine for me (GPT-4, that is; GPT-3.5 and earlier models are basically trash).

Bing is probably one of the worst implementations of GPT I've seen in the wild, so it seems like our experience already differs quite a bit.

> you won't know when it's bullshitting you and you're missing out on learning how to actually learn.

Yeah, you can tell relatively easily if it's bullshitting and making things up, if you're paying any sort of attention to what it tells you.

> By the time LLMs are reliable enough to teach you, whatever you're learning is probably irrelevant since it can be solved better by an LLM.

Disagree. I'm not learning in order to generate more money for myself or whatever; I'm learning because the process of learning is fun, and I want to be able to build games myself. An LLM will never be able to replace that, as part of the fun is that I'm the one doing it.



I have personally found the rubber-ducking to be really helpful, especially for more exploratory work. I find myself typing "So if I understand correctly, the code does this, this, and this because of this" and usually get some helpful feedback.

It feels a bit like pair programming with someone who knows 90% of the documentation for an older version of a relevant library - definitely more helpful than me by myself, and with somewhat less communication overhead than actually pairing with a human.


> Yeah, you can tell relatively easily if it's bullshitting and making things up, if you're paying any sort of attention to what it tells you.

It's trained to generate the most likely completion of some text; it's not at all easy to tell if it's bullshitting you if you're a newbie.

Agreed that I was condescending and dismissive in my reply. I've been dealing with people trying to use ChatGPT to get a free lunch without understanding the problem recently, so I just assume that at this point. My bad.


> It's trained to generate the most likely completion of some text; it's not at all easy to tell if it's bullshitting you if you're a newbie.

I don't think many people (at least not myself or others I know who use it) treat GPT-4 as a source of absolute truth, but more as an "iterate together until we reach a solution" tool, taking everything it says with a grain of salt.

I wouldn't make any life-or-death decisions based on just a chat with GPT-4, but I can use it to help me look up specific questions and find more information that then gets verified elsewhere.

When it comes to making games (with Rust), it's pretty easy to verify when it's bullshitting as well. If I ask it to write a function, I copy-paste the function and either it compiles or it doesn't. If it compiles, I try it out in the game, and if it behaves correctly, I write tests to further solidify my own understanding and to verify it keeps working. Once that's done, even if I have no real idea of what's happening inside the function, I know how to use it and what to expect from it. Something like the sketch below.
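To make that concrete, here's a rough sketch of that loop. The function name `apply_damage` and its behaviour are made up for illustration, not taken from the comment above; the point is just that `cargo build` catches the "doesn't compile" case and `cargo test` pins down the behaviour you rely on, whoever (or whatever) wrote the body.

    // Hypothetical example of a small function one might ask GPT-4 to write
    // for a game; the name and behaviour are assumptions for illustration.

    /// Reduces `health` by `damage`, never going below zero.
    pub fn apply_damage(health: u32, damage: u32) -> u32 {
        health.saturating_sub(damage)
    }

    #[cfg(test)]
    mod tests {
        use super::*;

        // The tests document how the function is used and what to expect
        // from it, even if you never read its body.
        #[test]
        fn damage_reduces_health() {
            assert_eq!(apply_damage(100, 30), 70);
        }

        #[test]
        fn health_never_goes_negative() {
            assert_eq!(apply_damage(10, 50), 0);
        }
    }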



