avdelazeri's comments

While I never measured it, this aligns with my own experiences.

It's better to have very shallow conversations where you regenerate outputs aggressively and keep only the best results. Asking for fixes, restructuring, or elaborations on generated content has fast-diminishing returns. And once it has made a mistake (or hallucinated), it will not stop erring even if you provide evidence that it is wrong; LLMs just commit to certain things very strongly.


I largely agree with this advice, but in practice, using Claude Code / Codex 4+ hours a day, it's not always that simple. I have a .NET/React/Vite webapp that, despite the typical stack, has a lot of very specific business logic for a real-world niche. (Plus some poor early architectural decisions that are being gradually refactored with well-documented rules.)

I frequently see both agents make wrong assumptions that inevitably take multiple turns of failure before they recognize the correct solution.

There can be something like a magnetic pull: no matter how you craft the initial instructions, they will both independently have a (wrong) epiphany and ignore half of the requirements during implementation. It takes messing up once or twice for them to accept that their deep intuition from training data is wrong and pivot. In those cases I find it takes less time to let that process play out than to recraft the perfect one-shot prompt over and over. Of course, once we've moved on to a different problem, I dump that context ASAP.

(However, what is cool about working with LLMs, counterbalancing the petty frustrations that sometimes make it feel like a slog, is that they have extremely high familiarity with the jargon/conventions of that niche. I was expecting to have to explain a lot of the weird, too-clever-by-half abbreviations in the legacy VBA code from 2004 it has to integrate with, but it pretty much picks up on every little detail without explanation. It's always a fun reminder that they were created to be super translators, even within the same language: from jargon to business logic to code that kinda works.)


A human would cross out that part of the worksheet, but an LLM keeps re-reading the wrong text.


Regulatory capture is an ugly thing.


Idk about interviewing, but there are many benefits to posting fake job listings (gathering a database of people, keeping track of who is looking for jobs, etc.), which is why people do it. Data is valuable.


That is more or less what I fear. If the top 10 percent already account for half of all consumer spending, and inequality keeps getting worse and worse, that's probably where it will end up.


It's fine; in the future we will all subscribe to the self-driving robot taxi, own nothing, and be (un)happy.



True. There's Morita's A Mathematical Gift for the same audience.


That's common with mathematics books. Weil's Basic Number Theory is enough to give the unsuspecting quite the fright, despite the name.


Cargo cult mathematics


If someone is working but still needs welfare, then the state is just subsidizing company payrolls by indirect means. I strongly disagree that gig work is fine as long as there is welfare.


If a given person's labor is of poor enough quality that its value cannot provide whatever is considered a reasonable quality of life in a given circumstance, then adding a UBI or other welfare payment is not just subsidizing employers.


As one of the professors I had for undergrad classes liked to say, "Economics is the only field where you can be awarded the Nobel prize for showing A, and then the next year someone gets a Nobel prize for showing not-A."

