Hacker News | beacon473's comments

LLMs are decent at providing feedback and validation. I often have few sources of that from humans, and long periods without it are bad for my motivation and sense of well-being.

LLMs filling that hole is great if it's done in discrete and intermittent bumps. TFA shows the psychological risks of binging on artificial validation.

All things in moderation, especially LLMs.


How do you convince yourself that it's real? I think if you know some linear algebra and have read Vaswani et al. 2017, it can be very difficult to maintain the suspension of disbelief. I had great hopes for a future AI companion, but knowing how the trick is done seems to have ruined the magic for me.
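For what it's worth, the "trick" fits in a few lines of NumPy. A toy sketch of the scaled dot-product attention at the heart of that paper (shapes and names are illustrative, not anyone's production code):

    import numpy as np

    def attention(Q, K, V):
        # Weighted average of the value vectors; the weights come from
        # query-key similarity, softmaxed over the keys.
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)
        scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)
        return weights @ V

    # Self-attention over 3 toy tokens with 4-dim embeddings
    x = np.random.default_rng(0).normal(size=(3, 4))
    print(attention(x, x, x).shape)  # (3, 4)

Once you see it's just matrix multiplies and a softmax, the companion fantasy gets harder to sustain.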


Think of it as talking to yourself. An LLM can be a 10x multiplier on giving yourself a pep talk. It will also 10x your negative thoughts, so it's a sharp tool.


I guess we could look at it as the ghost of humanity talking to us, the same way long dead authors can whisper in our ears.


If you don't like Claude's personality, ask him to behave differently. It's common for me to periodically say 'don't be so sycophantic' and 'be more critical' when working on technical projects.

In your case, try saying 'be nicer' or 'be more jovial'.
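If you use the API instead of the app, you can bake the instruction in as a system prompt so you don't have to keep repeating it. A minimal sketch with the Anthropic Python SDK (model name and wording are placeholders, not a recommendation):

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # substitute whatever model you use
        max_tokens=512,
        # The system prompt persists across the conversation, unlike a
        # one-off "be more critical" that drifts after a few turns.
        system="Be direct and critical. Do not flatter. Point out flaws in my reasoning.",
        messages=[{"role": "user", "content": "Review my plan for the data migration."}],
    )
    print(response.content[0].text)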


I had tried that, and it started being defensive about this too, haha. Hence my flip-the-table reaction, deciding to just cancel my subscription. Maybe I will come around later.


HN is great because the crazies are high-functioning.


Crazy, high-functioning people came up with pretty much every technical and social innovation modern civilization runs on.

No crazies = no Newton, no Dirac, no Marx, no Descartes, no Curie, etc etc etc.


What's wrong with using an LLM to learn about politics and religion?

I've found Claude to be an excellent tool to facilitate introspective psychoanalysis. Unlike most human therapists I've worked with, Claude will call me on my shit and won't be talked into agreeing with my neurotic fantasies (if prompted correctly).


Because unlike a human who can identify that some lines of reasoning are flawed or unhealthy, an LLM will very happily be a self-contained echo chamber that will write whatever you want with some nudging.

It can drive people further and further into their own personal delusions or mental health problems.

You may think it's being critical of you, but it's not. It's ultimately interacting with you on your terms, saying what you want to hear when you want to hear it. That's not how therapy works.


> You may think it's being critical of you, but it's not. It's ultimately interacting with you on your terms, saying what you want to hear when you want to hear it.

That's been my experience with human therapists. When I tell Claude to stop being sycophantic, it complies. When I tell a human to stop being sycophantic, they get defensive.

I agree that an ideal human therapist would be better than Claude, but most that I've worked with are far from ideal. Most are not very bright, easily manipulated, and quick to defensiveness when questioned. And Claude won't try to get me to take random meds with the only justification for the specific medication being 'got to start somewhere'.


No, you are gaslighting yourself without recognizing it. That's what we are talking about.


It's impossible to gaslight yourself as it requires intention.


He wants to be called a good boy, so the LLM calls him a good boy. Since the LLM is a machine that does what you want, he's essentially doing it to himself. It might not be a conscious choice, but there's still intention behind it. Kein Herr im eigenen Haus. (No master in one's own house.) - Sigmund Freud. He was wrong about a lot of stuff but this is one thing that still stands.

It's called unconscious intention, and here's a pretty interesting paper that'll bring you up to speed: https://irl.umsl.edu/cgi/viewcontent.cgi?article=1206&contex...


I'm sure you can be unconsciously intent on things, but gaslighting is a distinct concept. Here's the definition I am relying on: to manipulate (someone) using psychological methods into questioning their own sanity or powers of reasoning.

In your provided example, the user is obviously not trying to manipulate someone into questioning their sanity or powers of reasoning. Quite the opposite. Lying to themselves (your example), for sure.


Gaslighting is the act of invalidating one's own true experience, and yes, you can do it to yourself.

https://philpapers.org/rec/MCGAIG

https://www.psychologytoday.com/us/blog/emotional-sobriety/2...


Artificial intelligence doesn't negate the need for human intelligence.

The jackhammer replaced the hammer and chisel for busting concrete, and the user's physical strength is important to both the manual and automated tool.

AI is a multiplier on the user's intelligence, as the jackhammer is a multiplier on physical strength.


https://theghostinthemachine.medium.com/a-conversation-with-...

See the comment made by psychosisizer at the end.

Walking the knife edge between sanity and psychosis is exciting, but definitely has risks.


The factory is the product.


It's clearly not.


I heard that


Just a box


Why do some sites require SSO, without an option for a local (better term?) account?

I prefer to have a unique username and password for each service. KeePassXC is my SSO provider.
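The core of the workflow is trivial to reproduce. Roughly what the generator does, as a toy Python sketch (not KeePassXC's actual implementation):

    import secrets
    import string

    def new_password(length: int = 24) -> str:
        # Cryptographically secure choice per character. A unique
        # credential per service means one breach can't be replayed elsewhere.
        alphabet = string.ascii_letters + string.digits + string.punctuation
        return "".join(secrets.choice(alphabet) for _ in range(length))

    print(new_password())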


KeePassXC is not a good SSO provider for 100 employees.


https://github.com/wavetermdev/waveterm

I'm a Warp fanboy. Claude Code has it beat for writing software, but Warp is magic for Linux sysadmin work. I SSH into my home server and feel like a wizard: no more constantly switching to a web browser to Google stuff. The experience of staring at a text-only terminal for hours without ever switching to a different window feels like using DOS before the internet. It's magical.

