LLMs are decent at providing feedback and validation. I often have few sources of that from humans, and long periods without it are bad for my motivation and sense of well-being.
LLMs filling that hole is great if it's done in discrete and intermittent bumps. TFA shows the psychological risks of binging on artificial validation.
How do you convince yourself that it is real? I think if you know some linear algebra and have read Vaswani et al. 2017, it can be very difficult to maintain the suspension of disbelief. I had great hopes for a future AI companion, but knowing how the trick is done seems to have ruined the magic for me.
If you don't like Claude's personality, ask him to behave differently. It's common for me to periodically say 'don't be so sycophantic' and 'be more critical' when working on technical projects.
In your case, try saying 'be nicer' or 'be more jovial'.
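If you're steering it through the API rather than the app, the same trick works as a standing instruction in the system prompt. A minimal sketch using the Anthropic Python SDK; the model alias and prompt wording are illustrative, not prescriptive:

    # Sketch: steering Claude's tone via the system prompt.
    # Assumes ANTHROPIC_API_KEY is set in the environment;
    # the model alias below is illustrative.
    import anthropic

    client = anthropic.Anthropic()

    response = client.messages.create(
        model="claude-3-5-sonnet-latest",
        max_tokens=1024,
        system=(
            "Don't be sycophantic. Push back on weak reasoning, "
            "but keep a warm, jovial tone."
        ),
        messages=[{"role": "user", "content": "Critique my project plan: ..."}],
    )
    print(response.content[0].text)

The system prompt persists across the whole conversation, so you don't have to keep repeating 'be more critical' turn after turn.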
I tried that, and it started being defensive about that too, haha.
Hence my flip-the-table reaction, deciding to just cancel my subscription.
Maybe I will come around later.
What's wrong with using an LLM to learn about politics and religion?
I've found Claude to be an excellent tool to facilitate introspective psychoanalysis. Unlike most human therapists I've worked with, Claude will call me on my shit and won't be talked into agreeing with my neurotic fantasies (if prompted correctly).
Because unlike a human who can identify that some lines of reasoning are flawed or unhealthy, an LLM will very happily be a self-contained echo chamber that will write whatever you want with some nudging.
It can drive people further and further into their own personal delusions or mental health problems.
You may think it's being critical of you, but it's not. It's ultimately interacting with you on your terms, saying what you want to hear when you want to hear it. That's not how therapy works.
> You may think it's being critical of you, but it's not. It's ultimately interacting with you on your terms, saying what you want to hear when you want to hear it.
That's been my experience with human therapists. When I tell Claude to stop being sycophantic, it complies. When I tell a human to stop being sycophantic, they get defensive.
I agree that an ideal human therapist would be better than Claude, but most that I've worked with are far from ideal. Most are not very bright, are easily manipulated, and get defensive when questioned. And Claude won't try to get me to take random meds with the only justification for the specific medication being 'got to start somewhere'.
He wants to be called a good boy, so the LLM calls him a good boy. Since the LLM is a machine that does what you want, he's essentially doing it to himself. It might not be a conscious choice, but there's still intention behind it. Kein Herr im eigenen Haus. (No master in one's own house.) - Sigmund Freud. He was wrong about a lot of stuff but this is one thing that still stands.
I'm sure you can be unconsciously intent on things, but gaslighting is a unique concept. Here's the definition I am relying on: manipulate (someone) using psychological methods into questioning their own sanity or powers of reasoning.
In your provided example, the user is obviously not trying to manipulate someone into questioning their sanity or their powers of reasoning. Quite the opposite. Lying to themselves (your example), for sure.
Artificial intelligence doesn't negate the need for human intelligence.
The jackhammer replaced the hammer and chisel for busting concrete, and the user's physical strength matters with both the manual and the powered tool.
AI is a multiplier of the user's intelligence, just as the jackhammer is a multiplier of physical strength.
I'm a Warp fanboy. Claude Code has it beat for writing software, but Warp is magic for Linux sysadmin work. I SSH into my home server and feel like a wizard; no more constantly switching to a web browser to Google stuff. The experience of staring at a text-only terminal for hours without ever switching to a different window feels like using DOS before the internet. It's magical.
> LLMs filling that hole is great if it's done in discrete and intermittent bumps. TFA shows the psychological risks of binging on artificial validation.
All things in moderation, especially LLMs.