Hacker News

I don't understand how you don't understand. Trying to recreate someone's internal thoughts and attitudes by looking at their search history is a pale imitation of this. Just the thought experiment of a customs officer asking ChatGPT to summarise your political viewpoints was eye-opening to me.


How so? You'd have a very, very good understanding of my political viewpoints from the log of my Google searches. I'm asking sincerely, not simply to push back on you.


It seems fairly easy to figure this out with a little thought…

When talking to a chatbot you're likely to type more words per query, as a simple measure. But you're also more likely to have to clarify your queries with logic and intent — to prevent it going off the rails — revealing more about the intentions behind your searches than just stringing together keywords.

It'd be harder to claim purely informational reasons for searching if your prompts betray motive.


(Not OP)

Maybe not you in particular, but I expect people to be more forthcoming in what they write to LLMs than in a raw Google search.

For example, a search of "nice places to live in" vs "I'm considering moving from my current country because I think I'm being politically harassed and I want to find nice places to live that align with my ideology of X, Y, Z".

I do agree that, after collecting enough search datapoints, one could piece together the second sentence from the first, and that this is more akin to a new instance of an already existing issue.

It's just that, by default I expect more information to be obtainable, more easily, from what people write to an LLM vs a search box.


Asking Google for details about January 6th is different than telling ChatGPT I think the election was stolen, and then arguing with it for hours about it.

It would be harder to argue in front of a jury that what you typed wasn't an accurate representation of what you were thinking, and that you were being duplicitous with ChatGPT.


I don't think it really is in the circumstances we contemplate this threat in. In both the search engine case and the ChatGPT case, we're talking about circumstantial evidence (which, to be clear: is real and legally weighty in the US) --- particularly in the CBP setting that keeps coming up here, a Border Agent doesn't need the additional ChatGPT context you're talking about to draw an adverse conclusion!

I think at this point the fulcrum of the point I'm making is that people might be inadvertently lulling themselves into thinking they're revealing meaningfully less about themselves to Google than to ChatGPT. My claim would be that if there's a difference, it's not clear to me it's a material one.


Ah. Yeah, you're more boned if you confess to ChatGPT that you've killed your wife than if you just googled how to bury a body. But consider the edges: people are using ChatGPT as a therapist, someone disappears, and the person who did it is smart enough to use incognito mode for the burial searches so they don't show up in court. How everyone felt about the deceased is going to get looked at, including ChatGPT conversations. That's new.


If a nefarious actor opens your browser, what is the process for them to quickly ascertain your viewpoint on issue X?

Write a script to search and analyse? Versus just asking their specific question.


Grab search history and ask an AI to analyze it.
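That step is mechanically trivial. As a rough sketch (assuming a Chrome-style History database; Chrome does keep searches in a `keyword_search_terms` table, though the schema below is simplified, and the function names are hypothetical):

```python
import os
import sqlite3
import tempfile

def extract_search_terms(db_path):
    """Pull raw search queries out of a Chrome-style History database.

    Chrome stores searches in a `keyword_search_terms` table; the
    schema used here is a simplified stand-in for the real one.
    """
    con = sqlite3.connect(db_path)
    try:
        rows = con.execute("SELECT term FROM keyword_search_terms").fetchall()
    finally:
        con.close()
    return [r[0] for r in rows]

def build_prompt(terms):
    """Fold the raw queries into one analysis prompt to hand an LLM."""
    joined = "\n".join(f"- {t}" for t in terms)
    return ("Summarise the likely viewpoints of the person "
            "who made these searches:\n" + joined)

# Demo: a throwaway database standing in for the real History file.
db_path = os.path.join(tempfile.mkdtemp(), "History")
con = sqlite3.connect(db_path)
con.execute("CREATE TABLE keyword_search_terms (url_id INTEGER, term TEXT)")
con.executemany(
    "INSERT INTO keyword_search_terms VALUES (?, ?)",
    [(1, "nice places to live in"), (2, "how to bury a body")],
)
con.commit()
con.close()

terms = extract_search_terms(db_path)
prompt = build_prompt(terms)
print(prompt)
```

The point being: the history file is one query away from becoming a single prompt.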


So a few steps more than just ask the AI, and still relies on AI.


The point is that the data is there from search engines (and more data, from more people anyway). Whether you automate reading it or do it manually, it is 100% unrelated to the topic of ChatGPT being an informant.


Google completely owns most people's browsers, and the government has made it clear that they do not care.


Users type a lot more into GPT, and share a lot more of their personal files and access to their cloud services.


Users type a lot more often into search engines, and the largest one keeps records of all of their activity and correlates them with full advertising profiles and with what they do within other Google properties (which may include their browser itself).


Google has all of that and more, right? They control the browser and devices that you use to access an AI app. They control the content shown to you in leisure and work. ChatGPT doesn't have that much exposure and surface area yet


Apple has a lot of customers



