
Someone once asked a question online about what each person's biggest fear was regarding the future of AI-generated content. I thought about it.

While not being able to earn money from creative pursuits is a real worry, my biggest concern remains anonymity.

At some point, I fear, participating online with other humans will require "proof of self." As AI becomes more and more able to generate convincing images/text/video/voice of a human, the systems will ask more and more of us to prove we are real, which could have the awful consequence of disallowing anonymity entirely.

That worry remains right up there in my list of AI-related concerns.

The parallel concern is that online communities become tightly gated, with stringent relationship requirements (i.e. invite-only, possibly with multiple "referees") and proof of quality in order to participate. This outcome has its merits but can also lead to exclusionary environments, which has many downsides, especially for newcomers. It could very well feel like playing in low-ranked levels of a game for a long time before being allowed to climb out of the cesspool into higher levels where people take things more seriously. Not necessarily a bad thing, but it is still an inversion of the idea of "participation allowed by default, but you can lose the trust you are given if you behave poorly."



> I fear that participating online with other humans will require "proof of self"

I fear something worse: that it WON'T require proof of self, because the LLMs will be able to tell if you're human... and your age, wants, needs, everything else; that's the nightmare scenario for me: LLMs integrating all the little bits and pieces of yourself you shed online, all the information you leak, from the speed of your mouse movements to how long you read an article -- and what articles you read -- and the words and style of your comments -- everything building into an accurate and detailed profile of yourself.
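As a minimal sketch (in Python, with entirely hypothetical field names) of the kind of aggregation being described: each leaked signal is weak on its own, but folded together they start to form a profile.

    # Hypothetical sketch: folding leaked behavioral signals into one record.
    from dataclasses import dataclass, field
    from statistics import mean

    @dataclass
    class Profile:
        mouse_speeds: list[float] = field(default_factory=list)       # px/s samples
        read_seconds: dict[str, float] = field(default_factory=dict)  # article -> time spent
        comments: list[str] = field(default_factory=list)             # raw comment text

        def summary(self) -> dict:
            # Each signal alone says little; combined, they narrow you down.
            return {
                "avg_mouse_speed": mean(self.mouse_speeds) if self.mouse_speeds else None,
                "most_read": sorted(self.read_seconds, key=self.read_seconds.get, reverse=True),
                "comment_words": sum(len(c.split()) for c in self.comments),
            }

    p = Profile()
    p.mouse_speeds += [420.0, 388.5]
    p.read_seconds["example.com/some-article"] = 92.0
    p.comments.append("I fear participating online will require proof of self")
    print(p.summary())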


This is already happening, and it has almost nothing to do with LLMs; this is Google's and Facebook's business model, because these are the marketable details.


Is it happening? Is there anybody who would claim that Google and Facebook are showing them better-targeted content now than before?

Google and Facebook care only about shoving monetizable garbage in your face, be it through tainted search results, mindless "recommended" IG Reels/YT Shorts or FB Pages. That has driven people away from these platforms. Maybe not at the levels where their bottom line is being hit, but that is a lagging indicator anyway.


> mindless "recommended" IG Reels/YT Shorts or FB Pages

Only, these clearly do work for the majority of people. They're ubiquitous in popular culture (in fact, culture is often built within them) and are essentially the whole business model for those services. If they don't work for you, great: it means you've successfully avoided using those services enough that they can't profile you accurately. But that doesn't mean they don't work at all.


> claim better targeted content

Any such claim is beside the point. The personal opinion of the receiver of the content does not matter; what matters is that the content delivered somehow makes money for the sender.

> That has driven people away from these platforms.

Platforms? These entities do not derive profits only from visits to their own domains. Please inspect the source code of any random site you read next. On the majority of websites in the Western hemisphere you will find either a Facebook script or a Google script, or both. Often more than these two.
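As a rough illustration of that inspection, here's a naive check in Python. The hostnames are real, well-known tracker hosts, but the approach is a sketch: it only scans the static HTML and misses dynamically injected scripts.

    # Naive sketch: fetch a page and look for well-known tracker script hosts.
    import re
    import urllib.request

    TRACKER_HOSTS = [
        "connect.facebook.net",      # Facebook pixel / SDK
        "www.googletagmanager.com",  # Google Tag Manager
        "www.google-analytics.com",  # Google Analytics
    ]

    def find_trackers(url: str) -> list[str]:
        html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
        return [h for h in TRACKER_HOSTS if re.search(re.escape(h), html)]

    print(find_trackers("https://example.com"))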


I don't think the current systems used by Google and Facebook can integrate and infer new details as well as the LLMs can; LLMs can do more with less.


I know LLMs are pretty wild technology, but you're wrong about this. LLMs are just models of language. To some extent they can do some of the things you're talking about, but almost by the very nature of the problem, other models could do the same thing better and with far fewer resources.


What makes you think the details provided by LLMs would be accurate?


And if they know you better than you know yourself, they can control you better than you can control yourself.


> participating online with other humans

There is a more significant case of "the end of anonymity": that of making any kind of sale or purchase. The more sophisticated the possibilities for fraud become, the harder the powers that be will (need to) push for public, non-falsifiable identification (e.g. linked to your biometrics somehow, as I don't suppose an implant ("chip") is politically feasible). If you need to trade, that is.

Consider that over the past few years the use of cash has increasingly been phased out, or even outlawed (for amounts over a certain size), in various Western countries. With digital money comes digital fraud.

As a spooky aside, the Christian horror story of the "Mark of the Beast" is remarkably accurate in that respect, even if perhaps a bit too specific in the details (on hand or forehead), but then magic glasses and watches are here already.


> I fear that participating online with other humans will require "proof of self"

This is a huge fear of mine as well. I can't see how I could ever accept such a thing, so it would be the end of my use of the web.


On the parallel concern: we've seen how overly open communities just don't work in the modern world. Way too many actors have their own 'pitch', be it marketing, politics, or propaganda, always manipulating opinions via emotions toward some steered goal which is never the actual, simple truth. Truth doesn't need much marketing among reasonable people.

This is how flat earthers, climate change deniers, blind supporters of Putin's war on Ukraine, etc. spawn out of the blue, and suddenly it feels like half the world has lost its mind. Well, they haven't. Either there was once a nice community, say about patriotism for XYZ, which was gradually subverted into whatever fringe position it holds now via the participants' negative us-vs-them emotions, or the community was created with that purpose from the start.

Good old Prigozhin's and the GRU's troll farms have been running at full steam for the past two decades, constantly subverting Western countries, especially the former communist ones. All it takes is one skilled manipulator; three-letter agencies everywhere have whole departments for doing exactly this (and for countering it too, though that's far less effective).

With realistic video generation, the Russians can churn out any video of Zelensky or Biden being a pedophile, and those photos of Putin riding a bear will look 100% perfect, which is enough for most of the older Russian population.

I will gladly be part of some more closed community; trust is a very important thing, and we already severely lack it. There are of course numerous issues with such communities, e.g. admitting new members, and they can still drift away from the actual truth, but that is 100% a problem now already. Some friction is not a bad thing per se; it works the same way in real life in smaller villages everywhere. Clearly a good-enough model for the past 10,000 years.


You seem mainly focused on what the outgroup does. Manipulating emotions and discourse is not an outgroup problem, really.

Also, I think algorithmic feeds are the main culprits. Before them, it was way harder to manipulate people on the internet.


[flagged]


Is flat earth a right wing position now?


There is more than just "left" and "right", hence "all sides of the political spectrum".


I dunno, I like the idea of 'quality content' being gamified as a kind of upward filter. Online communication has degraded in-person communication to some degree*, and it would be nice if there were a gamelike quality to getting access to the best communities, one aligned with quality of thought and communication: an excellent training ground. The major worry I'd have is that midwit communities can't tell an imbecile from a genius, and it would be hard for a lot of actual geniuses to 'graduate' while being judged by midwits. But I'm sure the genius communities would have the means to put a workaround in place.

*I suspect this is at least in part due to memetic contagion rate being selected for at a higher rate online than in close relationships, but does anyone have their own pet theory as to why this is?



