The phrase "rug pull" is almost cartoonishly evil. It implies waiting until the victims are feeling nice and secure, and then you rip everything out from under them as quickly as possible. The fact that the phrase has become so closely related to crypto activities is really telling.
>But the damage was done. Afterwards, everyone was afraid of being filmed without consent again.
Devil's advocate, but isn't the point of being nude in public that you are destigmatizing it? I know you mentioned this was a "private lake" but it is unclear if you mean legally private or de facto private. In the case of the latter, even creepers have a right to be there and film.
I used to visit a nude beach and there was always some clothed weirdo with their phone out. They would get unfriendly looks from everyone though, and I think the ostracization kept them on the outskirts of the community.
The lake was on private property owned by the parents of one of the participants. But there's an "open to the public" hiking trail at the border of their property. So legally, I'd consider this similar to looking over the fence into your neighbor's garden.
I'd estimate that for most participants, the point of being nude was to feel free. I mean this was students drinking, swimming, and playing beach games for relaxation, not some political rally.
I don't think people really like permanent recordings even when they're wearing clothes. One of my favorite things about the pandemic is the sense of anonymity in public that wearing a mask offers. Less likely to get messages like "hey I saw you slip on that ice on r/all". Instead you're just some rando slipping on the ice!
You can enjoy the freedom of being nude on your own private property then. No need to add "in public" unless you are trying to a) destigmatize or b) be an exhibitionist
Sorry but I just don't buy it. There's no functional difference between swimming in undergarments or swimming nude, so if you choose nude, it's because you don't think it should be stigmatized.
If you think it feels so much nicer, consider that it's because it's a mild rebellion against a social norm.
Look, Mr. Prude: you don't get why folks do it, fine, but show some respect. And yes, there is a huge difference between swimming naked and swimming in some swim shorts. I couldn't care less about rebelling against something, and I am definitely not any kind of exhibitionist.
I've done it maybe twice, if I don't count swimming at night; the second time I got stung on the shoulder by a jellyfish, which left a burn scar for years and hurt like hell at the time. The idea of getting something similar on my johnson makes me shudder even now.
Why is being nude normal in some places, but not others? That suggests that a stigma exists in one but not the other, and so participating in one or the other is implicitly supporting that.
How can it be so difficult to comprehend, that people might want to be nude with other people but not have it shared with the entire world for all eternity?
How can you not comprehend there are stages between "complete and total privacy with one person for a moment" and "film this and display it for all time to everyone"?
You shifted the goalposts into a false dichotomy about "public vs. private" and the concept of de-stigmatization. I am saying the entire premise is bunk, there is not only "private" and "public".
There is clearly a definition of "public" that involves a limited, semi-unrestricted number of like-minded people in a safe setting without tools to record behavior for eternal and unlimited rebroadcast. You do not acknowledge this; you are talking in binary.
I think you have the wrong idea about nude beaches. People are usually very friendly and relaxed. Most don't have model bodies and are a bit older. When I was younger I also couldn't imagine I would go to one, but it's just a much nicer experience. I feel much calmer on a nude beach than on a normal beach.
Guess you are from the USA, where no digital-age concept of privacy exists and the word still has its 17th-century meaning of "behind closed curtains at home" instead of the modern interpretation of "requiring consent of the data subject". I am from Europe; our laws, culture, and philosophy are different. More specifically, I am German, and we have a nudist tradition that has nothing to do with making statements.
Not everything "in public" is a statement made with consent to be recorded, shared over the internet by bystanders, exploited by corporations, and archived for eternity. A public nude beach (and take note that this thread is actually about a private nude beach, but let's argue the weaker point) is a place for people to be nude, not a place to be exploited to make porn. The fact that it is physically possible to exploit a nude beach for softcore porn does not mean it is acceptable, in the same way that being nude at a shopping mall is physically possible but socially unacceptable and mostly also illegal. Please note that nudism and exhibitionism are not the same.
For most participants, nudism is about freedom of self and a return to nature. It is about oneself, not about society or making statements, and not about pushing one's own nudity into other people's faces; that would be exhibitionism. Many nudists are quite shy and not interested in becoming someone else's wank material, or an actor on national television, or instahub. They do not wish to be recorded. They just want to be nude at the beach; there is no larger meaning or implied consent. Let me repeat that: being nude at a nude beach does not automatically imply consent to be filmed by anyone for any purpose. And we can argue this finer detail of "privacy" as being different from "privately/publicly", and how it hinges on consent, without looking at nudism in particular:
Imagine that every time you leave the house, a national television crew follows you around. It doesn't actually matter what you do; they will cut and manipulate the footage to fit their narrative. You are not getting paid, you are not consenting, and you have no influence on what the show is about. But it is going to be degrading; let's call it "America's Most Stupid". You should have stayed inside if you didn't want to be exploited like that. This is the American idea of privacy: once you step outside, you have none. The USA does not differentiate between "being seen on the street, at a bar, at a beach" and "being published on national television, on Instagram, on Pornhub". If you want to use the former, you must accept being exploited by the latter and their endless supply of unpaid content creators. The European interpretation says that these people are not content creators but creeps who are violating your human right to privacy and self-determination by recording and publishing your activities without your consent.
The public is facing the tragedy of the commons as "public spaces" have become freely and easily exploitable by corporations in the age of surveillance capitalism and social media.
Note that European-style privacy law is differentiated in the finer details: a person who films at a nude beach can claim to do so as a technological extension of their own personal memories, with no intention to publish the material, and that is not a violation of privacy laws. This is the case in the thread starter. For the law, it is the exploitation of the material, turning personal data of unwilling subjects into commodities without their consent, that is illegal. This is a detail most people at nude beaches do not like: they find the act of filming itself to be as creepy as a wanker sitting in the bushes.
Thanks for the detailed reply, but I disagree with almost every point that you made. You took the extreme version of "no privacy" with the "America's most stupid" example, so allow me to take the opposite extreme. Imagine every conversation and every form of public interaction going through real-time government censors to decide if it is appropriate. If it's not appropriate (for some subjective definition of 'appropriate'), you're arrested or fined for offense. Sounds dystopian right? I'd much prefer being followed around by a malicious film crew in public all day.
> Note that european style privacy law is differentiated in the finer details: a person who films at a nude beach can claim to do so as a technological extension of their own personal memories and with no intention to publish the material and that is not in violation of privacy laws.
This would be reasonably clear-cut if images were being published on a professional pornography site or whatever, but what happens when the voyeur changes their mind and sends a pic to a friend who never re-shares it?
There are two possibilities here: either the law is unenforceable in cases like these and acts more like a security blanket than any sort of protection to be relied upon, lulling people into a false sense of privacy where they're open to exploitation by creeps; or you've got mandatory on-device image scanning, no E2E, etc., since compromising private communications is required in cases where the material would never hit public services.
Btw, I don't even really see this as a US vs. EU philosophy-of-law thing - the US has plenty of dumb unenforceable laws that do more harm than good as well, but imo does at least get the privacy in public issue roughly correct.
My uncharitable take is that this is the result of this style of privacy protection's unenforceability problems. If it works so well, why the decline in participation / increase in electronic voyeurism?
I do not agree with the idea that every human right, and its corresponding laws, that is not perfectly and fully enforceable under all circumstances must either lead to an ever-increasing trend towards a surveillance state, or be dropped entirely. Both of those are terrible choices. Almost all human rights have some edge case where lack of discoverability prevents enforcement of the laws and prosecution of severe violations. Even murder cases go cold. That does not mean murder should be legal.
That's quite a bit of a slippery slope you've got there. Intent matters a ton in law. It's the difference between manslaughter and murder. It's the difference between negligence and property damage. Why can't it also be the difference between recording memories and publishing without consent?
I agree that such a government agency would be dystopian, but not that it is the logical extreme of a consent-based right to privacy. Quite the contrary: such an agency would itself be in violation of that right, as it spies on all public interactions. Yet I am not surprised you radicalize a negative liberty (freedom from harassment by a malicious film crew) into an intrusive government agency that ensures their absence. Consider the right not to be subjected to violence, which you hopefully agree we have, and tell me: where is my government-issued bodyguard ensuring absence of harm 24/7? I can claim a right, demand others limit their actions in respect of it, sue them if they violate it, and likely win, without needing a totalitarian surveillance state. But you can't win a trial against a malicious film crew if you have no right to privacy in the first place.
Let's meet in the middle, at "freedoms end where rights begin". For most interactions this balance is kept not by a government agency but by people respecting each other's rights. The censor that decides in real time whether something is appropriate is not part of some government agency; it is the little voice of morality and reason in your head that says "don't punch him in the face" and, if you get my drift, "don't film at a nude beach". The government steps in after people sue.
I bring the European understanding of a human right to privacy based on consent into this discussion as a consideration about limiting the right to film, as a counterpoint to your "even creepers have a right to be there and film", which you made as the devil's advocate and which is true, but which ignores the creeper's disrespect for the right to privacy of those they film. The American interpretation is that humans have no right to privacy in public spaces at all; the creeper's freedom to film is unrestricted in such a situation because privacy only exists behind closed doors and drawn curtains. This further means the creeper's right to sell the content to distributors is unrestricted, and their right to edit and frame this material is unrestricted, without any consent of the people filmed, because in an American public space their human right to privacy is non-existent. In the post I answered, you reinforced this by claiming that a person going to a public space makes a statement, implying consent. I reject that. There is a fundamental difference between using the commons and consenting to be exploited. I think the American threshold of where one side's freedom ends and the other side's rights begin in this matter is unfit for postmodern times, where cameras have become cheap and omnipresent and publishing the filmed material has turned into a big market.
The American understanding of privacy comes from a time when the discussion was about being seen by neighbors, not about being filmed and published on the internet for millions to gawk at. Had the Founding Fathers' bathing nude in a public lake implied not only that some fellow people present could see them, but that pictures and videos would be made available on the World Wide Web, the concept of privacy in the Bill of Rights might be very different. Times have changed; technology changes possibilities, and the evaluation of the freedom to do whatever you want, with respect to other people's right not to be subjected to whatever someone else wants, must change with it.
As a moral and constitutional framework, I prefer consent-based privacy over curtain-based privacy.
> Devil's advocate, but isn't the point of being nude in public that you are destigmatizing it?
Destigmatizing doesn't mean people are free to stare or film you. It's the same if someone were filming on a regular beach because they find people's bodies in bikinis attractive (or to have a wank, as admitted by that old creep). You probably wouldn't like that very much.
In the US, basically any public place (nude beaches included) is open for people to film. I believe there are some exceptions like public restrooms, where there exists a "reasonable expectation of privacy."
But yes, people are also free to complain to the person and make them feel uncomfortable for the scumbag behavior.
In Europe (Slovenia) you can't film people without their consent even in public, unless there's a huge number of people. Then you're filming the crowd and not just a select few.
And I'm pretty sure that even in European countries that theoretically require consent for publishing identifiable pictures of people in public, many thousands of such photos are posted every day.
There's the law and then there is etiquette. They do not necessarily match up. In this case the law is a lower limit, and you may face social counter-action: a person standing to block your camera, angry words, etc.
Sorry, but I'm not following. How do you make the connection from going out in public to you have to accept what others do? It feels like you're bringing in NAP or something without actually saying it.
No, I mean, please explain why you have to accept what others do. Are there people who are going to make me? If so, then they have some justification to do so - what is it?
> No, I mean, please explain why you have to accept what others do.
Because the law allows them and prevents you stopping them.
> Are there people who are going to make me?
Police officers perhaps. Depending what you mean by not accepting it.
> If so, then they have some justification to do so - what is it?
Who and what? I don't really follow. People are free to look at what they like in public. I feel I'm repeating myself, I don't quite know what the difficulty is with this.
Thanks for your patience. I'm trying to understand your first principles, and they seem to be what is lawful is moral and what is moral is lawful. I simply find it difficult to accept that as an axiom.
NAP is the non-aggression principle[1]. I brought it up not because I agree or disagree with it, but to jump ahead to the part of the dialog where that turns out to be the basis for your framework.
Those are not my "first principles", but otherwise I'm not really interested in explaining to people on the internet what they are.
But people are free to be in public and look at things that are in public view. This is not a statement of my beliefs or principles it's just a matter of fact. If you disagree, can you provide evidence?
Sorry, but matters of fact, in my experience, rarely are. Rather, they are typically assumptions, and personal and cultural projections about reality. It's a common trope around here to state opinions, assumptions, and unexamined ideas as fact as a rhetorical device, and when pressed, to avoid such an examination. At the end of the day it's not up to me to prove that they aren't; it's up to you to prove that they are.
I never said anything about what people can or cannot do in public or in private, I only pointed out that "just looking" is not an ironclad defense.
Harassment is better defined in terms of both sides' intent, personal effects, and reasonable expectations, especially when anonymity and safety can be at risk.
> isn't the point of being nude in public that you are destigmatizing it?
Context matters. The act itself can be destigmatizing (doesn't have to be the goal), but recording it and displaying it out of context usually causes more stigmatization.
I've worked at several startups, but my experience has been that I've been relatively siloed in my work. Maybe I wasn't taking advantage of the startup experience, but aside from my limited view at standups, I didn't know how the sausage was made in different areas of the company. I couldn't see how it all fit together, because I had an incomplete picture.
I've had IRL conversations with almost fanatical climate change activists, and I believe they look at it this way: even if they're completely 100% wrong, nothing is lost by improving how we treat the environment.
That might be their feel-good belief, but it's not based on reality. If you raise the price of energy, the people at the bottom will suffer the most. There is a real risk of relatively affluent Europeans going cold and hungry this winter, due to misguided beliefs that solar will provide their energy needs in cold and overcast places like Germany.
The cost of energy even affects how much fertilizer costs to produce. And that has an even bigger impact on the risk of people going hungry all over the world.
Nobody is arguing for just increasing the price of energy. The whole point is to transition to other, greener energy sources, and while that's more expensive, soften the blow through subsidies and the like.
> There is real risk of relatively affluent Europeans going cold and hungry this winter, due to misguided beliefs that solar will provide their energy needs, in cold and overcast places like Germany.
Nope. Due to the misguided belief that Russia and its gas can be trusted as a transitional energy source until there's no more need for fossil fuels. Nobody expected that German solar would power the whole of Europe, the goal was always diverse energy generation methods (hydro, solar, wind onshore and offshore, tidal, nuclear). It was stupid to rely on gas for the transition, IMHO, and it was even stupider to rely on Russian gas. If it was Russian, Algerian, Azeri, Qatari gas in equal quantities, it wouldn't have been a problem that Putin is an insane warmonger (bar the emissions associated).
> The cost of energy even affects how much fertilizer cost to produce
Yes, which is where subsidies would apply until there are alternatives like green hydrogen.
> Due to the misguided belief that Russia and its gas can be trusted as a transitional energy source
I find it ironic that the right-leaning people that are pro oil & gas are often also isolationist and nativist in their orientation. Oil and gas are the least sovereign sources of energy and make you dependent on both the local government and foreign nations.
That is a very dangerous perspective. Every policy has unexpected effects that cannot be foreseen, and nobody can be sure that something is totally positive and safe.
It's a pretty good idea not to dump huge quantities of anything into rivers, seas, or the atmosphere. Is your position that people who want to curb emissions should stop to think about unintended consequences? Maybe we should have done that earlier, when building this system with no accountability for externalities.
Climate fanatics want more, much more, than "not to dump huge quantities of anything into rivers, sea, atmosphere".
> That action must be powerful and wide-ranging. After all, the climate crisis is not just about the environment. It is a crisis of human rights, of justice, and of political will. Colonial, racist, and patriarchal systems of oppression have created and fueled it. We need to dismantle them all.
>once they figure out how to control potentially harmful generations
Is it just me, or does anyone else think that this is an impossible and futile task? I don't have a solid grasp on what kind of censorship is possible with this technology, but the goal seems to be on par with making sure nobody says anything mean online. People are extremely creative and are going to find the prompts that generate the "harmful" images.
Reminds me of a toy doll I heard about which had a speech generator you could program to say sentences, but with "harmful" words removed, keeping only wholesome ones.
I immediately came up with "Call the football team, I'm wet" and "Daddy lets play hide the sausage" as example workarounds.
It's entirely pointless. Humans are vastly superior in their ability to subvert and corrupt. Even if you were able to catch regular "harmful" images, humans would create new categories of imagery which people would experience as "harmful", employing allusions, illusions, proxies, irony, etc. It's endless.
Furthermore, the possibility that we create an AI that can outsmart humans in terms of filtering inappropriate content is even scarier. Do you really want a world with an AI censor of superhuman intelligence gatekeeping the means of content creation?
If you squint and view the modern corporation as a proxy for "an AI censor of superhuman intelligence gatekeeping the means of content creation" - then that's been happening for a long while now.
Automatic content review, NSFW filters, spam filters, etc. have been bog standard since the earliest days of the internet.
I don't think anyone likes it. Some fight it and create their own spaces that allow certain types of content. Most people accept it though and move on with their lives
I'm down with calling a corporation intelligent (as long as you don't call it a person). But automatic content review is regularly bypassed; they can't even keep very obvious spam off YouTube comments, such as comments copied from real users, posted under usernames like ClickMyChannelForXXX.
So if the corporation is an intelligent collective, then it's regularly outsmarted by other intelligent collectives determined to bypass it.
We can look back further, at the Hays Code. That's just religion, plain and simple. The feeling of "we're sliding into a decadence which will lead to the downfall of our civilization" is a meme propagating this very sentiment. It's not as simple as just the government, but that does co-occur.
Isn't that basically what OpenAI and Google tried to do, and it lasted all of 3 months?
The problem with tech is that once it's known to be possible, if you choose to monetize it by making it public, as OpenAI and Google were planning to do, then it's only a matter of time before another smart team figures out how you're doing it.
You can do the Manhattan Project in secret, and in 500 years someone else might not realize it's possible. But the second you test that concept, the sign that you did it is detectable everywhere, and the dots of what you did will connect in someone's brain somewhere.
In England you discover that English has actually two different existences… The ordinary one and then the “dirty” one. Almost any word has or can be made to have a “harmful” meaning…
"It's entirely pointless. Humans are vastly superior in their ability to subvert and corrupt. Even if you were able to catch regular "harmful" images humans would create a new categories of imagery which people would experience as "harmful", employ allusions, illusions, proxies, irony etc. It's endless."
This employs the fallacy that people have infinite amounts of energy and motivation to devote to being hateful. I have been in countless online communities, in video games and elsewhere, and when the chat in them doesn't allow you to say toxic, hateful stuff... guess what, a whole lot less of that shit is said. Are there people who get around it by swapping characters for lookalikes that don't trigger the censor, or by using slang, or by misspelling? Of course. But the fact is, I think if you talked to someone who runs communities like this, they would laugh in your face if you said a degree of censorship of hate speech wasn't fundamentally beneficial.
A big aspect has to do with the fact that if everybody agrees to be part of a community, part of that agreement is a social contract not to use hate speech. If someone flouts that by bypassing the filter, then in the obvious flouting of the established social contract (it is obvious they had to purposely misspell the word), these people alienate themselves, underlining the fact that 99% of the community finds their behavior pathetic and unacceptable. See the toy sketch below for how both halves of that play out.
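To make that concrete, here's a minimal sketch of the kind of naive blocklist filter game chats start with. Nothing here is from any real game; the blocked word and test messages are invented for illustration. It shows both points at once: the filter is trivially bypassed, and the bypass itself advertises intent.

```python
# Toy blocklist chat filter (invented example, not any real game's code).
BLOCKED = {"slur"}

def allowed(message: str) -> bool:
    # Naive check: exact word match only, so trivial edits slip through.
    return not any(word in BLOCKED for word in message.lower().split())

print(allowed("that was a slur"))  # False: caught by the filter
print(allowed("that was a s1ur"))  # True: bypassed with a character swap,
                                   # but the deliberate misspelling itself
                                   # flags the sender to the community
```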
I (and, I would assume, the OP) agree that saying "entirely pointless" may be a bit hyperbolic.
However the point stands that as a concept, humans will find a way to exploit and corrupt any technology. This is unquestionably true.
Bertrand Russell famously makes exactly this point as well, albeit specifically about the violent application of technology in war: until all war is illegal, every technological development will be used for war.
Your point, however, is also true: in certain spaces, for certain audiences (communities), participants make it more difficult to exploit these things in ways they don't want, and easier to exploit them in ways they do.
Ergo, technology is and remains neutral (as it has no will of its own), and the people using and implementing technology are very much not neutral, imbuing the tool with the will of the user.
The real question you should be asking is: how powerful can a free tool or piece of knowledge get before people start saying that only a certain class of "clerics" can use it, or before most communities agree that NO community should have it?
Notice, on that last point, how not-hard we're trying to get rid of nuclear weapons.
I don't think swearing in a video game is comparable to art.
If I swear at a video game and it comes out as ** I might think "OK, maybe I'm being a bit of an asshole, there could be kids here and it's a community with rules so I'll rather not say that".
If a tool to make art doesn't let me generate a nude because some American prude decided that I shouldn't, though... my reaction is going to be to fight the restriction in whatever way I'm able.
Importantly, we're posting on a forum where this exact idea is true. HN doesn't stop all hate speech, or flaming, and what have you... but the moderation system stops enough that people generally don't bother.
It seems pretty well-agreed that the HN moderation works because of dedicated human moderators and community guidelines etc.
I think spaces that effectively moderate AI art content will be successful (or not) based on these same factors.
It won't depend on some brittle technology for predicting if something is harmful or NSFW. (Which, incidentally, people will use to optimize/find NSFW content specifically, as they already do with Stable Diffusion).
But this is a forum of interaction between people. These models can and should do things privately. It's the difference between arguing for censorship in HN or Microsoft Word.
Sure, it would be a fool's errand to filter out "harmful" speech using traditional algorithms. But neural networks and beyond seem like exactly the kind of technology that can respond to fuzzy concepts rather than just sets of words. Sure, it will be a long hunt, but if it can learn to paint and recognize a myriad of visual concepts, it ought to be able to learn what we consider to be harmful.
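As a rough illustration of what "responding to fuzzy concepts" could look like, here's a minimal zero-shot sketch using CLIP via Hugging Face transformers. To be clear, the label strings and the whole framing are assumptions for illustration; this is not how any particular vendor's filter is built.

```python
# Minimal zero-shot "fuzzy concept" scorer using CLIP.
# The label prompts below are invented; a real system would need far
# more care, and still inherits all the cultural ambiguity discussed.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

labels = ["a violent or graphic scene", "a harmless everyday scene"]
image = Image.open("generated.png")  # hypothetical image to screen

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    probs = model(**inputs).logits_per_image.softmax(dim=-1)[0]

for label, p in zip(labels, probs.tolist()):
    print(f"{label}: {p:.2f}")
```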
One of the insurmountable problems, I think, is the fact that different people (and different cultures) consider different things 'harmful', and to varying degrees of harm, and what is considered harmful changes over time. What is harmful is also often context-dependent.
Complicating matters more is the fact that something being censored can be considered harmful as well. Religious messages would be a good example of this: Religion A thinks that Religion B is harmful, and vice versa. I doubt any neural network can resolve that problem without the decision itself being harmful to some subset of people.
While I love the developments in machine learning/neural networks/etc. right now, I think it's a bit early to put that much faith in them (to the point where we think they can solve such a problem like "ban all the harmful things").
>There's way too much moralizing from people who have no idea what's going on
>All the filter actually is is an object recognizer trained on genital images, and it can be turned off
I'm not sure if you misread something, but neither I nor the person I was replying to was talking about this specific implementation; we were speaking in a more general sense.
I'm pretty sure you are the one who missed the point of the parent post and mine.
It's not that simple. The model was not trained to recognize "harmful" action such as blowjobs (although "bombing" and other atrocities of course are there).
The model was trained on eight specific body parts. If it doesn't see those, it doesn't fire. That's 100% of the job.
I see that you've managed to name things that you think aren't in the model. That's nice. That's not related to what this company did, though.
You seem to be confusing how you think a system like this might work with what this company clearly explained as what they did. This isn't hypothetical. You can just go to their webpage and look.
The NSFW filter on Stable Diffusion is simply an image body part recognizer run against the generated image. It has nothing to do with the prompt text at all.
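You can see this in the Hugging Face diffusers port (a sketch, assuming the public v1.4 checkpoint): the checker runs on the generated image and reports a per-image flag, and the prompt text never enters into it.

```python
# Sketch: the default Stable Diffusion pipeline runs its safety checker
# on the *output image*, not on the prompt.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
result = pipe("a photo of kitesurfing")

result.images[0].save("out.png")
print(result.nsfw_content_detected)  # e.g. [False]; flagged images come back blanked
```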
The company filtered LAION-5B based on undisclosed criteria. So what you are saying is actually irrelevant, as we do not know what pictures were included or not.
It is obvious to anyone who bothers to try (have you?) that a filter was placed here at the training level. Rare activities such as "kitesurfing" produce flawless, accurate pictures, whereas anything sexual or remotely lewd ("peeing") doesn't. This is a conscious decision by whoever produced this model.
Well, it ought to be possible to train it for a number of scenarios and then, at generation time, tell it to generate based on certain cultural sensibilities. It's not going to be perfect, but probably good enough?
Isn't this part of the AI alignment problem? To be able to understand what kinds of output is unacceptable for a certain audience? To be polite?
> Well, it ought to be possible to train it for a number of scenarios and then, at generation time, tell it to generate based on certain cultural sensibilities. It's not going to be perfect, but probably good enough?
Do we want the AI to generate based on Polanski's sensibilities, even if he's the only audience member? I suspect for most people the answer is no.
I find it very immoral too; it's like Islamists trying to prevent pictures of the Prophet from being drawn. Not that I want to offend Muslims or make "harmful" content, but this notion that a specific type of content creation needs to be imposed is very, very problematic. Americans freak out over nudity all the time, something that is not considered harmful in many other places. The fear of images and text, and the mission to restrain them, is pathetic.
Anyway, it won't be possible to contain it. Better spend the effort on how to deal with bad actors instead of trying to restrain the use of content creation tools.
Yeah, it's taking the impulse to control everything from our own mind and putting it into an artificial one. Seems to me a lot of our suffering is borne of that impulse.
OpenAI's filters are a total joke. I tried to upload The Creation of Adam (from the Sistine Chapel): blocked for adult content. "Continued violations may restrict your account". Yeah, it has naughty bits in it, but it's probably in the top ten most recognizable pieces of art ever made. I tried to generate an image of "yarn bombing": blocked for violence. They have the most advanced AI in the world and they can't solve the Scunthorpe problem?
They're not content filters as much as Doing Something filters. They're there to convince people that they're doing something, and of course if it wasn't zealous and regularly tut-tutted people for desiring a rubber duck, you wouldn't know they were doing something.
The reason this is such a game changer is that it is not controlled on some central server. It's like saying paper and pencils can be revoked from people if somebody doesn't like what you do with them. It's an amazing new technology; let people use it.
Regardless of the practicality: why do they think it’s their role to be the morality police?
If there’s anything we’ve learned from history, it’s that we’ve always been morally wrong in some way, very often in our most strongly held beliefs. This AI in a different time would be strictly guided to produce pro-(Catholic Church/eugenics/slavery/racist/nationalist) content.
And the corporate creators freaking out about the profanity: Microsoft's Tay wouldn't be remembered so fondly if Bill hadn't immediately pulled the plug when channers made her say the n-word.
> Regardless of the practicality: why do they think it’s their role to be the morality police?
It's not just morality: there have reportedly already been multiple subreddits of non-consensual porn mimicking real people, and underage porn. The legality of that is a minefield, but it doesn't end there. If that's what they become known for, it affects funding, hiring, people deciding whether to use their software, etc., and the more prominent that is, the more likely they'll be hauled before legislators to talk about problems. Even simple things like legal demands to remove celebrities from the training sets could be pretty time-consuming.
Stable diffusion does run a filter on the output in its default configuration. Any image it deems 'unsafe' gets replaced with a picture of Rick Astley.
The thing about that is that it is open source, so you can trivially disable that filter if you like.
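For instance, in the Hugging Face diffusers port (the Rick Astley swap was in the original reference scripts; diffusers blanks flagged images instead), dropping the checker is a single argument. A sketch, assuming the public v1.4 checkpoint:

```python
# Sketch: because the code and weights are open, the output filter is opt-out.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    safety_checker=None,  # no checker, so no image replacement at all
)
```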
Reminds me of a joke: three guys get locked up for a long time. Out of boredom they start telling jokes to each other, but as the supply is finite, they end up retelling them all the time. Eventually they number them, then just shout out, e.g., "27", and they all laugh.
Then a new inmate joins, doesn't know what's going on but figures that if you say a number, people laugh. So he goes "14!". But nothing happens. The others tell him "you didn't tell the joke right".
How is the poor AI meant to know that jokes 6, 13 and 38 are sexist?
I was once a guest at a tech think tank in the early 2000s; the people were all in their 60s at the time.
They had spent years grappling with online worlds because of the idea that people might/could represent themselves as a different gender. They wanted the technology to exist and had dreamed about it for decades; they just got caught up on that.
That was comical because it was out of touch even for that time period.
It's interesting how people squirrel and spiral over useless things for some time.
Even in the 90's they had to fight hordes and hordes of Californian nutjobs (Dianne Feinstein et al.) who wanted to ban violent video games. These people would certainly be cancelled in today's world; they wouldn't stand a chance. Because, how dare you allow violence in video games to reach... children!?
Our civilization depends on letting wackos do their thing, as long as it is within the limits of the law. Let them be offensive as fuck. These are the people who herald and propel society forward with their heterodox thinking. Society is going to decay fast; it already is.
Yeah, definitely, when they started a studio in Dallas. I don't remember the congresspersons who took a similar stance to Dianne's. During the 90's, progressives played a larger role though. There was also the Mortal Kombat fiasco:
> During the U.S. Congressional hearing on video game violence, Democratic Party Senator Herb Kohl, working with Senator Joe Lieberman, attempted to illustrate why government regulation of video games was needed by showing clips from 1992's Mortal Kombat and Night Trap (another game featuring digitized actors).
> During the 90’s, progressives played a larger role though.
Could be true, maybe, but today conservatives have willingly taken over that seat, and the NRA is heavily involved and actively blaming video games after each mass shooting to deflect from the debate on gun rights. https://www.usgamer.net/articles/the-nras-long-incoherent-hi...
In terms of trying to moderate swearing and sexuality in games and music and movies, the religious right has long been and still is the group most vocally opposed to such free expression... if we’re talking about where to address censorship today.
Why does this matter? Regardless of the party, my original message stands. It is an irrelevant detail. I'm not sure what causes the defensiveness every time I bring up or criticize progressives. My bad, I only remembered Dianne Feinstein's name from the book, jeez.
Oh I thought you were suggesting we should stop censoring legal but offensive behavior? The issue of exactly who’s doing the censoring seems absolutely and completely relevant to the subject of censorship, no? If it’s irrelevant, then I don’t understand the point of your top comment. Why do we need to allow offensive wackos to do their thing, what offensive things are we talking about, and who needs to allow them?
Perhaps a more important discussion, if you do care about censorship, is to define more thoughtfully what you mean about “within the limits of the law”. In the US, the law, up to and including the constitution, makes clear that offensive behavior is anywhere from not protected free speech up to criminal activity. Politicians are debating what the limits of the law should be, and sometimes they blow hot air, and sometimes they write bills. Either way, the results of Congressional bills are establishing the limits of the law, and so define the acceptable legal bounds of offensive media & speech. Here’s one of the bi-partisan congressional sessions on games (it included Feinstein, among many others, but she didn’t testify). https://www.govinfo.gov/content/pkg/CHRG-109shrg28337/html/C...
In response to people jumping in to defend the progressives of the 90's: the amount of defensiveness that's invoked here on HN for stating the facts is quite alarming.
I really should have left out Dianne Feinstein and "California nutjobs" from the original post. This is what happens every single time you mistakenly poke HN when it comes to political one-sidedness.
The original Doom had "Italian cannibal film" levels of gore, heavily pixelated of course (not as if they had a choice in 1992), but such that you could see it was scans. Plus, of course, a lot of over-the-top satanic cliches to tick off the fundamentalists. But nothing remotely sexual; that's a bridge too far in the US.
Dianne Feinstein never attempted to control video games or Doom. She just said once that she was worried about the impact, on April 3, 2013, and Fox News has been screaming her name ever since. She's never introduced any law about this at all.
Only one California politician has ever attempted to do much of anything to video games: Republican Joe Baca, who tried a dozen times and is mostly famous for his 2009 attempt to get a warning sentence on boxes. Calling that censorship is pearl-clutching.
The only genuine attempts at something an adult would consider censorship of video games were Jack Thompson, a since-disbarred Republican, and that brief 2018 thing with Trump.
Democrats have never attempted to censor video games. All three major attempts were Republican.
It's important to get the details right if you are going to build an intuition of who's actually doing this
I’m sorry but none of what you said is true. At this point, the facts are indisputable. Check out my other replies that point to the congressional hearings.
I don't see the point. Idiots are fooled by far less convincing images.
Humanity has had the ability to lie with pictures since the invention of photography. The field of special effects can be described as lying about things that don't matter.
Without using Stable Diffusion, I can still photoshop an image or deepfake a video. Stable Diffusion isn't really changing what's possible here, and arguably is less advanced than what's possible with Deepfakes or even the facial filters available on social networks.
Like with all deceptive imagery: one just needs to use their noggin.
* Also I might add: the article is actually out of date in some respects, because this technology is evolving so rapidly. Literally every day there is a new and interesting way that people are applying the tech.
It's no different than Google images, which is also voluntarily polite by default.
In both tools you can get naughty images, but you have to tell the tool that's okay.
This is not about censorship or moralizing.
It's just having the tool know when it's allowed to do that stuff. It's a key, basic product feature if you're actually using the thing for content and not just having fun making pictures.
Everyone acting like there's some kind of free speech issue should go into their account and turn the filter off, then try to calm down
It makes sense if the intent is to protect Midjourney from being blamed for misuse. If they saw the potential misuse yet chose to do nothing about it, they'd be blamed. Lack of perfect solution is not an excuse for not offering any protection.
I literally spent the whole first 3 hours figuring out ways to generate porn. They don't allow words like sex, cock, etc so you use prompts like intercourse and phallus. At one point I thought they were screening for particular names so you'd say things like "the brother of mako in the legend of korra" instead. It's just an endless game of cat and mouse not worth putting effort into. Got bored, now I'm playing with the dev api. People have been showing how to integrate this into Photoshop and Gimp and it's pretty cool.
The goal is to have a checkbox which keeps the system from generating naughty images in casual use.
This has absolutely nothing to do with censorship. It's a nonsense concept and it's not clear what you think censorship actually is.
If you set the system to make tall rectangles, are you censoring squares?
It's absolutely exhausting how people on HN attempt to cast any form of telling a tool what you want the tool to make as if you're somehow morally governing something
It's just telling the machine what to make
Not everything is a desperate ethical dilemma
Sometimes you just want the things you create to be straightforwardly usable
You understand that the filter is voluntary, and that the initial delay requirement (long gone) was about Discord adult image rules, right?
You're not just reflexively crying censorship where there was none, trusting HN to overreact when that word gets thrown around, right?
Agreed. I'm also not sure how this is practically supposed to work if they really publish the entire model. Right now, all they do is design a specific license, right? Or are there certain safeguards built into the model itself?
That being said, I'd still think publishing the model (vs. keeping it as a closed-source API) is a good move. Otherwise, we'd move forward into a world where one of the most significant technological advancements must be gatekept forever, which I'd frankly find even more dystopian.
Well, it depends: are you talking about significantly mitigating harmful uses of Stable Diffusion or completely stopping them? The latter, of course, isn't going to happen, but there are plenty of practical things that can be done to mitigate.
If we can't even do this, how are we ever going to align AGI? I see these efforts as part of a nascent effort at alignment research (along with the more proximate reason, which is avoiding bad PR from model misuse).
Yeah, the best they can do is filters on top of the output. These models are complex enough that with some reverse engineering you can find "secret" languages to instruct them that would get around input filtering.
Devil's advocating: given that they have trained it so well to generate images in spite of all expectations, is it really so hard to imagine that they could also train it to understand what images not to generate? It already had to learn not to generate things that don't make sense to humans. How does this not just amount to "moar training"? The hardest part is that the training data it would need is a gigantic store of objectionable (and illegal) content... probably not something many groups are eager to build and host.
The thing is that people can make harmful art themselves. Photoshopping people's faces on nudes and depicting graphic violence has been a thing since digital photography if not painting in general. I mean, look at all the gross stuff which is online and was online way before these Neural Networks.
The issue with these neural networks isn't the content they create; it's that they can create massive amounts of content, very easily. You can now do things like: write a Facebook crawler which photoshops people's faces onto nudes and sends those to their friends; send out mass phishing emails to old people with pictures of their grandkids bloody or in hostage situations; send out so many deepfakes of an important person that nobody can tell whether any of their speeches is legitimate. You can also create content even if you have no graphic design skills, and create content impulsively, leading to more gross stuff online.
Spam, misinformation, phishing, and triggering language are already major issues. These models could make it 10x worse.
Where today it takes some far-from-Jesus deviant artists a whole day to draw a picture of Harry Potter making out with Draco Malfoy, with the power of AI, billions of such images will flood the Internet. There's just no way for a young person to resist that amount of gay energy. It's the apocalypse foretold by John the Revelator.
> It's the apocalypse foretold by John the Revelator.
I literally read a chapter of Inhibitor Phase where there's a ship called "John the Revelator" less than an hour ago. I haven't otherwise seen that phrase written down for years.
Spooky (and cue links to the Baader-Meinhof Wikipedia article).
> Spam, misinformation, phishing, and triggering language are already major issues. These models could make it 10x worse.
Or 10x better, as the barriers to entry for doing this kind of thing right now aren't high enough to make it not happen... they are only high enough to make it sufficiently hard to pull off that people can feel comfortable assuming that most of the content they see is legitimate; in a world where nothing is necessarily legitimate I'd expect you'd see a massive shift in peoples' expectations.
Seems like the link with the specific ID in my above comment has become the de-facto shared room now, at least as long as my comment stays at the top :)
Hmm, what I was thinking of was that everyone seemed to be competing with each other to click random UI elements on the virtual browser as rapidly as possible, I assume because there wasn't a way to speak with one another / coordinate. Or maybe I just missed the chat area?
Yeah we kept the MVP lean and didn't add chat or audio/video calling. Our goal (which we're not meeting) is to keep the Three.js example line count below 200.
The student loan forgiveness debate is such a mess. Here's just a sampling of the arguments I've seen, and I'm inclined to think there's some validity to all of them:
* Loan forgiveness creates perverse incentives
* If we can afford to forgive/bail-out X, then we can do student loans
* This punishes everyone who was responsible with their loans
* This is a huge relief to everyone with student loans
* Why should everyone else have to pay for your bad choices
* College should be free anyways, this is that, with extra steps
The college loan programs obviously already created perverse incentives. It should come as no surprise that the loan forgiveness does as well.
It is almost as if there's some motivation other than to allow children of poor families to obtain a college education.
As for "college should be free anyways, like it is in Europe": then perhaps we should administer universities like they do in Europe. Only a few select students are allowed to go, and they don't provide training leagues for professional sports teams.
"A few select students"?? Maybe in some countries, but in plenty of countries it's whoever wants to go. That's the problem when talking about "europe", it's not a homogeneous group of countries.
As for training leagues, they still have professional sport teams in Europe right?
Which countries don't have university acceptance rates? I looked at Denmark and Sweden; they still exclude people. Someone mentioned that in Germany, if you don't get accepted, you go on a waiting list.
Good point about free college; most people don't realize that college is not the same in Europe as it is in America. There are reasons it is able to be free.
When I was twenty years old I enrolled at the University of Bern for Computer Science. For that I had to go to preparatory school (Gymnasium) between 15 and 19 years old. It was a harsh school. I learnt the languages French, Latin and English and I had to write essays in German. Then there was geography, math, biology, chemistry, physics, human sciences, programming. Finally there was a round of examinations. 4 hours of math, 4 hours of essay in German, 2 hours of essay in French, oral examinations, etc.
For the university I had to pay about 1000 francs, about 2000 dollars, a year for tuition (the conversion is somewhat arbitrary because 30 years ago the exchange rate was completely different, so treat this as a ballpark figure). I was able to pay this myself because I had some odd jobs financing me. I could live at my parents' the first years; then I moved out into an apartment shared with friends.
This is typical for Europe: students usually are able to finance themselves.
I tried to verify the claim. At least for Sweden, their universities still have an acceptance rate, with the highest being 80%, and the next highest being 35%[0]. Denmark's highest acceptance rates are in the upper 70s, low 80s[1].
Yes, the acceptance rates are relatively high, but the fact that they have acceptance rates for their universities at all suggests that not everyone who wants to go to university can.
I think for a lot of us it came out of left field: I assumed our Congress was too divided to pass something of this sort, so I never really sorted out my thoughts on the actual idea.
Turns out I neglected to consider that loan forgiveness could happen without Congress. My knee-jerk reaction was fairly negative: not because I had thought critically about loan forgiveness, but because at a certain level I recognized that we just engaged in a very large act of redistribution almost completely outside the democratic process, and it's hard to move past that mental shock and confront the outcomes objectively, separate from the process.
It probably can’t happen without congress. But can you even imagine the shitshow when half a trillion in debt that was wiped out somehow has to be put back on the books?!
The article linked there is a pretty good analysis of the obstacles to challenging it in federal court just by virtue of the action occurring, but it doesn't foreclose all possibilities. (And it's not obviously illegal to me, but that's irrelevant to the standing question.)
Imagine this scenario: A loan servicer neglects to adjust the amount a student loan borrower owes based on this action (or views it as illegal and intentionally doesn't adjust it), and bills the old amount.
If the borrower pays the higher amount, they may have standing to reclaim the overpayment under the argument that it was validly canceled. If they are not in the same state as the servicer, diversity jurisdiction might allow this to occur in federal court, and this overpayment is probably sufficiently concrete, particularized, and personal to generate Article III standing.
If the borrower pays the Biden-adjusted lower amount, then debt collection efforts may start and there may be a credit report impact, when with correct servicer implementation of the cancelation no debt would be sent to collections. There are all sorts of opportunities for standing to be created here, such as if a federal or state debt collection statute is violated by the servicer or if the borrower has another financial consequence (like a refused or higher-rate mortgage) from the inaccurately reduced credit score.
If any of this manages to land in state court, standing requirements can constitutionally be weaker anyway depending on the requirements of the state constitution.
So, we'll see, but it can probably be tested in the courts somehow, unless the Administrative Procedure Act rules somehow preempt any other lawsuit at all from using "Biden's action was illegal" as an argument in the service of some other cause of action.
Here's the biggest one: there will be a reduction in the maximum percentage of a borrower's income that they will need to pay per month for a loan, and a reduction in the number of years until the debt is wiped out.
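Back-of-the-envelope on the payment cap (the 5% rate and the 225%-of-poverty-line discretionary-income threshold are the figures reported for the announced plan; treat both as assumptions here, not the final rule):

```python
# Rough sketch of an income-driven monthly payment cap under the
# reported parameters; all numbers are assumptions for illustration.
POVERTY_LINE = 13_590            # 2022 federal poverty guideline, single person
THRESHOLD = 2.25 * POVERTY_LINE  # income below this counts as non-discretionary
CAP_RATE = 0.05                  # reported cap, down from 0.10 under older plans

def monthly_payment_cap(annual_income: float) -> float:
    discretionary = max(0.0, annual_income - THRESHOLD)
    return CAP_RATE * discretionary / 12

print(round(monthly_payment_cap(45_000)))  # ~60 dollars/month on a 45k income
```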
Colleges are going to be looking at this and deciding they can charge whatever they want - no one is ever going to have to pay it back. Whatever limits are in these laws (sorry, rules, this isn't a law), they will charge up to those limits, whatever loopholes there are in these rules, they will find them.
Take a look at how law schools are working the existing loan forgiveness programs. That's going to be every college now. I'm too old to switch careers, and I'm not a psychopath, but this makes me want to completely drop any moral scruples I have and get into the education fraud business. People are going to get filthy, and technically legally, rich.
>Well it sounds like Facebook chose to self-censor after talking with the FBI. But they still had the legal right to not do so.
The optics of this interaction are extremely bad. Whether or not there was any actual legal pressure is only half of the issue; the other half is the appearance of unethical behavior.