So to clarify, you are okay with downvoting and removing comments but are not okay with banning accounts? Bans, even temporary ones, are crossing the line?
My feelings on the above are likely contingent on the nuances of implementation. Do downvotes ultimately censor posts? Are votes equally weighted? Etc.
From a higher level, I'd grant that (a) platforms have a fundamental right to try to realize their vision, which may include promoting and demoting various types of content & (b) spam and astroturfing are a constant reality on any social platform (and users are better served by less of both).
So I think there are justifiable reasons for censoring, or at least decreasing visibility. I've been on forums long enough and have too low an opinion of the average internet denizen to think otherwise. :-)
Hence, to me, the emphasis on prior vs. post hoc restraint.
If I allow you to speak, and then, on the basis of that piece of speech and NOT on your identity as its speaker, decrease its virality in a way that's still fair (e.g. yank it from feed promotion but still allow direct linking), and in rare cases censor it outright, that feels fair. To me.
If I proactively identify you, godelski, as someone likely to say *ist things and consequently ban you or pre-censor everything you post, regardless of the individual pieces of content, that does not feel fair. To me.
As well, and I should have emphasized this more in my comment, I mean to stress "individual." Which is to say "1 human person, 1 share of public speech rights."
IMHO, if free speech is a right that flows from our existence as sentient beings then it's difficult to get from there to "you deserve more / less free speech than I do."
---
And finally, because I know I'll get this response eventually, yes, I know playing whack-a-mole with bad actors on a public platform is a nigh impossible task. I've done it. Maybe actually impossible.
Tough.
Uber skirted labor laws in pursuit of profit. Social media platforms are skirting nuanced moderation in exactly the same way, in pursuit of profit. "It's difficult" or "It costs a lot to employ and train the headcount required to do it" isn't an acceptable defense, and we shouldn't accept it.
I'd grant that (a) platforms have a fundamental right to try to realize their vision, which may include promoting and demoting various types of content & (b) spam and astroturfing is a constant reality in any social platform (and users are better served by less of both).
I don't think that platforms should have any such fundamental right wrt user produced content. I think that platforms should work as either publishers (where they produce and are responsible for all the content), or as common carriers (where they are forbidden by law from interfering with legal content). I think that platforms should have to explicitly choose one model once they reach a certain number of participants or when they incorporate.
I am all for shielding platforms from liability for user content if they act like a common carrier and limit themselves to removing illegal content. However, I don't see why we as a society should shield companies from liability when they selectively pick and choose which user content to promote and which to suppress, according to their own preferences.
I wonder if you've thought this through properly. I suspect that, if your vision were to be enacted, there would be no more forums. No more Facebook, Twitter, Reddit, Hacker News, niche PHP forums, comment sections, etc, etc. Why? Because they'd devolve into spam and/or people arguing past each other. For example, given your current definition I believe it would be acceptable for someone to write a script to post useless replies to every single Hacker News post and comment, effectively rendering the board useless.
Right now, HN has the right to delete those. If it were a common carrier, it presumably would not. Arguably they're spam, but I cannot imagine a way to define "spam" that is narrow enough not to be redefined by everyone as "things I disagree with", yet broad enough to capture someone posting excessively to a forum. Note that this wouldn't violate the CAN-SPAM Act because it's not advertising anything commercial.
I have thought that through and considered putting that in my post, but I didn't want to dilute the original thought.
Of course I support the idea of "off topic", but it is something that needs a lot of consideration, and however one writes such a restriction, it's liable to abuse and probably has a zillion edge cases. E.g. is it off topic if I post "<your favorite politician> hates cat videos" onto a cat video forum? What if I am a moderator with <other side> political views, and I allow those posts if they come from people who agree with me and disallow them if they come from people who agree with you? What if I divide my site into multiple sub-fora, one of which is "politics of cat videos"? What if the de facto use of my forum or a sub-forum becomes political discussion, but everyone attaches a cat video to each post?
In other words, yes, I have thought a lot about it, but this warrants its own separate discussion.
That's a very reasonable response. I'd add another thought: A site (e.g., Twitter) could define "off-topic" as anything that goes against its Terms of Service or Code of Conduct (CoC), which would probably land us exactly where we are now. The same problem afflicts a lot of similar ideas - "Spam? Anything that violates our CoC is spam"; "Degrades the experience for the user? We feel that violating our CoC degrades the user experience"; and so on. Not a problem that's easily solved, or even solvable at all, IMO.
This position means that if I create a forum for fans of a band, neither I nor anyone else should have the right to remove comments trying to sell hair products or discussing cooking recipes. Not to mention sharing (legal) pornographic images.
I am ok with downvoting, if I have the ability to change my settings so that I can view downvoted comments. However, I think that downvoting is less desirable than individual-centric controls.
I strongly prefer to have the individual ability to block/mute/suppress any comment or commenter, and I am ok delegating that ability to someone or something else as long as I can withdraw my delegation and undo any changes that were made. To put it differently: I might decide that I trust some organization or individual to build block/filter lists, and I might consume those lists (as I do for spam blocking, ad blocking, etc.), as long as I can observe what they're doing and opt out at any time. It seems social media is long overdue for that.
I am NOT ok with anyone (or anything) else doing these for me without my explicit opt-in, especially if I don't have any way to see what decisions they made on my behalf or to reverse those decisions.
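To make the delegation model above concrete, here's a minimal sketch in Python. All names (`FilterList`, `UserFilters`, etc.) are hypothetical, not any real platform's API; the point is just that subscriptions are explicit opt-ins, every hiding decision is attributable to a specific list, and the user can reverse any individual decision or withdraw the whole delegation:

```python
# Hypothetical sketch of delegated, observable, opt-out filter lists.
from dataclasses import dataclass, field

@dataclass
class FilterList:
    """A third-party block list the user may choose to trust."""
    name: str
    blocked_authors: set = field(default_factory=set)

@dataclass
class UserFilters:
    subscriptions: list = field(default_factory=list)  # explicit opt-in only
    overrides: set = field(default_factory=set)        # user-reversed decisions

    def subscribe(self, fl: FilterList):
        self.subscriptions.append(fl)

    def unsubscribe(self, name: str):
        # Withdrawing delegation undoes everything that list did.
        self.subscriptions = [fl for fl in self.subscriptions
                              if fl.name != name]

    def is_hidden(self, author: str) -> tuple:
        """Return (hidden?, deciding list) so every decision is observable."""
        if author in self.overrides:      # the user's own undo always wins
            return (False, None)
        for fl in self.subscriptions:
            if author in fl.blocked_authors:
                return (True, fl.name)
        return (False, None)

    def undo(self, author: str):
        # Reverse a single delegated decision without unsubscribing.
        self.overrides.add(author)
```

The key design choice is that `is_hidden` reports *which* list made the call, satisfying the "I can see what decisions were made on my behalf" requirement, and both `undo` and `unsubscribe` make every delegated decision reversible.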