So under the existing §230 structure, intermediaries are immune from liability for the speech of their users. They remain immune even if they, for example, remove things they disagree with. That does mean that openly editorially biased sites can and do benefit from the immunity.
This still seems to be point-of-view neutrality on the government's part: an anti-fooist site that removes fooist comments gets the same immunity as a fooist site that removes anti-fooist comments (or a neutral site that removes neither). Is your view that it's wrong for the government to, in a sense, help the fooist site in the first place even though it's equally willing to help the anti-fooist site in the same way? Does that mean that there shouldn't ever be a subsidy for "newspapers" (open to any newspaper regardless of its editorial line or policies), but only for "neutral newspapers" (that don't editorialize)?
I don't think this is the same as a subsidy. It's about immunity from legal consequence.
If a website can curate what is said so that it is visibly filled with defamatory material about a specific target, the organization curating the material retains complete immunity from what would otherwise destroy a newspaper or any other organization that is actually liable for what it prints.
I don't think this should be possible for any organization, regardless of who they are. This immunity should require extreme impartiality on the part of the website. It's meant to protect organizations from speech they don't control -- when they assert any control over that speech, it becomes their speech.
This is complete nonsense. It's entirely impossible to run a large-scale conversation without stepping in editorially to some degree. The impartiality criterion you're specifying is probably impossible to define sufficiently in practice, and the fact that people may say nasty things on the internet is an acceptable cost of having a free global communication system.
A site like Twitter or Facebook that has millions of users contributing content can edit the feed to convey a particular message by showing only posts that fit. Just like an author who chooses particular quotes that fit an article, but on a larger scale, and the "article" is now made entirely of quotes.
IOW, the message can be crafted by the platform even if the words are provided by the users.
There is no article. Trying to reduce a many-to-many conversation to a simpler construct in order to justify regulating it is poor analysis.
Platforms may or may not be slanted; people can pick a different platform if they don't like their current one's slant, or make their own. People ought to have no right to demand that a platform be neutral in what communication it facilitates between its users, and any attempt to enforce that is inevitably going to devolve into favoring those with the juice to hire the lawyers to suppress speech.
Whether you call it an article or something else, the feed as a whole, if chosen in a biased way, represents the point of view of the selector.
Just as an article made of quotes would.
And if the message in the feed/article is libelous, it's shameful (though perhaps legal) to hide behind the argument "I was only quoting other people."