Not necessarily; I think what we have today is far better than a world where a few people got to control what was said. That only works if they are well-intentioned and trustworthy, but it’s corruptible. And they certainly aren’t necessarily arbiters of truth.
What we have today is orders of magnitude better. The main challenge moving forward will be designing information systems that promote challenging opinions rather than reinforce existing ones, which requires these companies to move away from optimizing for engagement and toward something else.
IMO, solving that problem is a step in the correct direction.
Implementing systems that rely on credentialing and moderation is a net regression, even if it (maybe) solves this specific problem. It’s just going back to systems of the past where things appeared great but actually weren’t. Think of all the people who, today, legitimately thrive because we’ve broken down some of the credentialing gatekeepers (Ben Thompson at Stratechery comes to mind).
I'm not sure the flow of information is even necessarily the problem. People just aren't trained to think critically - and doing so for everything is exhausting. However, shifting the focus of education toward how to think, not what to think, would help immensely.
In the interim I would like to see some level of moderation: maybe citizens or states able to exact some kind of consequences against people who, via any sort of media, spread information that they either know to be, or should know to be, false ("alternative facts", as some people call them). I.e., some kind of process where I as a person, or an independent body, would have standing to sue Breitbart for knowingly/negligently distributing false information.
I know we have to be careful with this, but at the moment there is simply no way of holding organisations to account if I am not being directly libeled, despite the fact that the misinformation harms me directly.
Hmm, the thing is, though, this info is rarely 100% false... people have tried things like labeling posts with truthfulness ratings, etc., and it falls flat. In cases where it is 100% false I think your approach works (although it might not actually be a sufficient deterrent).
In practice it always seems to be mixed with partial truths (when you read past the headline)... plus, determining what's "true" is kind of a nonstarter, especially at that scale.
I am also certain you would find things on NYT, WaPo, etc. that are not 100% truthful, even if the consequences are not as egregious. It's not just Breitbart. I say this because people on the other side of the aisle will take whatever you design and throw it back at you.
Identifying the lack of critical thinking as the problem (as you did in your post) is IMO somewhat right, but maybe too hard a problem to solve (as you pointed out).
The way I see it... we're in the middle of a tug-of-war between the old guard that used news & other power structures to broadcast what they wanted, and the new, democratized users who felt they didn't need to trust them while simultaneously being empowered to have their own voice heard.
The former would manipulate our perception of what was going on through news (print / media), while the latter are empowered by companies that control the new media landscape (Twitter / FB / IG).
What's interesting about the latter is that they leverage a system that was designed innocuously ("serve better ads") but can be adapted to control someone's perception of what is happening in the world (targeted content / engagement).
As of late, I've been thinking that the way forward is going to be identifying the forces that drive extreme polarization and re-imagining them to rein them in.
When I look at the way feeds are designed, there is such crazy reinforcement of what I already "like", driving polarization, that maybe it really is as simple as re-designing the feed to promote more diversity of content (rough sketch of what I mean below). It could be enough to temper some of the extreme outliers of craziness we see on either side of the aisle.
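To make that concrete, here's a toy sketch of what "promote more diversity in the feed" could look like: a greedy re-ranker that discounts a post's predicted engagement score by how often its topic already appears in the selected feed. Everything here (the Post fields, the scores, the diversity_weight knob) is made up for illustration; it's not any real platform's ranking code.

```python
# Toy feed re-ranker: trade predicted engagement against topic diversity.
# Purely illustrative -- the topics, scores, and weights are invented.

from dataclasses import dataclass

@dataclass
class Post:
    id: str
    topic: str         # coarse topic/viewpoint bucket for the post
    engagement: float  # hypothetical predicted engagement, in [0, 1]

def rerank(candidates: list[Post], k: int, diversity_weight: float = 0.5) -> list[Post]:
    """Greedily pick k posts, discounting each candidate's engagement
    score by how many posts with the same topic are already in the feed."""
    feed: list[Post] = []
    topic_counts: dict[str, int] = {}

    def score(p: Post) -> float:
        # Each repeat of a topic costs `diversity_weight`; a pure
        # engagement ranker is just diversity_weight = 0.
        return p.engagement - diversity_weight * topic_counts.get(p.topic, 0)

    pool = list(candidates)
    while pool and len(feed) < k:
        best = max(pool, key=score)
        pool.remove(best)
        feed.append(best)
        topic_counts[best.topic] = topic_counts.get(best.topic, 0) + 1
    return feed

posts = [
    Post("a", "politics-left", 0.95),
    Post("b", "politics-left", 0.90),
    Post("c", "politics-right", 0.60),
    Post("d", "gardening", 0.55),
]
print([p.id for p in rerank(posts, k=3)])  # ['a', 'c', 'd'] instead of ['a', 'b', 'c']
```

The nice thing about framing it this way is that the status quo is just diversity_weight = 0, so "more diversity" becomes a tunable knob rather than an all-or-nothing redesign of the feed.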