To those saying "Oh I can tell it's fake, obviously!", consider: will your parents/grandparents?
And will you still be able to tell in five years, when this tech has had 20 new iterations, each addressing the very tells that right now let you notice it's fake?
I know, I know, photoshop and fake pictures have always been around. But now, everyone can do it in 30 seconds. That changes things.
> To those saying "Oh I can tell it's fake, obviously!", consider: will your parents/grandparents?
No, they won't. There's not a chance.
Yesterday we were visiting an elderly relative who had received a video in a WhatsApp family group chat of an AI-generated child saying a prayer. I had to repeatedly tell them it was not a real child in the video. To me it was obvious, since the whole body was static, only the face was moving slightly, and the child was talking in a way that you knew was text-to-speech.
There is sadly no hope for the elderly generation. If your parents or grandparents receive a video with your face asking them for money, they will believe it. Even if you stand right in front of them in real life, telling them the video on their device is not real, they will believe it more than you. Rather, it's you who will be considered not real.
My sister tried to get my mother to play an April fools prank on me this year. She wanted her to send me an email saying "I got your voicemail and I sent the money! I hope that's enough but it was all I had available!"
My mother thought it was a bad idea and didn't want to get involved.
It would have been incredible and I would absolutely have fallen for it.
Anecdata: Reportedly my great-grandma, born in the 19th century, reacted viscerally to fictional movies of the 1960s as if they had actually happened. She could not distinguish between recorded news and recorded drama.
I think it's a matter of our 'priming', what we're used to: if you're used to digital recordings that seem real actually being real, you'll be tripped up.
If you have no such expectation, you're less inclined to buy into the reality of something that is shown to you. The idea that you have to believe some version of events that you pick up is a false dichotomy:
you can also believe no version until you get more substantial proof.
What was life like 150-200 years ago? Back then you had only the written word to go by for news of the outside world and it was just as easily manipulated. I think we've been living through a brief and unique period of truth-y-ness with a plethora of photographic and video evidence.
Agreed, we’ve been hoodwinked by photos and video, and this is the wake-up call we need for everyone to realize that seeing an image of something is in no way comparable to actually being there. We’re about to enter a much more skeptical world, and that’s a good thing.
A healthy amount of skepticism is good. But you can easily take your skepticism too far and turn into a tinfoil-hat-wearing conspiracy theorist who doesn't believe anything and has a completely cynical view of the world. I know a few of these...
Interesting thought. Sturgeon’s Law normally refers to the quality of things created honestly, like sci-fi novels, not to the level of dishonesty or disinformation. I’ll have to think about whether those are actually different, but I feel like it’s relevant in the sense that Sturgeon’s Law is normally tautological: you can decide for sci-fi novels what percent of them you would define as “good”, and the rest are “crap”. 90% always works, but so does 80% or 98%; the top group is always better than the bottom group by definition. So in that sense, saying that 90% of information might be dishonest and justify skepticism isn’t supported by Sturgeon’s Law, I think.
> What about the tinfoil hatters that believe everything just because it's on some 2h YT video?
Classic contrarianism. Where evidence of pushback is used as evidence of the argument itself. It's commonly used by cults to increase cohesion, e.g. sending members into the world to get rejected.
After seeing the QAnon folks, I think you'll just see in-fighting about what the "truth" is, when in reality, both sides are full of shite but for different reasons.
> To those saying "Oh I can tell it's fake, obviously!", consider: will your parents/grandparents?
Well, my parents and grandparents are dead, so probably not.
But, when they were alive, they had enough experience of the real world that they would be suspicious of bandages worn outside of clothes, the printed document wrapped around the arm, and the gauze that, rather than wrapping fully around the head, seamlessly merges into the forehead.
But, more importantly, it would be trivial to do what all the really successful fake images of this war and most previous ones have done: just take a real picture from a real conflict (sometimes even the same one) and lie about the context. Lying with pictures is not a novel threat.
>But, when they were alive, they had enough experience of the real world that they would be suspicious of bandages worn outside of clothes, the printed document wrapped around the arm, and the gauze that, rather than wrapping fully around the head, seamlessly merges into the forehead.
Would they have noticed them at the quick glance that this image invites? Assume an astute person but with media literacy based on:
* newspapers (where fake photos are possible but more primitive),
* TV (where the image is moving and appears on the screen only for you to notice it, but not to examine it closely), and
* possibly the tiny screen of a janky smartphone mandatorily plugged into the dark forest of social media.
At a glance, this hypothetical person would likely see something like "a guy was hurt in a war, he is now miserable in a hospital, here he is, _there is no other information in this picture_, read on". And then they are more likely to pay a little more attention to the text, because they just saw an image containing things that prime them for experiencing compassion.
Setting aside that experience becomes outdated, and that the target audience for propaganda also includes inexperienced people and those who never developed good thinking habits in the first place: elderly people are also more likely to have degraded senses, and may be less interested in playing "spot the 10 differences" than a young digital native absolutely fascinated with this fun new technological development.
I for one spotted none of the things that were wrong with the photo, even though some of them were really obvious; I didn't look twice until seeing them pointed out in the replies. And even then, I thought "maybe it's a scratch that only takes a large band-aid and not a full bandage, what the hell do I know about bandaging a head anyway" (defaulting to trusting the image and excusing inconsistencies). And since the context had already been established by the poster, I didn't question it. Besides, now that many camera phones have "AI smoothing filters", the boundary blurs even more, making real photos look AI-generated. The overall "AI smoothness" that I noticed about the image (where it's the "notional" resolution that is degraded, not the rasterization) might be completely lost on people who are visually impaired, or who just don't stay up to date with the novelties of image processing.
So I fall back to the same heuristic that your grandparents, bless them, probably also used: if it's on the news, it's somewhat fake by definition. And how much attention you should pay to it depends on how much your interests align with those of whoever's paying for you to see it.
Ofc, becoming stuck in a local optimum bubble of fake perceptions that confirm each other, and gaining a "political identity", is nothing new either. Our generation just got blindsided by the idea that computers and the Internet would somehow make this fuckery less necessary. Can't wait to see what scams will target me when I'm old. My elderly parents did fall for a fake phone bill because the guy brought it in person - once again, no AI necessary, nor is there a viable way for AI to help with this problem. (AI doorbells recognizing scammers, I guess? But that can turn real dystopic real fast.)
> Would they have noticed them at the quick glance that this image invites?
Probably; I mean, it took me more than a quick glance to even figure out that the thing that looked like a discontinuous strip of printed cash register receipts with red stains across the outside of his sweater was probably supposed to be bandages, even with the priming effect of reading the text. And the document wrapped partly around his arm is quite jarring.
But, ultimately, my main point isn't that AI images are categorically unconvincing (I've generated more convincing AI images). AI barely matters; the same training data that enables AI image generation contains thousands of images that can be used, without modification, with a false caption to the same effect. And not just in theory: this is actively done all the time. While AI may increase the risk of convincing fake imagery of specific individuals (though that, too, was common before AI image generation), the kind of generic propaganda highlighted here is already simple to produce and already happens hundreds of times a day, with more convincing imagery and no AI.
Does it actually change anything? Maybe AI will let random people make their own pictures, but what's the real impact of that compared with what those people are likely already doing, which is repeating propaganda created by others using only marginally more involved methods like Photoshop?
Also the particular image in this tweet doesn't seem like a great example of the power of AI propaganda. Unless I'm missing something, it's a fairly generic image of a nameless person...the propaganda is all in the story attached to it. The same story attached to a stock photo seems like it would have virtually the same impact.
One factor distinguishing this from previous times is the almost infinite volume (quantity) now available to propagandists throughout the world. Soon enough (if not already), it will be possible to automate and continuously update thousands or even tens of thousands of websites full of AI-generated propaganda. If platforms are not careful, the same goes for AI bots and the like.
> To those saying "Oh I can tell it's fake, obviously!", consider: will your parents/grandparents?
Doesn't matter. The best propaganda isn't fake, but truthful. It emphasizes true stories that further its goal, and de-emphasizes or buries stories that hinder it. E.g. https://ifamericansknew.org/media/nyt-report.html
It works the other way too. This elderly person I know refuses to believe anything bad about his motherland, claiming it is fake news/propaganda. Even when it is reported by the biggest news agencies in both his motherland and the West.
The ageism is silly (though not uncommon on HN). Plenty of young people are in the same boat and in the west! It's nothing new. Discerning propaganda has always been extremely difficult.
I don't think it does change things. There's only ever going to be a certain percentage of people who want to do bad things. It fluctuates, surely, but historically it doesn't appear that the availability of new technology affects it much, if at all.
The relevant statistics I can think of (crime/violence etc) show nefarious acts worldwide on the decrease. So, if there is something linking them and tech it may be the reverse of what popular commentary seems to expect.
A better question is: will you be able to tell it’s fake if you’re scrolling without critical thought? If you’re not paying attention? If you’re on low sleep, stressed, nodding off, watching a stream on another screen, etc.? If it’s only presented to you in a small icon while your favorite debate bro debates over it, or a political pundit pundits over the thumbnail?
> To those saying "Oh I can tell it's fake, obviously!",
I couldn't tell the photo was fake at a first or second glance; only after I had started reading the posts and taking a third and fourth look at it could I sort of notice some uncanniness.
But even then, had no one told me the image was fake, I would certainly have continued to have at least some doubts about its fakeness.
> But now, everyone can do it in 30 seconds. That changes things.
Does it? Text-based propaganda / misinformation is something anyone can produce in 30 seconds, yet I've seen only limited amounts of it; somehow we're not drowning in a flood. Society got better at verifying textual information, even though some individuals remain susceptible. In my chat groups, it is always the same 2 people who fall for - and propagate - scams/snake-oil/misinformation, regardless of the medium.
That is explicitly the idea behind a lot of propaganda: "the firehose of falsehood".
Keep hammering them with facts, details, allegations, baseless claims, even slivers of truth. The average person doesn't have the time, interest, or capabilities to dig through all of those claims, and will eventually settle on consuming the facts they want to hear. Keep em too confused to do anything except what feels right to their gut.
Unsurprisingly, this was pioneered by the Soviets and is heavily used by the Russians, both via foreign agit-prop and on their domestic audience.
You can think of it as "ain't nobody got time to sort through all of that". Or you can think of it in Bayesian terms. If you adjust your priors at all based on propaganda, and there's enough of it, then eventually your priors become what the propaganda says.
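Here's a toy sketch of that Bayesian point in Python (made-up numbers, just to show the direction of drift):

    # Toy model: each propaganda item nudges your belief only slightly,
    # but enough repetitions drag the posterior wherever the firehose points.
    prior = 0.5                 # initial belief that the claim is true
    likelihood_ratio = 1.05     # each item seems 5% more likely if the claim is true

    odds = prior / (1 - prior)
    for _ in range(200):        # 200 small nudges
        odds *= likelihood_ratio

    posterior = odds / (1 + odds)
    print(round(posterior, 4))  # ~0.9999: sheer volume wins

The only defense in this toy model is to stop updating on unverifiable sources entirely, which is exactly the "ain't nobody got time" heuristic.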
The more important thing is to plant a core idea in someone's brain. If someone thinks that liberals are all degenerate scum who want to degrade society, no amount of anti-Republican propaganda is going to convince them otherwise.
We're only about 6 months into AI tools being openly available and this is what they are producing - and there is still a basic bar for using them.
What's interesting about the soldier image is that it has the same emotional impact at first glance as if it were real, before your critical faculties engage to sort it, which means its effect has already occurred. If your feeds are full of indignation and outrage, it doesn't matter whether it's real or not; you're going to have a physical and uncritical association with the sensations it creates. It's a straight Pavlovian response. Your perceptions literally come through a feed.
Maybe we recognize how thoroughly propagandized we are already, and these examples are a merciful uncanny valley that can let us step back and really question the shit we are letting pile up in our psyches. Even as a self check, do a word association exercise and then ask how closely your associations reflect objective or even an ideal reality. I do these occasionally to test the quality of my beliefs, the results are reliably poor. Apprehending anything close to reality at all requires constant vigilance and asking how you know the things you know, and we're just at t=6mos, what does t=36 look like?
Very interesting. But the most concerning thing about propaganda is still more basic than that: people don't know, or are in denial, about its use against them, and more generally in all wars.
Wars are strategic actions by nations. But humans generally will not engage in mass killing for strategic reasons. They need moral reasons. So propaganda is necessary for warfare in order to frame war in a moral manner. The enemy or enemy leaders are depicted as evil or inhuman. Or their most despicable acts are emphasized to create a sense of 'morally-justified' hatred or the idea that they must be stopped or punished at all costs. Such as killing millions of people if necessary or destroying a country.
Actually, if it serves their interests and especially if all of their neighbors are not protesting, humans will go along with pretty much anything. But you do need to at least give them a cover story.
Technology should theoretically be able to help reduce the influence of propaganda through things like new types of decentralized news distribution.
> Technology should theoretically be able to help reduce the influence of propaganda through things like new types of decentralized news distribution.
Why and how? Maybe I don’t know what “decentralized news distribution” means, but whether or not it’s decentralized seems irrelevant to me. People pick sources to follow and share news with others; if those sources are producing propaganda, then people are amplifying propaganda.
If only a few people at an event publish photos and video, there's a lot of dead space/time in which to insert stories; but if nearly everyone publishes their video feed, the majority of the event will be locked down.
And not just multiple angles of something, but distributed images through the entire crowd while it's happening. Say they show a fight: does the rest of the crowd move appropriately to make room for it? If they show a politician speaking, does the crowd surrounding the closer videos cheer in time with the distant crowd, or are they perhaps spliced together?
Forensics will get harder and only more data will give us a chance. Ultimately, analyzing the data is easier than faking it consistently and scale is our advantage.
Off topic: scrolled a little bit past the thread and almost got caught up in Twitter’s hate-bait algorithm. Sexual imagery I didn’t ask for. This is why I need to remember never to follow links to Twitter, no matter what, ever.
One of the hazards of browsing HN: old darlings like Twitter will be tolerated even after they become NSFW by default.
There is an open-source browser extension called Privacy Redirect which will turn all Twitter links you click into Nitter links [0].
This also turns Reddit links into Libreddit/Teddit links, YouTube links into Invidious links, etc.
Basically you get to browse an Internet without intrusive pre-roll ads or outrage algorithms. I think based on your comment that this might be of interest to you.
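For the curious, the core trick is tiny; roughly this, in Python terms (nitter.net and yewtu.be are just example frontends here, not necessarily what the extension actually uses):

    # Rewrite known hostnames to alternative frontends, keep the rest of the URL.
    from urllib.parse import urlparse, urlunparse

    FRONTENDS = {"twitter.com": "nitter.net", "www.youtube.com": "yewtu.be"}

    def redirect(url: str) -> str:
        parts = urlparse(url)
        return urlunparse(parts._replace(netloc=FRONTENDS.get(parts.netloc, parts.netloc)))

    print(redirect("https://twitter.com/some_user/status/123"))
    # -> https://nitter.net/some_user/status/123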
Thanks. I've tried it out of curiosity after reading your comment, and found Libredirect extremely confusing and frustrating:
- it rewrites URLs in content that I write, so when I posted an HN story about a YouTube video, it changed the URL to some random Invidious instance
- its icon is not shown in the extension list in Firefox, so I never know whether it's on or not
- it seems to not work on Youtube anymore after I disabled it temporarily, despite the toggle being back on
- to change the config, I can't just click the icon (like I would in, say, uBlock Origin or even Privacy Redirect), no, I need to go to Extensions > Libredirect > Click the three dots > Settings
I feel like the promise and the actuality of this plugin are worlds apart, sadly.
The source in the last tweet says it's AI-generated. I'm also not completely sure whether the writer was trying to pass the image off as legit; they don't directly say so.
The source[0] marks the image as such: "Иллюстрация на обложке: изображение сгенерировано искусственным интеллектом" ("Cover illustration: image generated by artificial intelligence"), which I think is probably the best "tell"
So, like almost every news website, they added a title picture to the article to drive engagement. The problem here is that they used a picture where a face was visible; they should have used some generic image instead.
The whole trend of adding stock pictures to everything only creates distractions. It's bad for the reader, good for the publisher. The pictures are either useless or misleading.
For the record: the weird appspot.com URL is a hack to circumvent internet censorship inside Russia (temporary domain basically), the canonical URL is https://baikal-journal.ru/2023/04/24/my-dlya-komandirov-kak-... , this is quite well known regional outlet ("Lyudi Baikala"/"People of Baikal").
I have little fear about AI generating propaganda. It's cheap to write a crappy article and fake a photo or two - or choose a real photo but twist the story around the photo.
What I worry about is "artificial / generated consent". You read some upsetting story, and your skeptical brain holds it at arms length. Then you read commentary in a forum you trust and you see message after message of thoughtfully worded support for some position. I think reading gobs of "informed real people" commentary is far more persuasive - and subtly so - than reading an article from someone you KNOW is pushing a specific perspective.
I like to believe I'm an independent thinker, but a big part of my process is to seek out many different points of view and judge which feel well supported and well reasoned. Consensus DOES play a role in my judgement forming. If consensus is easily faked, yikes.
>I like to believe I'm an independent thinker, but a big part of my process is to seek out many different points of view and judge which feel well supported and well reasoned.
Ultimately that's the best most people can do, short of intensive 'independent' research, which on most topics outside your personal expertise generally isn't entirely possible (even if you have good research skills, there are time limitations).
>Consensus DOES play a role in my judgement forming. If consensus is easily faked, yikes.
Even prior to widespread AI tools this has been a strong method in information warfare. That's why it's detrimental not to show dislikes/downvotes in rating systems: it can make a far greater consensus appear to exist where there is actually far more disagreement on a topic.
As far as I'm concerned, the removal of those metrics serves exactly that purpose.
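A toy illustration with made-up numbers: hide the downvotes and a deeply contested post becomes indistinguishable from a genuinely popular one.

    # Two posts display identically in a likes-only UI despite very different consensus.
    posts = {"contested": (1000, 900), "popular": (1000, 10)}  # (upvotes, downvotes)

    for name, (up, down) in posts.items():
        print(f"{name}: shown as {up} likes, actual agreement {up / (up + down):.0%}")
    # contested: shown as 1000 likes, actual agreement 53%
    # popular: shown as 1000 likes, actual agreement 99%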
Political ads serve no meaningful purpose at this point; it would be better to ban all of them. Most do more harm than good by presenting misleading information, or by using fear, hyperbole, etc. to provoke an emotional reaction. When I receive any political advertisements in the mail I don't look at them; they go straight into the recycling bin - even for issues and politicians I would vote for.
> Political ads serve no meaningful purpose at this point; it would be better to ban all of them...When I receive any political advertisements in the mail I don't look at them
You're proposing to ban speech that, by your own admission, you do not read.
Amateur prompt work for sure. I could make a convincingly realistic (although not indistinguishable) image and I’m by no means an expert. All the major giveaways cited in the thread can be cleaned up by iterating and retouching with other AI tools. There are people out there using MJ and DALL-E plus other tools 8+ hours a day making mind blowing stuff.
They're not trying to pass it off as real though, the YouTube description is “An AI-generated look into the country's possible future if Joe Biden is re-elected in 2024.”
Basically, it says "this was cheaper than conventional filmmaking techniques."
There are all kinds of propaganda: white, grey, and black. The use of deepfakes probably belongs to the black category. But what I've always found more intriguing is the white propaganda; it provides interesting historical insights into major events in world politics.
Thus, can someone with access to DALL-E 2 or similar feed it the public archives of propaganda posters [1], generate a few samples, and deliver these for a discussion here at Hacker News? It would be interesting to see what kind of white propaganda an AI and its users would generate!
Maybe I'm chronically online, but I can instantly see if something was AI-generated.
I've even had AI-generated LinkedIn profiles message me, I guess for scams or hacking or something.
Yeah, I would like to credit my powers of AI detection also, but here's the thing: how would you know if you had seen an AI-generated piece of media that perfectly mimicked what a human could do?
To illustrate: The Last Samurai (2003) was a special-effects-heavy movie, but the casual viewer would barely have noticed any SFX, and that was 20 years ago. We are rapidly approaching a similar moment with AI image generation capabilities, if not already past it.
I have a theory that propagandists' jobs are safer than they would seem.
Thing is, when you're arguing online with some political consultant who four years ago was convinced Joe Biden was senile (because they supported Amy Klobuchar), who is now just as convinced he's sharp as a razor and ready for another term, are they really trying to convince you with their arguments?
No, I don't think so. I think the whole point is what they're doing to themselves. Look at me, how loyal I am. I won't stop at anything to win. I don't care if my past words indict me, that was then, this is now and everything is at stake! Winners never quit! Never mind what you think, don't you feel my determination?
So it's not about the quality of the words that come out of their mouth. GPT4 can surely produce much better words, but that's not what's supposed to convince. It's the example, the example of fanatical organizational-personal loyalty, that's supposed to convince. GPT4 would need to lie and convince people it's a real person - or rather, many real people - in order for that sort of thing to work. But political consultants are already doing that at scale. It's probably not any better at lying, and even if it is, what they have is good enough.
Swap in any other politicians for Biden and Klobuchar, obviously. It's not a left/right thing.
But who are they signaling to when they do this? Not the person they're arguing with. Not me, either, if I'm reading the conversation. To me, they're signaling "I refuse to actually consider anything the other person says. All I will do is argue and repeat my talking points. I might as well be a robot."
They're signaling to the people who pay them, and to the people who might hire them in the future, not to their readers. To their readers, they're kind of anti-signaling. I mean, are many people genuinely persuaded by whoever can yell the longest? (Because that's what they're doing. They always reply with something, and that something is never an indication that the other side might have even a shred of a point. But is anyone actually persuaded by some "brick wall" posting the last word at the end of some long back-and-forth?)
I think you've got it: the point isn't persuasion, the point is exhaustion and disengagement. If you can keep shouting until the other side goes home, your opinions become more prevalent. You can then make your opinions so prevalent that other folks assume there's no point to engaging (voting? commenting?) because of how outnumbered contrary opinions are. At that point, your opinion has become reality. It relies on psychological tricks to subvert the one-voice-one-vote rule (cf. State Street).
Well, I'm sure some of them are signalling to people who pay them, too. But I think it goes well beyond people who are paid. They're telling the world "This is what it means to be a [republican/democrat/etc.], and if you aren't like that yourself, you're with the enemy!". It's like an ultimatum to commit.
We need AI regulation quickly. One interesting requirement: AI-generated articles should have to carry a validation mark identifying them as AI, and AI platforms should be required to prove their content came from that AI.
When people can run AI locally, what good is regulation going to do? Laws don't stop crimes; they just mean perpetrators get punished, if caught. It's the internet: they likely won't even be on the same continent, let alone in the same country. Propaganda from malicious actors abroad isn't going to be prevented by regulation; they're going to have entire departments dedicated to it, and so is Uncle Sam.
That's like licensing all photocopiers to make sure the state knows who can produce propaganda. Not only is it an obvious rights violation, but it will never work.
If watermarking is the solution it needs to be applied to the legitimate content.
Yeah, I am not qualified to develop the solution, but I am qualified enough to agree that this is going to be a difficult, slippery slope.
What I thought was going to be a massive cyber-war between Russia/Ukraine/NATO/US/China did not turn out to be one.
But once we have the first major cyber attack on [infrastructure] with an AI-based crawling weapon, such as an AI-developed STUXNET/DUQU, we will have crossed the threshold into the next information era.
Is the accompanying story untrue, or is this just a case of someone using an AI tool to create a generic stock image for an otherwise-truthful article, and then the provenance of the image being lost?
I didn't read the story, but the outlet is quite well known ("Lyudi Baikala"/"People of Baikal"), and they mention at the end of the article that the illustration is AI-generated. As I see it, the confusion started with the Twitter user ChrisO_wiki, who retells independent Russian media stories in the form of Twitter threads and used the image without mentioning that it was AI-generated.
> they mention at the end of the article that the illustration is AI-generated
This is a really dark pattern, clearly used for deception with plausible deniability (the article gets reposted with the embedded picture; everyone sees the picture and assumes it's a photo without checking; "it's not us, we warned you it's generated", via a tiny line, decoupled from the picture, that nobody ever paid attention to).
Such use of generated images is extremely myopic and will backfire, or already has, sowing doubt in everything they and their side write, regardless of their credibility. The case will also be heavily abused by the opposing propaganda.
People with an agenda started quote-tweeting ChrisO_wiki's English translation thread with captions like "look at this blatant pro-Ukrainian propaganda", and it got 1.5M views.
Hmm, if that's all this is, I'm not sure I would count this as "AI Generated Propaganda". It's not really someone using AI tools to try to fool people, it's just a case of poor communication.
Well, everybody does propaganda. When they say they don't do propaganda, that's propaganda.
Governments engaged in geopolitical conflicts do it 24x7x365.
One of the things that an adult has to face growing up is that at least some of the ideas you hold in your head have been cooked up in a think-tank or secret government bureau.
EDIT: If you haven't realized this, then change some to a lot.
So then why wouldn't they do a better job, to avoid detection as "propaganda to suggest that your enemies do propaganda"? Seems likely it's propaganda to suggest that your enemies do "propaganda to suggest that your enemies do propaganda"...
Nobody should be trusting anything they see online anyways. Trump and his whole cohort were real people spewing real lies all the time, so is it really that different if it's a Midjourney photo with some GPT text? We've been in an age of needing to check your facts against multiple reliable sources for some time, and I think this might actually accelerate some important developments that will help us combat not just AI but also lying leaders. Information needs to be cryptographically signed by the device/creator/journalist/organization, and reputation needs to be tracked.
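A minimal sketch of the signing half, assuming Python's 'cryptography' package (Ed25519 here; key distribution and reputation tracking are the actual hard parts and are left out):

    # The outlet signs its content; anyone holding the published key can verify it.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    outlet_key = Ed25519PrivateKey.generate()   # held privately by the outlet
    public_key = outlet_key.public_key()        # published for readers

    article = b"article text plus hashes of any attached images"
    signature = outlet_key.sign(article)

    try:
        public_key.verify(signature, article)   # raises if the content was altered
        print("verified: unmodified and from this outlet's key")
    except InvalidSignature:
        print("rejected: altered, or not from this outlet")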
Tucker Carlson's texts have exposed the fact that he was lying throughout the Trump presidency. The number one news anchor on the most watched channel in the US was lying to the public and then privately texting with friends and colleagues about it. He should be shunned from public media for good now.
The internet has been full of crap for years, so my prediction is that AI-generated content has a burst of utility early on for bad actors, but it will quickly be normalized and cast aside like most of the garbage we wade through today.
No amount of propaganda can defeat what’s been implanted by teachers in school. It takes almost zero technology to require a child to believe a lie as a condition of their freedom. Don’t bother with AI; just put it on a test. Belief of anything beyond that point is controlled by cognitive dissonance.
AI will bring out the "devil" in our collective consciousness. Midjourney is filled with dark images. People that gravitate to the dark side found an easy way to evoke it and put it out into the world.
Political news will follow.
Were you around when Internet 2.0 was supposed to bring democracy all over the world?
It has brought a slew of right wing populists.
We need to stop viewing technology as neutral, and "oh it will all turn out nice, like the railways did". These are tools which manipulate the human psyche, and not the same as stagecoaches to be disposed of.
Populism is implicitly an argument that, on some issue or portfolio of issues, there is a coherent will of the decisive majority of all people, such that the opposing view on the issue(s) is democratically illegitimate (because it's an elite interest, or a concentrated interest in a collective action problem). Populism is thus an implicit rejection of pluralism, the idea that groups of people are complicated and contain multiple overlapping and often incoherent sets of issue preferences, and that the goal of politics is to accommodate that.
Populism can be tactically useful in a democracy, especially if the ground truth on some issue is that its politics really have been captured by elite interests, and that all of the variance in opinion on that issue comes down to "elite vs. popular". But those issues are pretty rare, and as a formula for the broader project of governing, populism is a disaster.
> can't think of anything more democratic than appealing to the majority of the people
Populism, pejoratively, means rampant majoritarianism, a known failure mode of democracies. Unfortunately it’s a term, like "liberal", that’s been expanded to the point of meaninglessness.
The railroads did not enable the Nazi genocides, they merely shaped them.
Rwanda killed a higher percentage of people, more quickly, without any advanced infrastructure. Without railroads the Nazis would have put the camps closer to the population centers or killed people where they lived. During that period the same technology and eventually even the same rails enabled the Allies to break the Nazis and free the remaining prisoners.