He sells a storytelling course... perhaps this is meant to be a 'gotcha' where he reveals the con after the fact? My guess is there are people reading this who know something isn't quite right.
I can personally guarantee that this piece is 100% non-fiction. My course also focuses on narrative writing techniques. Does 'storytelling' have to imply fiction? In Chinese, a "story" (故事) is just something that happened; it can be real or fictional.
There are a lot of subtleties about connotation here. I would say that "storytelling" traditionally primarily meant fiction, but some modern uses also include narrative technique generally, including nonfiction and also marketing. There may also be older traditions of nonfiction storytelling, but that has some connotation of a ritualized or formalized activity (e.g. children sitting in a circle listening to a recitation).
The term that has no connotations of fiction is probably "narrative".
I think many languages have closely related words for fictional narratives and nonfictional narratives.
I don't think that's true. Perhaps in your dialect of English, but if I was down the pub and someone started with, "Did I ever tell you the story of when I...", I certainly wouldn't assume it was fictional.
I think "tell you the story of" has a different connotation from "storytelling"!
E.g. if you said someone was good at "storytelling" as a skill, then I would expect it to be most likely fictional. I agree that "tell you the story of..." could easily be nonfictional.
His story recounts some of your paragraphs word for word, e.g. the execution of the murderer who hacked another man with an axe, right down to the judge uttering the exact same phrase, the same anecdote about using sorghum liquor for the smell, that "the whole Gobi desert smelled of liquor", etc. It's too much coincidence for the two of you to write the exact same things, word for word, all from memory, after so many years.
Did you write the exact same paragraphs by chance, were the two blog posts a collaborative effort, did you get together and pool your recollections, or what?
It is no coincidence. In a secluded and tightly-knit community like Plant 404, an extreme event like this would immediately spread throughout the entire area.
The author you mentioned is Li Yang. We know each other, and our parents know each other as well. He published his piece before I did. Since the person involved was his classmate, he was able to provide more first-hand details, such as the part about riding a bicycle to see that boy.
When we had safety education at school, the teachers would still use examples from twenty years ago—like someone getting hit by a car. This is how it was in the plant: once something happened, people would keep talking about it for twenty years.
But right down to the same figures of speech, "the Gobi desert must have smelled of sorghum liquor", the exact same utterance by the judge, etc? Even the photos seem taken by the same person (in composition, style, etc).
It's the kind of unverifiable story that we would like to believe, but there's almost zero way of having independent confirmation. The photos could be from anywhere. The author seems likeable and writes an interesting story, but who knows how much of it is true.
The story seems almost tailored to cater to HN, with secret projects, nuclear power, China, and secrecy.
Agreed. In my opinion, there's too much strange embodied experience in this engaged and engaging Part 1.
If I told you stories from my childhood as a 10-year-old child of an undercover operative in West Germany in 1962-1963, I think many would claim “fiction”. If I did not have my sister as an independent memory backup, even I might have doubts. She was lucky and unlucky, and she had a big brother.
There was a lot of weird stuff going on in China in the 70s and 80s (and perhaps into the early/mid 90s). Any Gen X Chinese adult will have a lot of stories to tell, like what it was like to join the Tiananmen Square protest in 1989 (my gf in college was from Beijing). I wouldn't discount this story at all based on its contents, and it just wouldn't be worthwhile to make it up, so let's give him the benefit of the doubt.
As an American Gen Xer, I don't think very exciting things happened in our youth. We were kind of rich, kind of broke; we had recessions but not upheaval, not hardship, not a society that had been closer to what North Korea is today liberalizing at a rapid clip. I could be romanticizing it as an outsider, but I think Chinese Gen Xers have much better stories to tell than we do.
Speaking as someone neither from the US nor from China: you sure did your share of weird things too. So yes, I think you are romanticizing it as an outsider.
Almost all of the stories we get told in the West are from the US perspective, so there's that: anything from China feels fresh in comparison.
As an American, the US perspective is pretty dominant in the US. But still, I never went through a protest that ended in a massacre, I never had to apply for travel permits to leave my town, nor did I need an exit permit to travel abroad. My first trip to China was in 1999, and things were pretty trippy even that late in their development.
The US...what sort of stories do you get told? Are they experiences that Gen X had in general, or just outliers that perhaps were glamorized by Hollywood? Let me tell you, we really didn't have much going on in general.
> But still, I never went through a protest that ended in a massacre before
Yet these happened in the US. Bizarre and secret government projects also happened. Executions also happened.
That you didn't witness them doesn't mean much. I'm sure most Gen X Chinese, as you call them, had pretty uneventful lives without any massacres either. I do think this is a case of laser-focusing on those who had more "interesting" lives, much like focusing on US antiwar activists who got shot or imprisoned during Vietnam War protests, or on KKK activity: interesting, but surely not the norm.
> I never had to apply for travel permits to leave my town, nor did I need an exit permit to travel abroad.
Doesn't seem too exciting to me. It does reinforce the narrative that China = bad, US = good (though this is harder to believe in the Trump era). But it's not something particularly interesting to read about, plus every HN reader "knows" this is life in China, they are authoritarian, etc etc.
> Yet these happened in the US. Bizarre and secret government projects also happened. Executions also happened.
Are you confusing GenX with Baby Boomers?
> I'm sure most Gen X Chinese, as you call them, had pretty uneventful lives without any massacres either.
Most? Maybe, I've never met one that hasn't though. So maybe the selection of people I meet is biased?
> It does reinforce the narrative that China = bad, US = good (though this is harder to believe in the Trump era).
Something that was true pre-1995 hardly says anything about China today. Stop reading into supposed western bias where there is none. You would never compare China to North Korea today, but 30 years ago there were some remaining resemblances that quickly dissipated as China hit 2000.
> plus every HN reader "knows" this is life in China, they are authoritarian, etc etc.
Again, you are just projecting some sort of insecurity with this statement.
> Most? Maybe, I've never met one that hasn't though
I'm sure you acknowledge you're not an expert on general Chinese experience. You were an expat, surely while your first-hand experience was valuable it was also heavily limited to what a Westerner in China would see and be told?
> Something that was true pre-1995 hardly says anything about China today.
"Hardly says anything" is a bonkers statement. The recent past of any country definitely says something about its present. We agree China 30 years ago was different from China today, but what does it have to do with anything?
Do you disagree there's a strong anti-China bias on HN? (Whether justified or not).
> Again, you are just projecting some sort of insecurity with this statement.
Insecurity? I think it's an accurate assessment of groupthink about China here. I may have misinterpreted what you were trying to say though, in which case I apologize.
I was invited to lunch near Factory 541, tank city, a pseudo-closed area sprawled across a Shanxi valley. It turned out to be lunch and a show: they were going to execute some drug traffickers from the Strike Hard campaign. An impromptu don't-do-drugs lesson from an uncle. We had to turn around because I had naturalized Western citizenship and a weird dialect by then, and they figured security would not let us through. It was a pretty surreal experience versus how nice and insular danwei life was otherwise.
That makes sense. I’ve heard harsher stories in China.
I lived in West Richland, Washington as a kid; my dad worked at Hanford, a giant nuclear reservation in the western USA. It was mostly typical American kid life, so nothing on your experience, except my dad eventually died of a rare cancer and we got a settlement from the US Department of Energy.
I spent 9 years living in Beijing but first visited in 1999, when things were still kind of brutalist. I’ve had a couple of experiences with the PLA (living in a building where I wasn’t supposed to be living, and some areas on the border that are off-limits to foreigners, which they don’t tell you about).
Feels AI-ish as well, and OP used em-dashes in some of their replies. But it could be attributed to a language barrier of sorts requiring the use of LLMs to communicate.
It was published in China many years ago, and it's nonfiction. I just used AI to translate it to English. And why would I make something up to cater to HN, like the nuclear power stuff?
I apologise. I write too and I've been bothered by LLM-generated content masquerading as the work it takes to tell an effective narrative. It was the combination of generated responses in the comments alongside what I thought was a generated image that set me off, but I was clearly being far too militant.
Why... not? Hardly adult-only content, and kids famously like everything Disney until a certain age, seems like a good thing to instill into kids, rather than "Top 3 reasons Disneyland might kidnap Donald Duck" or whatever the alternative would be.
I expected it to be far too lengthy and a bit dry for a kid. But nope, he was captivated. He absolutely loves the combination of engineering and illusion.
That’s so great! My dad exposed me to computers at a very young age. That led to a career in software engineering. You never know what a kid will find interesting and what it may lead to later in life.
Technically the lords don't have the same power as an elected second house in other bicameral legislatures, that's correct. They can propose bills and delay bills proposed by the House of Commons, though, and the latter can be enough to kill momentum during some parliaments.
Stallman is stalwart. The dogma is the point and his obstinate, steady nature is what I love best about him. Free software continues to be incredibly important. For me, he is fresh air in the breathlessly optimistic, grasping, and negligent climate currently dominating the field.
He's famously a curmudgeon, not lazy. How would you expect him to respond?
> Totally ignoring the history of the field.
This criticism is so vague it becomes meaningless. No-one can respond to it because we don't know what you're citing exactly, but you're obviously right that the field is broad, older than most realise, and well-developed philosophically.
> Ignoring large and varied debates as to what these words mean.
Stallman's wider point (and I think it's safe to say this, considering it's one that he's been making for 40+ years) would be that debating the epistemology of closed-source flagship models is fruitless because... they're closed source.
Whether or not he's correct on the epistemology of LLMs is another discussion. I agree with him. They're language models, explicitly, and embracing them without skepticism in your work is more or less a form of gambling. Their undeniable usefulness in some scenarios is more an indictment of the drudgery and simplicity of many people's work in a service economy than conclusive evidence of 'reasoning' ability. We are the only categorically self-aware & sapient intelligence, insofar as we can prove that we think and reason (and I don't think I need to cite this).
> He's famously a curmudgeon, not lazy. How would you expect him to respond?
Not lazily, clearly. You can argue he's not lazy, but this is a very lazy take about LLMs.
> Stallman's wider point (and I think it's safe to say this, considering it's one that he's been making for 40+ years) would be that debating the epistemology of closed-source flagship models is fruitless because... they're closed source.
You are making that point for him. He is not. He is actively making this fruitless argument.
> This criticism is so vague it becomes meaningless. No-one can respond to it because we don't know what you're citing exactly, but you're obviously right that the field is broad, older than most realise, and well-developed philosophically.
I don't get what you're missing here, then. It's a broad field and LLMs are clearly within it; you can only say they aren't if you don't know the history of the field, which in this case is either laziness or deliberate, because RMS has worked in the field. I notice he conveniently classifies some of his own kind of work in this field as "artificial intelligence" that somehow has understanding and knowledge.
> embracing them without skepticism in your work
That's not a point I'm arguing with.
> as we can prove that we think and reason (and I don't think I need to cite this).
Can we? In a way we can test another thing? This is entirely distinct from everything else he's saying here as the threshold for him is not "can think and reason like a person" but the barest version of knowledge or understanding which he attributes to exceptionally simpler systems.
> Not lazily, clearly. You can argue he's not lazy, but this is a very lazy take about LLMs.
Feel free to check out a longer analysis [1] (which he also linked in the source).
> You are making that point for him. He is not. He is actively making this fruitless argument.
Are we reading the same thing? He wrote:
> Another reason to reject ChatGPT in particular is that users cannot get a copy of it. It is unreleased software -- users cannot get even an executable to run, let alone the source code. The only way to use it is by talking to a server which keeps users at arm's length.
...and you see no connection to his ethos? An opaque nondeterministic model, trained on closed data, now being prepped (at the very least) to serve search ads [2] to users? I can't believe I need to state this, but he's the creator of the GNU license. Use your brain.
> I don't get what you are missing here then. [...] I notice he conveniently puts some of his kind of work in this field as "artificial intelligence" that somehow have understanding and knowledge.
You're not making an argument. How, directly and in plain language, is his opinion incorrect?
> Can we? In a way we can test another thing [...] to exceptionally simpler systems.
Yes... it is one of very few foundational principles and the closest thing to a universally agreed idea. Are you actually trying to challenge 'cogito ergo sum'?
> ...and you see no connection to his ethos? An opaque nondeterministic model, trained on closed data, now being prepped (at the very least) to serve search ads [2] to users? I can't believe I need to state this, but he's the creator of the GNU license. Use your brain.
You seem very confused about what I'm saying so I will try again, despite your insult.
It is extremely clear why he would be against a closed source thing regardless of what it is. That is not in any sort of a doubt.
He however is arguing about whether it knows and understands things.
When you said "debating the epistemology of closed-source flagship models is fruitless" I understood you to be talking about this, not whether closed source things are good or not. Otherwise what did you mean by epistemology?
> Feel free to check out a longer analysis [1] (which he also linked in the source).
Yes, I quoted it to you already.
> You're not making an argument. How, directly and in plain language, is his opinion incorrect?
They are AI systems by long standing use of the term within the field.
> Yes...
So we have a test for it?
> it is one of very few foundational principles and the closest thing to a universally agreed idea. Are you actually trying to challenge 'cogito ergo sum'?
That is not a test.
I'm also not sure why you included the words "to exceptionally simpler systems" after snipping out another part, that doesn't make a sentence that works at all and doesn't represent what I said there.
> You seem very confused about what I'm saying so I will try again, despite your insult.
I'd call it an observation, but I'm willing to add that you are exhausting. Confusion (or, more likely a vested interest) certainly reigns.
> It is extremely clear [...] Otherwise what did you mean by epistemology?
We are talking about both because he makes both points. A) Stallman states it possesses inherently unreliable knowledge and judgment (hence gambling) and B) When someone is being imperious there is a need to state the obvious to clarify their point. You understood correctly and seem more concerned with quibbling than discussion. Much in the same way as your persnickety condescension I now wonder if you know and understand things in real terms or are simply more motivated by dunking on Stallman for some obscure reason.
> They are AI systems by long standing use of the term within the field.
No. ChatGPT is not. It is marketed (being the operative term) as a wide solution; yet is not one in the same manner as the purposeful gearing (whatever the technique) of an LLM towards a specific and defined task. Now we reach the wall of discussing a closed-source LLM, which was my point. What I said previously does not elide their abstract usefulness and obvious flaws. Clearly you're someone familiar, so none of this should be controversial unless you're pawing at a discussion of the importance of free will.
> Yes, I quoted it to you already.
I'm aware. Your point?
> That is not a test.
The world wonders. Is this some sort of divine test of patience? Please provide an objective rubric for proving the existence of the mind. Until then, I'll stick with Descartes.
> I'm also not sure why you included the words "to exceptionally simpler systems" after snipping out another part, that doesn't make a sentence that works at all and doesn't represent what I said there.
Must I really explain the purpose of an ellipsis to you? We both know what you said.
> We are talking about both because he makes both points.
He does. I am talking about one of those points. The epistemology side is entirely unrelated to whether closed source things are good or bad.
> A) Stallman states it possesses inherently unreliable knowledge and judgment (hence gambling)
He says it does not contain knowledge, not that it is simply unreliable. He happily says unreliable systems have knowledge.
> No. ChatGPT is not. It is marketed (being the operative term) as a wide solution;
A wide solution is nothing to do with things being called AI. This is true both in the field of AI and how RMS defines AI. You can argue that it doesn't meet some other bar that you have, I don't really care, I was responding to his line and why it does not make sense.
> The world wonders. Is this some sort of divine test of patience? Please provide an objective rubric for the proving the existence of the mind. Until then, I'll stick with Descartes.
You said we could prove we could think. I asked if we can do that *in a way we can test other things to see if they can think*, because I do not think such a test exists. You said yes, we do have a test. Now you're complaining that such a thing does not exist.
> Now we reach the wall of discussing a closed-source LLM, which was my point.
Ok, it's nothing to do with what I'm talking about though (which should have been extremely clear from what I wrote originally, and given my explicit clarifications to you), so you can have that conversation with someone else.
> Must I really explain the purpose of an ellipsis to you? We both know what you said.
You need to explain how you're using them, because what you're doing to my sentences makes no sense. Here's what I said, and why, broken down for you:
> Can we? In a way we can test another thing? This is entirely distinct from everything else he's saying here
The question of thinking is entirely distinct from what he is saying. It is not at all related, it is a side thing I am responding to you about.
> as the threshold for him is not "can think and reason like a person"
This is key. We are not talking about comparisons to people. He does not talk about thinking or reasoning. He is talking about knowledge and understanding.
> but the barest version of knowledge or understanding
And not even a large amount, we are not comparing to humans, smart humans, animals, etc.
> which he attributes to exceptionally simpler systems.
He says trained yolov5 models have knowledge and are AI.
He says that xgboost models have it and are AI.
He says that transformer based and closed source models are AI.
Let it go, man. I think that you are wilfully misinterpreting what both he and I are saying and being obtuse to boot. Whatever the case, we're clearly not going to convince one another.
Most of this is very simply telling you what he's said. I'd recommend you read the link you pointed me to, because it contains exactly what I've told you is in there. Closed source transformer models and decision trees and yolo models have knowledge and are AI, chatgpt he argues does not. That's not an argument I'm trying to convince you of, I'm just telling you exactly what he's written.
Not everyone is Rupi Kaur. Speaking for the erstwhile English majors, 'formal' prose isn't exactly foreign to anyone seriously engaging with pre-20th century literature or language.