
> She was given some anti-psychotics and sent away

But that confirms the main point of the experiment, which was that people who didn't need psychiatric treatment were given it anyway.

It's only of secondary importance that the prescribed treatment changed from hospitalization in 1973 to drugs in 2004. The primary point is that there was no objective way to determine who genuinely needed treatment. She didn't, but was diagnosed anyway.

This objection is so obvious that she must have addressed it in the book. Do you remember if she did?


I happen to have the book handy.

> HERE’S WHAT’S DIFFERENT: I was not admitted. This is a very significant difference. No one even thought about admitting me. I was mislabeled but not locked up. Here’s another thing that’s different: every single medical professional was nice to me. Rosenhan and his confederates felt diminished by their diagnoses; I, for whatever reason, was treated with palpable kindness.

Seems she would disagree with your assessment that being prescribed some likely-harmless pills is the same as losing your freedom.

There's also a section earlier where she presents an argument that the actual finding of the study is that mental healthcare is not set up to handle adversarial or dishonest patients, which is still a problem, and a tough one to solve.


Mental healthcare does care about dishonest patients in some cases, mainly where it's an avenue for drug-seeking. But no-one's trying to get ahold of anti-psychotics for recreational purposes.

Publications could use watermarking to encode the name of the account an article is being served to, but they don't seem to. I wonder why.
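(For concreteness, here's a toy sketch of the kind of thing I mean: stuffing the account name into zero-width characters. Purely illustrative and names made up; a real scheme would need to survive copy/paste, reformatting, and deliberate stripping.)

    # Toy per-reader watermark: encode an account name as zero-width characters.
    # Illustrative only; not robust against normalization or stripping.
    ZW0, ZW1 = "\u200b", "\u200c"   # zero-width space / zero-width non-joiner

    def watermark(text, account):
        bits = "".join(format(b, "08b") for b in account.encode("utf-8"))
        payload = "".join(ZW1 if bit == "1" else ZW0 for bit in bits)
        head, _, tail = text.partition(" ")
        return head + payload + " " + tail  # a leaked copy can be traced back

    def extract(text):
        bits = "".join("1" if c == ZW1 else "0" for c in text if c in (ZW0, ZW1))
        data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits) - len(bits) % 8, 8))
        return data.decode("utf-8", errors="ignore")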

It helps to have a statistician and a geostatistician as your clients!

I do wonder if they already had a feeling it was not supposed to work that way, hence the info gathering. This is one of my all-time favorite IT stories, because the client was right and the engineer was left almost going crazy.

When I was a junior, I asked an honest question of the senior I was working with at the time (great dude). Everyone joked about the "works on my machine" crowd, so I asked him: what the heck do I do if it only works on my machine? He said you have to figure out what's different. It sounds obvious or simple, but if you go in with that mindset, then when someone's stuck at "it works on my machine, idk why", sure enough I ask "what is DIFFERENT about your machine compared to this one?" and it almost always leads to the right answers. It triggers something in our brains. I usually follow up "what's different" with "what was the last change?" in the case of a production issue.


I haven't noticed it being awful again. Can you give examples?


Why the specific application to install scripts? Doesn't your argument apply to software in general?

(I have my own answer to this but I'd like to hear yours first!)


It does, and possibly this launch is a little window into the future!

Install scripts are a simple example of something current-generation LLMs are more than capable of executing correctly with a reasonably descriptive prompt.

More generally, though, there's something fascinating about the idea that the way you describe a program can _be_ the program. Tbh I haven't fully wrapped my head around it, but it's not crazy to think that in time more and more software will be exchanged by passing prompts around rather than compiled code.


> "the way you describe a program _can_ be the program"

One follow-up thought I had was... It may actually be... more difficult(?) to go from a program to a great description


That's a chance to plump for Peter Naur's classic "Programming as Theory Building"!

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

What Naur meant by "theory" was the mental model of the original programmers who understood why they wrote it that way. He argued that the real program is the theory, not the code. The translation of the theory into code is lossy: you can't reconstruct the former from the latter. Naur said that this explains why software teams don't do as well when they lose access to the original programmers: they were the only ones with the theory.

If we take "a great description" to mean a writeup of the thinking behind the program, i.e. the theory, then your comment is in keeping with Naur: you can go one way (theory to code) but not the other (code to theory).

The big question is whether/how LLMs might change this equation.


Even getting the "theory" down on paper in prose will be lossy.

And natural languages are open to interpretation, and a lot of context will remain unmentioned, while programming languages, together with their tested environment, contain the whole context.

Instrumenting LLMs will also mean doing a lot of prompt engineering, which on one hand might make the instructions clearer (for the human reader as well), but on the other will likely not transfer as much of the theory behind why each decision was made. Instead, it will likely focus on copy-and-paste guides that don't require much understanding of why something is done.


I agree that it will be lossy because all writing is lossy.


That theory, or mental model, is a lot like a program, but of a higher kind. A mental model answers the question: what if I do this or that? It can answer this question at different levels of detail, unlike the program, which must be executed completely. The language of a mental model is also different: it talks in terms of constraints and invariants, while the program is a step-by-step guide.


"The map is not the territory" applies to AI/LLMs even more so.

LLMs don't have a "mental model" of anything.


But if the person writing the prompt is expressing their mental model at a higher level, and the code can be generated from that, the resulting artifact is, by Naur's theory, a more accurate representation of the actual program. That would be a big deal.

(Note the words "if" and "by Naur's theory".)


TBH, I doubt that this will happen...

It is much easier to use LLMs to generate code, validate that code as a developer, fix it if necessary, and check it into the repo, than if every user has to send prompts to LLMs in order to get code they can actually execute.

While hoping it doesn't break their system and does what they wanted from it.

Also... that just doesn't scale. How much power would we need if everyday computing starts with a BIOS sending prompts to LLMs in order to generate an operating system it can use?

Even if it is just about installing stuff... We have CI runners that constantly install software, often on every build. How would they scale if they needed LLMs to generate install instructions every time?


That's basically what I was thinking too: installation is a constrained domain with tons of previous examples to train on, so current agents should be pretty good at it.


What are those techniques? I'd like to learn more.


Mostly entropy in its various forms, like KL divergence. But it will also diverge in strange ways from the usual n-gram distributions for English text or even code-based corpora, which all the big scrapers will be very familiar with. It will even look strange on very basic things like the Flesch-Kincaid score (or the more modern versions of it), etc. I assume that all the decent scrapers are likely using a combination of basic NLP techniques to build score-based ranks from various factors in a sort of additive fashion, where text is marked as "junk" when it crosses the "x" threshold by failing "y" checks.
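(A rough Python sketch of the kind of additive scoring I mean; the specific checks and cutoffs are made-up assumptions, not anyone's real pipeline.)

    # Toy "junk" scorer: cheap text statistics, additive thresholds.
    # The cutoffs below are illustrative guesses, not production values.
    import math, re
    from collections import Counter

    def char_entropy(text):
        counts = Counter(text)
        total = len(text) or 1
        return -sum(c / total * math.log2(c / total) for c in counts.values())

    def flesch_reading_ease(text):
        words = re.findall(r"[A-Za-z']+", text)
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
        w = max(1, len(words))
        return 206.835 - 1.015 * (w / sentences) - 84.6 * (syllables / w)

    def junk_score(text):
        score = 0
        if not (3.5 < char_entropy(text) < 5.2):        # too uniform or too random
            score += 1
        if not (0 <= flesch_reading_ease(text) <= 100):  # far outside normal English
            score += 1
        return score  # flag as junk once the score crosses some threshold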

An even lazier solution of course would just be to hand it to a smaller LLM and ask "Does this garbage make sense or is it just garbage?" before using it in your pipeline. I'm sure that's one of the metrics that counts towards a score now.
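(Something like this, say; it assumes the OpenAI Python client, and the model name is just a placeholder.)

    # Toy version of the "lazy" check: ask a small model whether the text is garbage.
    # Assumes the OpenAI Python client; the model name is a placeholder.
    from openai import OpenAI

    client = OpenAI()

    def looks_like_garbage(text):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": "Answer only YES or NO."},
                {"role": "user", "content": "Is the following text incoherent junk?\n\n" + text[:4000]},
            ],
        )
        return resp.choices[0].message.content.strip().upper().startswith("YES")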

Humans have been analyzing text corpora for many, many years now, and we were pretty good at it even before LLMs came around. Google in particular is amazing at it. They've been making their living by being the best at filtering out web spam for many years. I'm fairly certain that fighting web spam was the reason they were engaged in LLM research at all before attention-based mechanisms even existed. Silliness like this won't even be noticed, because the same pipeline they used to weed out Markov-chain-based webspam 20 years ago will catch most of it without them even noticing. Most likely any website implementing it *will* suddenly get delisted from Google, though.

Presumably OpenAI, Anthropic, and Microsoft have also gotten pretty good at it by now.


That's a bittersweet mix of troubles and sweetness. I hope the troubles sort out without too much heartache, and that 2026 brings many good things.


It will be a second dose of Streisand if they do.


> some random homeless guy

Was he homeless? I haven't seen that mentioned in the articles.



New York Post states it in a YouTube video titled 'All About Brown, MIT Shooting Suspect Claudio Neves Valente – who BARKED During Massacre'.


> John posted about the encounter on Reddit after the shooting

Anyone have the Reddit link? (I wonder why the article doesn't include it)




I feel sorry for this guy. His Reddit inbox is probably fucked, and he's absolutely going to get doxxed and hounded by news people, and I wouldn't be surprised if even worse things happened to him.

Good on him for reporting what he saw. He also went to the police the next day and reported it directly. But now the media machine is going to make him regret he ever said anything, which is unfortunate.


He’s already public, but he can make a new Reddit account.

> Now the media machine is going to make him regret he ever said anything

We’ll see how it turns out, but I don’t see why even the internet mob would hate him. He probably can’t live in Brown’s basement anymore, but maybe with the reward money and recognition he can find a real place.


This is admittedly only tangential, but as a non-native speaker / not a US-American, I found this sentence from the NYT reporting[0] a bit confusing:

> John said that the suspect’s clothing was inappropriate for the weather and that they had made eye contact.

Why is the report mentioning the eye contact? Is that culturally significant, as in, in the US you don’t normally do eye contact with strangers, and if a stranger does make eye contact, it’s suspicious?

[0]: https://www.nytimes.com/2025/12/19/us/brown-mit-shooting-inv...


I think the eye contact bit is useful as a signal that the witness got a very good look at the suspect's face.


I think the eye contact in question was a prelude to the two of them kind of following each other around and having a minor verbal altercation, so the later context shows that it was probably kind of suspicious eye contact rather than a friendly "what's up?"


I suppose that made eye contact = the face was clearly visible for a second or two, and thus recognized with more certainty.


I agree with the other comments that this sentence is just poorly written.

In cities people tend to not make eye contact while walking by each other, though in smaller towns it is more common to acknowledge each other in passing.

In neither case would it be accurate to find eye contact suspicious. The sentence appears to be a summation of several things the person saw, conveying them poorly and creating the ambiguity.

