Hacker News: didgeoridoo's comments

The problem is that critical theory at the foundational level (Adorno, Foucault, even Butler) is an extremely useful and coherent way of thinking about power and situated perspective.

Unfortunately, the revolutionary praxis that emerged from it is what we typically see in the academy under the label of “critical theory”, which smuggles in a lot of “liberation” ethics under the guise of critique — so it’s no longer “this is how to think about power”, but rather “power is evil and should be destroyed, or, even better, given to me”. Foucault literally called these people “saviors”, and he didn’t mean it nicely.

(It doesn’t help that this praxis is simplistic, ties into friend/enemy emotions, and gives people “something to fight for” in an era where meaning is hard to come by.)

No matter how contaminated the bathwater, though, I think the baby is probably worth saving.


Part of the problem seems to be that he’s trying to derive a large portion of philosophy from first principles and low-n observations.

This stuff has been well-trodden by Dennett, Frankfurt, Davidson, and even Hume. I don’t see any engagement with the centuries (maybe millennia) of thought on this subject, so it’s difficult to determine whether he thinks he’s the first to notice these challenges or what new angle he’s bringing to the table.


> I don’t see any engagement with the centuries (maybe millennia) of thought on this subject

I used to be that person, but then someone pointed me to the Stanford Encyclopedia of Philosophy which was a real eye-opener.

With every set of arguments I read, I thought "yeah, exactly, that makes sense" — and then I read the counters in the next few paragraphs: "oh man, I hadn't thought of that; that's true too". Good stuff.


Meta commentary: there’s something interesting in the fact that my first instinct was “great, another piece of vibeslop”, which inverted completely to genuine interest when I recognized your username.

The “personal brand” and track record might be getting even more important now that the bar to building something has dropped to the floor.


Can confirm. We just got the kids those giant Magna-Tiles. They’ve played with nothing else today.


Google Gemini has literally no idea how it is being used. It made that up.


Yes, it sounds like a bold statement. I called Gemini out on that, and it admitted that it over-egged its confidence on that assertion.

But presumably the LLMs do have some knowledge about how they are used?

On further probing, Gemini did give a plausible justification — in summary:

"Creation is easy. Selection is hard. In an era of infinite content, the 'most successful' writer isn't the one who can produce the most; it's the one with the best taste. Using an LLM as a distillation machine allows a writer to iterate through their own ideas at 10x speed, discarding the 'average' and keeping only the 'potent.'"


LLMs have no knowledge (really “knowledge-like weights and biases”) outside their training set and system prompt. That plausible justification is just that — a bunch of words that make sense when strung together. Whether you’d like to give that any epistemic weight is up to you.


More like whether one is correct in giving it any epistemic weight. Not everything is opinion; some things really are clearly right or clearly wrong, and attributing thought, reasoning, and analysis to an LLM is one of those things that’s clearly wrong.

Why would Gemini (the text-model part) have that info? I'm sure that Google has some kind of analytics, but it wouldn't necessarily feed into the training data, the system prompt, or distillation directly.


The only plausible avenue I see is Gemini ingesting Google press releases about how cool their AI is.

Leave it to the reader to decide how informative that would be.


I don’t know — that sounds pretty daring and determined to me.


Yikes. The “funnel,” the requirement for instant obedience… this gave very https://elan.school vibes.


Ugh, I wish I could just read things without the ChatGPT “it isn’t X, it’s Y” tic jumping off the page at me.


Yeah, I closed the page after about 10s.

It's wildly easy to spot these days.


Unless these are new-construction luxury apartments (which the article doesn’t specify), no net migration is implied by a sale. Someone sold, someone bought. What a strange article.


The occupancy of apartments, especially luxury ones, cannot be assumed. The average number of people living in them also cannot be assumed to be constant.

Of course the article doesn't mention statistics around either of these.


There's an article about luxury apartments sitting empty, "Condos of the Living Dead" or something similar, that gets passed around in the construction field around these parts.

Notably prescient.


Those who hold other assets that are outperforming real estate can sometimes be the only ones who can actually afford some of these properties, which change hands until they come to rest with such an owner.

At that point, the owner can afford to hold on to them through lean times in anticipation of future appreciation, and in that case it doesn't make much difference whether anyone is living there or not.


If it’s true that both people and investment dollars are flowing into NYC, but investment dollars are flowing even faster than the people, that’s absolutely hilarious.


Packing bunk beds into apartments can easily allow mass migration into cities.

Not that I would want to.


Feels like the article started with a conclusion and worked backwards to justify it.


Get out of here, Nwabudike. This doesn’t concern you.

