Inheritance is not necessary, but then very few programming constructs are absolutely necessary. The question is whether it helps program clarity or not. I think that in some cases, used sparingly, it can. The main danger of inheritance is not that it is OO, but that it is not OO enough. It breaks encapsulation by mixing properties and methods between base classes and derived classes without clear boundaries. Composition is safer because it preserves encapsulation. In general, I think that protected abstract methods are a code smell, because they usually indicate close coupling of details that should be kept separate between the base and derived classes. But used correctly, inheritance can be more succinct and convenient.
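A minimal Python sketch of the trade-off (all names here are illustrative, not from any real codebase): the protected-hook version couples the subclass to the base class's calling conventions, while the composed version depends only on a public contract.

```python
# Inheritance: the subclass fills in a "protected" hook (_body), so it
# must know when and how the base class will call it -- close coupling
# across the base/derived boundary.
class Report:
    def render(self):
        return f"== Report ==\n{self._body()}"

    def _body(self):  # protected abstract method: the hook in question
        raise NotImplementedError

class SalesReport(Report):
    def _body(self):
        return "sales: 42"

# Composition: the collaborator exposes a public method, and the report
# depends only on that contract, not on subclass internals.
class ComposedReport:
    def __init__(self, body_source):
        self.body_source = body_source

    def render(self):
        return f"== Report ==\n{self.body_source.body()}"

class SalesBody:
    def body(self):
        return "sales: 42"

print(SalesReport().render())                  # == Report ==\nsales: 42
print(ComposedReport(SalesBody()).render())    # same output
```

Both produce the same result; the difference is that `SalesBody` can be written and tested without knowing anything about `ComposedReport`, whereas `SalesReport` only makes sense in the context of its base class.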
Two of the three Starbucks locations in my home town removed all of the seating. Across the street from one of them, an independent coffee shop opened up with lots of seating. Whenever I walk by the Starbucks is empty and there are a lot of people inside the independent shop. I have to wonder about their strategy.
The Starbucks near me doesn't even brew coffee any more. They switched to these automatic machines that "brew" a cup in about 15 seconds (i.e. vending-machine quality). It's undrinkable now. In the future I'd only order espresso drinks or cold brew.
I had a similar issue recently. I used the Windows Photo app to import & delete photos from my iPhone. When it finished, I realized that a significant fraction of the photos had been corrupted. Not sure where in the pipeline it happened, or if they were already corrupted on the phone.
It would be interesting to explore further why this topic is still so sensitive for a lot of people, and why the metaphorical talk is still so appealing. I feel like the reasons given in this article couldn't alone explain its enormous staying power (even to this day).
The problem itself is at least centuries old, if not millennia. In his "Essay Concerning Human Understanding" (1689), John Locke phrased the same problem clearly, using different words:
"How any thought should produce a motion in Body is as remote from the nature of our Ideas, as how any Body should produce any Thought in the Mind. That it is so, if Experience did not convince us, the Consideration of the Things themselves would never be able, in the least, to discover to us." (IV iii 28, 559)
"The Ethiops say that their gods are flat-nosed and black, while the Thracians say that theirs have blue eyes and red hair. If oxen and horses and lions had hands and were able to draw with their hands and do the same things as men, horses would draw the shapes of gods to look like horses and oxen would draw them to look like oxen, and each would make the gods' bodies have the same shape as they themselves had."
What with this and your previous post about why sometimes incompetent management leads to better outcomes, you are quickly becoming one of my favorite tech bloggers. Perhaps I enjoyed the piece so much because your conclusions basically track mine. (I'm a software developer who has dabbled with LLMs, and has some hand-wavey background on how they work, but otherwise can claim no special knowledge.) Also your writing style really pops. No one would accuse your post of having been generated by an LLM.
Exactly. In many organizations this is a coordination problem at the organizational level, not an individual lack of initiative. I imagine if someone looked at an army, they would say, hey these guys are just polishing their boots, and filling out paperwork and doing meaningless tasks, why does the military pay their salaries? Well, you need those people there to be ready when the war breaks out. The same thing often happens in corporations.
If you click through to the breakdowns by gender, you'll see the despair numbers are actually higher in young women than in young men. So this isn't a male-only issue.
Like with most things, it takes a lot more conviction to actually do something instead of just attempting it.
For women it's usually a cry for help; that's why their suicide methods are less lethal. They know there are whole industries focused on supporting them. Meanwhile the majority of men know they are alone regardless of their despair.
My niece graduated two years ago summa cum laude with both a BA and a BS. Her entire peer group is unemployed, in NY and CA, and basically living off of seasonal part-time gigs where they can even get them. _ONE_ of her friends managed to snag a job as a NYC Sanitation worker.
In fact, she's flying across the country and staying with a friend for the next four months just to do an internship at a state park after being ghosted by all of the several hundred other opportunities she's applied for...
> I'm not going to give too much more detail here. Obviously not STEM but nothing fluffy.
Which letter? From my (probably mistaken) perspective
S usually requires a PhD to get into the field, and if you get that far, there's only a battle for a handful of poorly paid positions waiting for you
T doesn’t need too much explanation here. Not great right now
E is alright, but openings seem to be pretty sparse. A lot of actual engineering positions seem to have been outsourced long ago. Might also be hard to compete with just an undergrad.
M probably the most valuable right now IMO. Wide range of jobs you could have available, often decent if not high paying, but outside of those (if you can get them) you might as well be a philosopher.
The STEM push seemed to be a grift through and through.
Maybe it's too soon to say that autonomous LLM agents are the wave of the future and always will be, but that's basically where I'm at.
AI code completion is awesome, but it's essentially a better Stack Overflow, and I don't remember people worrying that Stack Overflow was going to put developers out of work, so I'm not losing sleep that an improved version will.
The problem with the "agents" thing is that it's mostly hype, and doesn't reflect any real AI or model advances that make them possible.
Yes, there's a more streamlined interface to allow them to do things, but that's all it is. You could accomplish the same by copy-and-pasting a bunch of context into the LLM and asking it what to do. MCP and other agent-enabling data channels now allow it to actually reach out and do that stuff, but this is not in itself a leap forward in capabilities, just in delivery mechanisms.
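The loop being described is small enough to sketch. Here's a toy version in Python, with the model replaced by a stub function (everything here is hypothetical; a real setup would call an LLM, and MCP would just standardize the tool-call leg of the loop):

```python
# A toy "agent" loop: the model proposes a tool call, the harness runs
# the tool and feeds the result back, and the model eventually answers.
# stub_model stands in for an LLM; TOOLS stands in for whatever the
# data channel (MCP or otherwise) exposes.
TOOLS = {"add": lambda a, b: a + b}

def stub_model(context):
    # Pretend the model wants one tool call, then produces an answer.
    if "tool_result" not in context[-1]:
        return {"tool": "add", "args": (2, 3)}
    return {"answer": f"The sum is {context[-1]['tool_result']}"}

def agent_loop(model, task, max_steps=5):
    context = [{"task": task}]
    for _ in range(max_steps):
        action = model(context)
        if "answer" in action:
            return action["answer"]
        result = TOOLS[action["tool"]](*action["args"])
        context.append({"tool_result": result})
    return None

print(agent_loop(stub_model, "add 2 and 3"))  # The sum is 5
```

Nothing in the loop itself is new capability; all the intelligence (or lack of it) lives in the model call, which is the point being made above.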
I'm not saying it's irrelevant or doesn't matter. However, it does seem to me that as we've run out of low-hanging fruit in model advances, the hype machine has pivoted to "agents" and "agentic workflows" as the new VC-whetting sauce to keep the bubble growing.
I don't want to blame Alan Turing for this mess, but his Turing Test may have given people the idea that something that can mimic a human in conversation is also going to be able to think like a human in every way. Turns out not to be the case.
Well, I agree with you. But I'd be remiss not to say that this is a lively controversy in the world of cognitive science and philosophy of mind.
To one camp in this discursive space, who of course see themselves as ever the pragmatists, the polemic about whether LLMs can "think" is not about whether they think in exactly the same ways we do, or capture the essence of human thinking, but about whether that matters at all.
Well, it's an interesting question. I'm not sure we really know what "thinking" is. But where the rubber meets the road in the case of LLM agents is whether they can achieve the same measurable outcomes as a human agent, regardless of how they get there. And it seems not at all clear how to build those capabilities on top of an admittedly impressive verbal ability.
It may be because I have a writer/English-major personality, and so am very sensitive to the mood and tone of language, but I've never had trouble distinguishing LLM output from human writing.
I'm not suggesting anything so arrogant as that I cannot be fooled by someone intentionally deploying an LLM with that aim; if they're trained on human input, they can mimic human output, I'm sure. I just mean that the formulations that come out of the mainstream public LLM providers' models, guided however they are by their pretraining and system prompts, are pretty unmistakably robotic, at least in every incarnation I've seen. I suppose I don't know what I don't know, i.e. I can't rule out that I've unknowingly interacted with LLMs without realising it.
In the technical communities in which I move, there are quite a few forums and mailing lists where low-skilled newbies and non-native English speakers frequently try to disgorge LLM slop. Some do it very blatantly, others must believe they're being quite sly and subtle, but even in the latter case, it's absolutely unmistakable to me.