Having previously criticized someone doesn't make your technical analysis biased. It just means you noticed similar problems previously. Conversely, "I used to support him so I'm not biased" is given unearned credibility when really it just means you were late to noticing the obvious.
Technical analysis most definitely can be biased by political leanings. This is why there is the whole idea that research can often be bought and paid for to get the results you desire: the researchers are biased by money. Certain ideas or theories of how things could be done can very easily be overlooked or excluded by someone digging for reasons to say something won't work.
What I am saying is that clearly SpaceX/xAI feel that this is a viable option based on the research and facts of many experts who are more knowledgeable than a single blogger's opinion. If I am thinking rationally, why would I choose to believe a single random person over a group of experts betting A LOT of money that they have a solution that works?
You are arguing against something I didn't say. I never claimed bias doesn't exist. My point is that having previously criticized someone is not evidence of bias. You are treating "this person has been critical before" as inherently discrediting, when it is just as likely they were right before and are right again now. Conversely, "I used to support him so I am not biased" is given unearned credibility when really it just means you were late to noticing the obvious, or got it wrong previously.
As for dismissing the article: the author has a PhD in space electronics, worked at NASA, and spent a decade at Google including on AI capacity deployment. He walks through power, thermal, radiation, and communications constraints with actual numbers. You do not get to hand-wave that away with "he is anti-Elon" and then defer to "the team spending the most money." That is not rational analysis, that is fandom.
And the idea that SpaceX's experts looked at this and concluded the combination makes strategic sense - seriously? This is the same playbook Musk has run repeatedly: SolarCity into Tesla, X into xAI, now xAI into SpaceX. Every time there is a struggling asset that needs a lifeline, it gets folded into a healthier entity with Musk negotiating on both sides. xAI is burning $1B/month. There is already a fiduciary duty lawsuit over Tesla's $2B investment in xAI. The "space data centers" rationale is a pretext for giving xAI investors an exit through SpaceX's upcoming IPO. This is not a strategic vision, it is financial engineering solving an obvious problem for Elon.
Meanwhile, Grok has been generating sexualized images of children, the California AG has opened a formal investigation, the UK Internet Watch Foundation found CSAM attributed to Grok on the dark web, Musk personally pushed to loosen Grok's safety restrictions after which three safety team members quit, and xAI's response to press inquiries was the auto-reply "Legacy Media Lies." This is the company whose judgment you are trusting over a domain expert's detailed technical analysis.
You could execute Claude by hand with printed weight matrices, a pencil, and a lot of free time - the exact same computation, just slower. So where would the "wellbeing" be? In the pencil? Speed doesn't summon ghosts. Matrix multiplications don't create qualia just because they run on GPUs instead of paper.
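To make the "same computation, just slower" point concrete, here's a toy sketch (a made-up two-by-two "weight" matrix, obviously nothing like Claude's actual parameters): the result of a matrix multiply is identical whether a vectorized library computes it or you grind through the loops by hand.

```python
import numpy as np

W = np.array([[1.0, 2.0], [3.0, 4.0]])  # stand-in "weights"
x = np.array([5.0, 6.0])                # stand-in "activations"

# Vectorized, the way a GPU-backed library would do it
fast = W @ x

# "Pencil and paper": the same sums of products, written out as loops
slow = [sum(W[i][j] * x[j] for j in range(len(x))) for i in range(len(W))]

print(fast)  # [17. 39.]
print(slow)  # [17.0, 39.0]
```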
This is basically Searle's Chinese Room argument. It's got a respectable history (... Searle's personal ethics aside) but it's not something that has produced any kind of consensus among philosophers. Note that it would apply to any AI instantiated as a Turing machine, and to a simulation of a human brain at an arbitrary level of detail as well.
There is a section on the Chinese Room argument in the book.
(I personally am skeptical that LLMs have any conscious experience. I just don't think it's a ridiculous question.)
That philosophers still debate it isn’t a counterargument. Philosophers still debate lots of things. Where’s the flaw in the actual reasoning? The computation is substrate-independent. Running it slower on paper doesn’t change what’s being computed. If there’s no experiencer when you do arithmetic by hand, parallelizing it on silicon doesn’t summon one.
Exactly what part of your brain can you point to and say, "This is it. This understands Chinese"? Your brain is every bit as much a Chinese Room as a Large Language Model. That's the flaw.
And unless you believe in a metaphysical reality to the body, then your point about substrate independence cuts for the brain as well.
If a human is ultimately made up of nothing more than particles obeying the laws of physics, it would be in principle possible to simulate one on paper. Completely impractical, but the same is true of simulating Claude by hand (presuming Anthropic doesn't have some kind of insane secret efficiency breakthrough which allows many orders of magnitude fewer flops to run Claude than other models, which they're cleverly disguising by buying billions of dollars of compute they don't need).
The physics argument assumes consciousness is computable. We don't know that. Maybe it requires specific substrates, continuous processes, quantum effects that aren't classically simulable. We genuinely don't know. With LLMs we have certainty it's computation because we built it. With brains we have an open question.
It would be pretty arrogant, I think, though possibly classic tech-bro behavior, for Anthropic to say, "you know what, smart people who've spent their whole lives thinking and debating about this don't have any agreement on what's required for consciousness, but we're good at engineering so we can just say that some of those people are idiots and we can give their conclusions zero credence."
I would love to use this but it breaks Ghostty's native scrollback (two-finger scroll), which I want more than I want to solve the flickering. The PTY proxy intercepts the output stream so Ghostty can't access its internal scrollback buffer anymore.
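For anyone unfamiliar with why interception has that side effect, here's a minimal sketch of what a PTY proxy looks like (assuming Python's standard pty module; the real tool is presumably far more elaborate): the shell runs inside a pseudo-terminal owned by the proxy, so the outer terminal only ever sees the bytes the proxy forwards, and its native scrollback is built from that filtered stream rather than from the shell directly.

```python
import os
import pty

def master_read(fd):
    # Everything the shell prints passes through here before the outer
    # terminal (e.g. Ghostty) ever sees it. A real proxy would rewrite or
    # batch escape sequences at this point (e.g. to suppress flicker).
    return os.read(fd, 1024)

# Run the user's shell inside a pseudo-terminal we control
pty.spawn([os.environ.get("SHELL", "/bin/sh")], master_read)
```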
How does Ghostty break scroll? I've never noticed this and I just tested, seems to work fine. My problem is the lack of a scrollbar but I know they are working on that.
I wanted to believe, but wasn’t able to get most of my config working the same in zellij since it has fewer configuration knobs. Tried writing a plugin, but even those can’t touch much of the internal state. In particular, I remember not being able to replicate the keybinds (smart resizing, respecting vim, context sensitivity).
I'm CTO at a vertical SaaS company, paired with a product-focused CEO with deep domain expertise. The thesis doesn't match my experience.
For one thing, the threat model assumes customers can build their own tools. Our end users can't. Their current "system" is Excel. The big enterprises that employ them have thousands of devs, but two of them explicitly cloned our product and tried to poach their own users onto it. One gave up. The other's users tell us it's crap. We've lost zero paying subscribers to free internal alternatives.
I believe that agents are a multiplier on existing velocity, not an equalizer. We use agents heavily and ship faster than ever. We get a lot of feedback from users as to what the internal tech teams are shipping and based on this there's little evidence of any increase in velocity from them.
The bottleneck is still knowing what to build, not building. A lot of the value in our product is in decisions users don't even know we made for them. Domain expertise + tight feedback loop with users can't be replicated by an internal developer in an afternoon.
It's a bit surprising to me that Microsoft hasn't created a product that's "you have an Excel file in one of our cloud storage systems, here's a way for you to vibe code and host a web app whose storage is backed entirely by that file, where access control is synced to that file's access, and real-time updates propagate in both directions as if someone were editing it in native Excel on another computer. And you can eject a codebase that you, as the domain expert, can hand to a tech team to build something more broadly applicable for your organization."
Nowhere near the level of complexity that would enter your threat model. But this would be the first, minimal step towards customers building their own tools, and the fact that not even this workflow has entered the zeitgeist is... well, it's not the best news for some of the most bullish projections of AI adoption in businesses large and small.
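For what it's worth, the raw plumbing for the minimal version already exists in Microsoft Graph's workbook API. A rough sketch (the token, item id, sheet name and range below are placeholders, and this ignores auth flows, workbook sessions and change notifications) of an app treating a OneDrive-hosted Excel file as its datastore:

```python
import requests

TOKEN = "..."    # OAuth token with Files.ReadWrite scope (placeholder)
ITEM_ID = "..."  # drive item id of the Excel file (placeholder)
BASE = f"https://graph.microsoft.com/v1.0/me/drive/items/{ITEM_ID}/workbook"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

# The app's "database read": pull a range straight out of the sheet
resp = requests.get(
    f"{BASE}/worksheets('Sheet1')/range(address='A1:C10')", headers=HEADERS
)
rows = resp.json()["values"]

# The app's "database write": anyone editing the same file in Excel sees it
requests.patch(
    f"{BASE}/worksheets('Sheet1')/range(address='A11:C11')",
    headers=HEADERS,
    json={"values": [["2024-05-01", "Widget", 42]]},
)
```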
Probably because Microsoft knows vibe coding is _not_ an actual viable way to build production ready code and does not want to deal with the liability issues of prompting customers to move from a working Excel sheet to a broken piece of software that looks like it works.
In my experience, it's actually quite hard to move a business from an Excel sheet to software, because an Excel sheet allows the end user to easily handle every edge case, and they likely don't even think in terms of "edge cases".
> Probably because Microsoft knows vibe coding is _not_ an actual viable way to build production ready code and does not want to deal with the liability issues of prompting customers to move from a working Excel sheet to a broken piece of software that looks like it works.
Whilst you could plausibly argue that Microsoft have spent the past 25 years attempting to stamp them out, this is exactly what VB6 and VBA were.
People built whole businesses on/around these technologies, and people liked them because you could get something working fast. As maligned as they are nowadays they were so widely used because they delivered value.
I miss MS Access, or rather something like it for the modern age. It has been replaced by basic CRUD using your platform of choice, but it's not as easy.
That would be similar to your solution, so either one would work.
I think there might be some similar alternatives (maybe Airtable? probably using Lovable or Firebase counts), but nothing that's available to me for now.
APEX is probably just as widely used now as Access was. Access likely had higher market share but of a much smaller market. There are gazillions of APEX apps out there.
You can use something like Salesforce as an app platform if you want. It lets you create "Custom Objects", which are basically tables, write queries, and so on.
It's just that the hassle of dealing with that platform tends to be similar to the hassle of setting up an app yourself, and now you're paying a per-user license cost.
Even Salesforce doesn't have a good way to quickly port an Excel-based workflow, with file handoffs and backwards compatibility, into Salesforce. In theory, you could have an LLM generate all the metadata files that would execute a relevant schema migration, generate the interface XML, and build the right kinds of API calls and webhooks... but understanding what it's doing requires a Ph.D. in Salesforce, and many don't have time for that.
Microsoft PowerApps has had similar workflows for at least two years now. The professional development experience is lacking; however, enterprise users can create many applications based on Excel files.
This is the answer to a happy B2B SaaS implementation. It doesn't matter what tools you use as long as this can be achieved.
In the domain of banking front/back office LOB apps, if you aren't iterating with your customer at least once per business day, you are definitely falling behind your competition. I've got semi-retired bankers insisting that live edits in production need to be a fundamental product feature. They used to be terrified of this. Once they get a taste of proper speed it's like blood in the water. I'm getting pushback on live reloads taking more than a few seconds now.
Achieving this kind of outcome is usually more of a meat space problem than it is a technology problem. Most customers can't go as fast as you. But the customer should always be the bottleneck. We've done things like install our people as temporary employees to get jobs done faster.
> Doesn't that violate all kinds of compliance controls?
Technically, only if it causes some kind of security, privacy, availability or accounting issue. The risk is high but it can be done.
Half of our customers do not have anything resembling a test environment. It is incredibly expensive to maintain a meaningful copy of production in this domain. Smaller local/regional banks don't bother at all.
Our sales teams hear the "we'll just build it internally" or "we can just throw it into an LLM" line all of the time.
Yes, certain parts of our product are indeed just lightweight wrappers around an LLM. What you're paying for is the 99% of other stuff that's (1) extremely hard to do (and probably non-obvious), (2) an endless supply of "routine" work that still takes time, or (3) an SLA and support that's more than "random dev isn't on PTO".
No because it is never a credible bluff. You would not be having the conversation if it was.
In fact, having sold stuff: if a lead says this, it is a huge red flag for me that I probably don't want to do business with them, because they are probably a "vampire customer".
LLMs can write surprisingly decent code a few hundred lines at a time but they absolutely can't write coherent hundred thousand line or bigger programs.
I don’t know what you build, but I’ll share some thoughts from the other side (customer):
Many SaaS products I am interested in have very little “moat”. I am interested in them not because I can’t build them, but because my limited engineering time is better spent building business specific stuff.
Many products with product management teams spend a lot of their effort building functionality either to delight their highest paying customers, or features that are expected to be high-revenue.
I’m never going to be your highest paying customer, so I’m never going to get custom work from you (which primarily goes to orienting your product's workflows to the existing workflows inside your biggest customers).
What everyone wants when they buy SaaS is to get value from it immediately without having to change internal processes, broken as they are. But your model of feature prioritization is antithetical to this; you don’t want to build or support the 5-10 integration points I want, because that would allow me to build my own customizations without paying for your upsells.
You aren’t at immediate risk of losing your big customers to agentic AI. But agentic AI is enabling me and thousands of others to build hobby projects that deliver part of your core value but with limitless integration. I expect that you’ll see bleeding from the smallish customers way before you see hits to your whales.
However in a couple of years there will be OSS alternatives to what you do, and they will only become more appealing, rapidly.
As a side note it’s not just license pricing that will drive customers to agentically-coded solutions; it’s licensing terms. Nowadays whenever I evaluate SaaS or open source, if it’s not fully published on GitHub and Apache or MIT licensed, then I seriously consider just coding up an alternative - I’ve done this several times now. It’s never been easier.
The OSS point doesn't apply to every vertical. Open source applications come about when developers scratch their own itch. Developer tools, infrastructure, general purpose CRMs, project management get OSS alternatives because developers use them and want to build them.
Nobody is building open source software for [niche professional vertical] in their spare time. It's not mass market. It's not something a developer encounters in their daily work and thinks "I could do this better." The domain knowledge required to even understand the problem space takes months to acquire, and there's no personal payoff for doing so.
The "OSS will appear" prediction works for horizontal tools. For deep vertical SaaS, the threat model is different: it's other funded startups or internal enterprise clones (both of which we've already faced and won against).
> Nobody is building open source software for [niche professional vertical] in their spare time.
As a matter of fact, I am (in the computer security vertical) - look for an announcement on Hacker News at the beginning of the year. I suspect that others are too, but there's always a discoverability problem for niche tools in verticals that one doesn't participate in, e.g. I know nothing about software for dentists but I know that at least one exists, and that there are probably a lot of dentists who use it but resent the fees, features or support, and there are probably some dentists who could manage an agentic coding project.
There have ALWAYS been niche OSS projects, and agentic coding will make them better and more prolific.
There are people like me who are passionate about a space and have the skills to manage an agentic coding project and the domain knowledge to design the software that they want, but not the skills and time necessary to have built the software in the absence of agentic AI. Last year I would never have started my 100k+ LoC project. This year I am proposing that colleagues at two Fortune 50 companies adopt it (at zero financial benefit to me). I am doing this out of love for the problem space and a desire to improve software security across the industry.
[niche professional vertical] in my response was a stand-in for the specific vertical my startup targets, which is light years from computer security. Computer security is a vertical where the practitioners are often developers or developer-adjacent. You're building tools for people like yourself. That's exactly the "scratch your own itch" dynamic I'm describing.
The vertical I'm in has zero overlap with the developer population. The end users aren't technical, don't participate on HN, and aren't going to "manage an agentic coding project." There's no equivalent of a developer who moonlights on OSS tooling because they're passionate about the problem space, because the problem space requires domain expertise that developers have no reason to acquire.
> For one thing, the threat model assumes customers can build their own tools.
That's not the threat model. The threat model is that they won't have to - at some point which may not be right now. End users want to get their work done, not learn UIs and new products. If they can get their analysis/reports based on excels which are already on SharePoint (or wherever), they'd want just that. You can already see this happening.
Yes. This is also why trying to add an AI agent chat into one's product is a fool's errand - the whole point of having general-purpose conversational AI is to turn the product into just another feature.
It's an ugly truth product owners never wanted to hear, and are now being forced to: nobody wants software products or services. No one really wants another Widgetify of DoodlyD.oo.io or another basic software tool packaged into bespoke UI and trying to make itself a command center of work in their entire domain. All those products and services are just standing between the user and the thing the user actually wants. The promise of AI agents for end-users is that of having a personal secretary, that deals with all the product UI/UX bullshit so the user doesn't have to, ultimately turning these products into tool calls.
I think that's just true in general. Business users at $work are already saying that they would rather just talk to ChatGPT (with voice, for some reason I, a keyboard person, don't understand) than deal with GUIs. They want to describe what they need and have the computer do it, not click around.
Once you've abstracted away the UI (and the training on how to use it) it will be a lot easier to just swap one SaaS for another.
Yes, except for the fact that any non-trivial SaaS does non-trivial stuff that an agent will be able to call (as the 'secretary') while the user still has to pay the subscription to use it.
Yes, but now it's easier for other SaaS to compete on that, because they don't get to bundle individual features under common webshit UI and restrict users to whatever flows the vendor supports. There will be pressure to provide more focused features, because their combining and UI chrome will be done by, or on the other side of, the AI agent.
Also, having to retrain users to use a new shitty UI after they got used to the previous shitty UI is a major moat of many SaaS services. The user doesn't care about the web portal, they just want to get work done. Switching to a different web portal needs to be a big net positive because users will correctly complain that now they are unproductive for a while because the quirks and bugs of the previous SaaS don't match those of the new SaaS.
In a world where the interface is "you talk to the computer" you will be able to swap providers way more easily.
That's the brilliance of AI - it doesn't matter if the product actually works or not. As long as it looks like it works and flatters the user enough, you get paid.
And if you build an AI interface to your product, you can make it not work in subtly the right ways that direct more money towards you. You can take advertising money to make the AI recommend certain products. You can make it give completely wrong answers to your competitors.
>> This is also why trying to add an AI agent chat into one's product is a fool's errand - the whole point of having general-purpose conversational AI is to turn the product into just another feature
We built an AI-powered chat interface as an alternative to a fully featured search UI for a product database and it has been one of the most popular features of 2025.
Sure, but it would be even better if it was accessible by ChatGPT[0] and not some bespoke chat interface you created - because with ChatGPT, the AI has all the other tools and can actually use yours in intelligent ways as part of doing something for the user.
> No one really wants another Widgetify of DoodlyD.oo.io
I keep hearing this and seeing people buying more Widgetify of DoodlyD.oo.io. I think this is more of a defensive sales tactic and cope for SaaS losing market share.
The president of a company I work with is a youngish guy who has no technical skills, but is resourceful. He wanted updated analytic dashboards, but there’s no dev capacity for that right now. So he decided he was going to try his hand at building his own dashboard using Lovable, which is one of these AI app making outfits. I sent him a copy of the dev database and a few markdown files with explanations regarding certain trickier elements of the data structure and told him to give them to the AI, it will know what they mean. No updates yet, but I have every confidence he’ll figure it out.
Think about all the cycles this will save. The CEO codes his own dashboards. The OP has a point.
I'd argue it's not the CEO's job to code his own dashboards...
This sounds like a vibe coding side project. And I'm sorry, but whatever he builds will most likely become tech debt that has to be rewritten at some point.
Or to steel-man it, it could also end up as a prototype that forces the end user to deal with decision points, and can serve as a framework for a much more specific requirements discussion.
At a certain scale the CEO's time is likely better spent dictating the dashboard they want rather than implementing it themselves. But I guess to your point, the future may allow for the dictation to be the creation.
Agree, as engineers we should be making the car easier to operate instead of making everyone a mechanic.
Focus on the simple iteration loop of "why is it so hard to understand things about our product?" Maybe you can't fix it all today, but climb that hill more instead of making your CEO spend sleepless nights on a thing that you could probably build in 1/10th the time.
If you want to be a successful startup SaaS software engineer, then engaging with the current and common business cases and being able to predict the standard cache of problems they're going to want solved turns you from "a guy" into "the guy".
And I wonder if they will discover that in order to interpret those numbers in a lot of cases they will need to bring in their direct reports to contextualise them.
If corporate decisions could be made purely from the data recorded then you don't need people to make those decisions. The reason you often do is that a lot of the critical information for decision making is brought in to the meeting out-of-band in people's heads.
I have also seen multiple similar use cases where non-technical users build internal tools and dashboards on top of existing data for our users (I'm building UI Bakery). This approach might feel a bit risky for some developers, but it reduces the number of iterations non-technical users need with developers to achieve what they want.
Honestly, I'm not sure what to expect. There are clearly things he can't do (e.g. to make it work in prod, it needs to be in our environment, etc. etc.) but I wouldn't be at all surprised if he makes great headway. When he first asked me about it, I started typing out all the reasons it was a bad idea - and then I paused and thought, you know, I'm not here to put barriers in his path.
The Excel holy grail. Dashboards are an abstraction; SaaS is an abstraction of an abstraction from the POV of customers suffering from one-size-fits-all. Shell scripts generated by LLMs that send automated, customized reports via email will make a lot of corporate heroes. No need to log in, learn, and use the SaaS in many instances for decision makers.
I feel that large corps have guard rails that will limit this from happening. For SMB's, this is not a new problem. Gritty IT guys have been doing this for decades. I inherit these bootstrapped reporting systems all the time. The issue is when that person leaves, it is no longer maintainable. I've yet to come across a customer who has had any sort of usable documentation. The process then repeats itself when I take over, and presumably when I'm finished. With a SaaS product, you are at least paying for some support and visibility of the processes. I'm not really trying to make a point other than this is not a new, but still intriguing problem, and not sure that LLMs will be some god answer, as the organizations have trouble determining what they even need.
Yes, back in the heyday of Visual Basic (mid-1990s) we had one business analyst who learned enough to build dashboard-like apps with charts and graphs and parameters and filters. He was quick at it and because it was outside of IT there was little in the way of process or guardrails to slow him down. Users loved what he did, but when he left there was nobody else who knew anything about it.
I second this. Most of our customers' IT departments struggle to look at the responses from their failed API calls. Their systems and organisations are just too big.
As it stands today, just a bit of complexity is all that is required to make AI agents fail. I expect the gap to narrow over the years, of course. But capturing complex business logic and simplifying it will probably be useful and worth paying for a long time into the future.
Also, for many larger companies, access to internal data and systems is only granted to authorized human users and approved applications/agents. Each approval is a separate request.
This means that for any "manual" or existing workflow requiring access to several systems, multiple IT permissions with defined scopes are needed. Even something as simple as a sales rep sending a DocuSign might need:
- CRM access
- DocuSign access
- Possibly access to ERP (if CRM isn't configured to pass signed contract status and value across)
- Possibly access to SharePoint / Power Automate (if finance/legal/someone else has created internal policy or process, e.g. saving a DocuSign PDF to a folder, inputting details for handover to fulfilment or client success, or submitting ticket to finance so invoicing can be set up)
It is much easier to use an AI API in my bank than to use any other tool. Since the AI is from MS, it's ready to go, whereas other tools require a few months of budgeting, licenses, certs, and so on. Since AI/Azure/AWS is already there and 'certified to use,' it is easier for me to patch something together using this stack than to even ask for open-source software
Agreed. I've been on HN for 15 years, and IME maybe 90% of the value has come directly from comments (another 5% from links in commenters' profiles, and 5% from TFAs).
Yeah, I think the real value is for solo developers, indie hackers & side projects.
Being unrestrained by team protocols, communications, jira boards, product owners, grumpy seniors.
They can now deliver much more mature platforms, apps, consumer platforms without any form of funding. You can easily save months on the basics like multi tenant set up, tests, payment integration, mailing settings, etc.
It does seem likely that the software space is about to get even more crowded, but also much more feature rich.
There is of course also a wide array of dreamers & visionaries who now jump into the developer role. Whether or not they are able to fully run their own platform, I'm not sure. I did see many posts asking for help at some point.
As a solo grumpy senior, I've been pumping out features over the past 6 months and am now expanding into new markets.
I've also eliminated some third party SaaS integrations by creating slimmer and better integrated services directly into my platform. Which is an example of using AI to bring some features in-house, not primarily to save money (generally not worth the effort if that's the goal), but because it's simply better integrated and less frustrating than dealing with crappy third-party APIs.
> For one thing, the threat model assumes customers can build their own tools. Our end users can't.
Even if they could, the vast majority of them will be more than happy to send $20-100 per month your way to solve a problem rather than add it to their stack of problems to solve internally.
You'd hear this all the time back when. "Oh you could build Twitter in a weekend". Yes. Also, very no. This mentality is now on agent steroids. But the lesson is the same.
The basic assumption is that we already see an LLM doing a basic level of software engineering.
This wasn't even an option for a lot of people before this.
For example, even for non software engineering tasks, i'm at an advantage. "Ah you have to analyse these 50 excel files from someone else? I can write something for it"
I myself sometimes start creating a new small tool I wouldn't have tried before; now, instead of using some open source project, I can vibe spec it and get something out.
The interesting thing is that if I have the base of my specs, I might regenerate it later on with a better code model.
And we still don't know what will happen when compute gets expanded and expanded. Next year a few more DCs will come online, and this will continue for now.
Also tools like google firebase will get 1000x more useful with vibe coding. They provide basic auth and stuff like this. So you can actually focus on writing your code.
God, please no more firebase and mongo. AI coding is really really good at sql/relational data and there are services like supabase and neon that make it dead simple.
> I believe that agents are a multiplier on existing velocity, not an equalizer.
Development tooling improvements usually are a temporary advantage and end up being table stakes after a bit of time. I'm more worried that as agentic tooling gets better it obsoletes a lot of SaaS tools where SaaS vendors count on users driving conventional point-and-click apps (web, mobile and otherwise). I'm encouraging the companies I'm involved with to look at moving to more communication-driven microexperience UIs - email, slack, sms, etc. - instead of more conventional UI.
What I'm seeing ad infinitum on HN in every thread on agentic development: yeah, but it really doesn't work perfectly today.
None of these people can apparently see beyond the tip of their nose. It doesn't matter if it takes a year, or three years, or five years, or ten years. Nothing can stop what's about to happen. If it takes ten years, so what, it's all going to get smashed and turned upside down. These agents will get a lot better over just the next three years. Ten years? Ha.
It's the personal interest bias that's tilting the time fog, it's desperation / wilful blindness. Millions of highly paid people with their livelihoods being disrupted rapidly, in full denial about what the world looks like just a few years out, so they shift the time thought markers to months or a year - which reveals just how fast this is all moving.
You aren't wrong, but you’re underestimating the inertia of $10M+/year B2B distributors. There are thousands of these in traditional sectors (pipe manufacturing, HVAC, etc.) that rely on hyper-localized logistics and century-old workflows.
Buyer pressure will eventually force process updates, but it is a slow burn. The bottleneck is rarely the tech or the partner, it's the internal culture. The software moves fast, but the people deeply integrated into physical infrastructure move 10x slower than you'd expect.
Internal culture changes on budget cycles, and right now, most companies are being pushed by investors to adopt AI. Have your sales team ask about AI budgeting vs. SaaS budgeting. I think you'll find that AI budget is available and conventional SaaS/IT budget isn't. Most managers are looking for a way to "adopt ai" so I think we're in a unique time.
> people deeply integrated into physical infrastructure move 10x slower than you'd expect.
My experience is yes, to move everyone. To do a pilot and prove the value? That's doable quickly, and if the pilot succeeds, the rest is fast.
I don't think you can guarantee it will get better. I'm sure it will improve from here but by how much? Have the exponential gains topped out? Maybe it's a slow slog over many years that isn't that disruptive. Has there been any technology that hasn't hit some kind of wall?
> The bottleneck is still knowing what to build, not building. A lot of the value in our product is in decisions users don't even know we made for them. Domain expertise + tight feedback loop with users can't be replicated by an internal developer in an afternoon.
The cost of building is decreasing every year. The barriers to entry will come down year after year.
So what remains is knowing what to build (= product), as you wrote, and knowing how to get exposure (= marketing). Focus on these two, not on building things.
> The bottleneck is still knowing what to build, not building.
I'd amend this to "the bottleneck is being _interested_ in building."
The piece that is _constantly_ missing from AI discourse is that no amount of "breaking down barriers" will result in people who aren't interested in building, building.
I really think that it's as simple as that. Most either don't have the wherewithal or the interest to try and build something beyond the tools that are currently at their disposal -- they're just trying to complete a task, not build the tool that enables them to complete the task; the few that do become programmers. Thus, you have untold instances where "their 'system' is Excel," and programmers selling them solutions to replace it.
It's not intelligence, it's not knowledge, it's not even really aptitude. It's interest.
Another example of this is the reaction to more "creativity-focused" AI models: you see people acting like they've been gate-kept by having to know _how_ to do a thing in order to _do_ a thing, which is somehow this great injustice and AI has finally leveled the playing field so they can finally show those snooty artists who's boss (this attitude is _all over_ places like /r/aiwars). But the reality is that these people simply are not interested enough in music/photography/whatever to learn it, and will largely remain uninterested in the field once this moment is normalized and we get used to the new position of the goalposts. They don't like _creating_, they like _having created_ and whatever social cachet they mistakenly believe comes with it (which explains their near-hatred of artists -- they are, very simply, jealous).
These dynamics also explain why no-code tools don't seem to ever stick. Building something requires that the person doing the building be interested in building, but people who are interested in building will have already learned to build in some form or fashion, or at least can easily see the shortcomings with tools that ultimately take away their agency to build and/or their participation in the process (which, for many -- including myself -- is kind of the whole point: I love the process!)
That may well be an exception though. I'd imagine most SaaS builders are very much figuring things out as they go rather than starting with deep domain expertise
Agreed. This is why PE buys so many SaaS companies!
My article here isn't really aimed at "good" SaaS companies that put a lot of thought into design, UX and features. I'm thinking of the tens/hundreds of thousands+ of SaaS platforms that have been bought by PE or virtually abandoned, that don't work very well and send a 20% cost renewal through every year.
As told in The Innovator's Dilemma, mainframe companies would have told the exact same story right up until the cheaper alternative met all of their core needs. But in this case I don't think you get disrupted by copycats, but instead by savvy business users building their own disposable alternatives which get just enough done.
Though I think one thing that is being overlooked is that platforms are the hidden hero under all of this. A lot of AI products are benefiting from various cloud platforms that make it easy to deploy and operate these apps. So as long as you are providing a sufficiently general, high-productivity platform that can't be emulated by one of the major vendors, you'll likely become the new runtime.
> But my key takeaway would be that if your product is just a SQL wrapper on a billing system, you now have thousands of competitors: engineers at your customers with a spare Friday afternoon with an agent.
I think the issue is that the "two of them explicitly cloned" were trying to clone something that's more than "just a SQL wrapper on a billing system."
Not sure why the argument is SaaS or build from the ground up. Agents can deploy open source projects and build new features on top of them pretty effectively.
I'm gonna go ahead and guess that if you have open source competitors, within two years your moat is going to become marketing/sales given how easy it'll be to have an agent deploy software and modify it.
Companies are hyper-focused on integrating AI right now, which means they’re building this bench strength, and the obvious eventual question becomes: what paid software can we bring “in house”? They will of course look at revenue growth opportunities and improving known problem areas first, but cost reduction is an eventuality. I’m actively building products with AI that will replace millions of dollars of value in enterprise software. I’m not even a programmer; I can do this as a CFO with an AI consultant (a human consultant that specializes in AI, that is).
Damn, reading this it's clear you two know your market well. Congratulations. This is the right way to do it. Domain expertise + tight feedback loop probably makes customers feel like they are part of the process and that you’re there for them. Are you hiring?
I'll add another obvious one: there's no rule that says the SaaS vendor, with its obviously much deeper technical expertise, can't itself leverage these tools to achieve even greater velocity, thereby exacerbating the problem for "internal teams".
While most here are aligned with your perspective, and for good reasons, let me offer an alternate perspective. Today AI can take the goal and create a workflow for it. Something that orgs pay for in SaaS solutions.
AI does it imperfectly today, but if you had to bet, would you bet that it gets better or worse? I would bet that it will improve, and as is often the case with tech, at an exponential rate. Then we would see any workflow described in plain language and, within minutes, great software churned out. It might be a question of when (not if) that happens. And are you prepared for that state of affairs?
Same background as you and I fully agree. Again and again you see market/economic takes from technologists. This is not a technology question (yes, LLMs work), it's an economics question: what do LLMs disrupt?
If your answer is "cost of developing code" (what TFA argues), please explain how previous waves of reducing cost of code (JVM, IDEs, post-Y2K Outsourcing) disrupted the ERP/b2b market. Oh wait, they didn't. The only real disruption in ERP in the last what 30 years, has been Cloud. Which is an economics disruption, not a technological one: cloud added complexity and points of failure and yet it still disrupted a ton of companies, because it enabled new business models (SaaS for one).
So far, the only disruption I can see coming from LLMs is middleware/integration where it could possibly simplify complexity and reduce overall costs, which if anything will help SaaS (reduction of cost of complements, classic Christensen).
> what do LLMs disrupt? If your answer is "cost of developing code" (what TFA argues), please explain how previous waves of reducing cost of code (JVM, IDEs, post-Y2K Outsourcing) disrupted the ERP/b2b market. Oh wait, they didn't. The only real disruption in ERP in the last what 30 years, has been Cloud.
"Cost of developing code" is a trivial and incomplete answer.
Coding LLMs disrupt (or will, in the immediate future)
(1) time to develop code (with cost as a second order effect)
(2) expertise to develop code
None of the analogs you provided are a correct match for these.
A closer match would be Excel.
It improved the speed and lowered the expertise required to do what people had previously been doing.
And most importantly, as a consequence of especially the latter more types of people could leverage computing to do more of their work faster.
The risk to B2B SaaS isn't that a neophyte business analyst is going to recreate your app overnight...
... the risk is that 500+ neophyte business analysts each have a chance of replacing your SaaS app, every day, every year.
Because they only really need to get lucky once, and then the organization shifts support to in-house LLM-augmented development.
The only reason most non-technology businesses didn't do in-house custom development thus far was that ROI on employing a software development team didn't make sense for them. Suddenly that's no longer a blocker.
To the point about cloud, what did it disrupt?
(1) time to deploy code (with cost as a second order effect)
(2) expertise to deploy code
B2B SaaS should be scared, unless they're continuously developing useful features, have a deep moat, and are operating at volumes that allow them to be priced competitively.
Coding agents and custom in-house development are absolutely going to kill the 'X-for-Y' simple SaaS clone business model (anything easily cloneable).
This seems to assume that these non-technical people have the expertise to evaluate LLM/agent generated solutions.
The problem with this tooling is that it cannot deploy code on its own. It needs a human to take the fall when it generates errors that lose people money, break laws, cause harm, etc. Humans are supposed to be reviewing all of the code before it goes out, but your assumption is that people without the skills to read code, let alone deploy and run it, are going to do it with agents without a human in the loop.
All those non-technical users have to do is approve that app, manage to deploy and run it themselves somehow, and wait for the security breach to lose their jobs.
I think you're underestimating (1) how bad most B2B is (from a bug and security vulnerability perspective) & (2) how little B2B companies' engineers understand about how their customers are using their products.
The frequency of mind-bogglingly stupid 1+1=3 errors (where 1+1 is a specific well-known problem in a business domain and 3 is the known answer) cuts against your 'professional SaaS can do it better' argument.
And to be clear: I'm talking about 'outsourced dev to lowest-cost resources' B2B SaaS, not 'have a team of shit-hot developers' SaaS.
The former of which, sadly, comprises the bulk of the industry. Especially after PE acquisition of products.
Furthermore, I'm not convinced that coding LLMs + scanning aren't capable of surpassing the average developer in code security. Especially since it's a brute force problem: 'ensure there's no gap by meticulously checking each of 500 things.'
Auto code scanning for security hasn't been a significant area of investment because the benefits are nebulous. If you already must have human developers writing code, then why not have them also review it?
In contrast, scanning being a requirement to enabling fast-path citizen-developer LLM app creation changes the value proposition (and thus incentive to build good, quality products).
It's been mentioned in other threads, but Fire/Supabase-style 'bolt-on security-critical components' is the short term solution I'd expect to evolve. There's no reason from-scratch auth / object storage / RBAC needs to be built most of the time.
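As a rough illustration of what "bolt-on" means in practice, here's a sketch with the supabase-py client (project URL, key, table and column names are all placeholders): auth and per-user data access arrive as library calls plus dashboard-defined row-level-security policies, rather than as from-scratch code a citizen developer would have to get right.

```python
from supabase import create_client

# Placeholder project URL and anon key
supabase = create_client("https://your-project.supabase.co", "public-anon-key")

# Account creation handled by the platform, not hand-rolled
supabase.auth.sign_up({"email": "user@example.com", "password": "correct-horse"})

# Reads/writes go through row-level-security policies defined on the table,
# so per-user authorization isn't reimplemented in the generated app code
supabase.table("invoices").insert({"amount": 120, "customer": "Acme"}).execute()
rows = supabase.table("invoices").select("*").execute()
```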
I’m just imagining the sweat on the poor IT managers’ brow.
They already lock down everything enterprise wide and hate low-code apps and services.
But in this day and age, who knows. The cynical take is that it doesn’t matter and nobody cares. Have your remaining handful of employees generate the software they need from the magic box. If there’s a security breach and they expose customer data again… who cares?
That sweat is no worse than what comes from dealing with nightmare fly-by-night vendors for whatever business application a department wants.
Sometimes, the devil you know is preferable -- at least then you control the source.
Folks fail to realize the status quo is often the status quo because it's optimal for a historical set of conditions.
Previously... what would your average business user be able to do productively with an IDE? Weighed against the security risks? And so that's where the point was established.
If suddenly that business user can add substantial amounts of value to the org, I'd be very surprised if that point doesn't shift.
Yeah. I used to manage a team that built a kind of low-code SaaS solution to several big enterprise clients. I sat in on several calls with our sales people and the customer’s IT department.
They liked buying SAP or M$ because it was fully integrated and turnkey. Every SaaS vendor they added had to be SOC2, authenticate with SAML, and each integration had to be audited… it was a lot of work for them.
And we were highly trained, certified developers. I had to sign documents and verify our stack with regulatory consultants.
I just don’t see that fear going away with agents and LLM prompts from frontline workers who have no training in IT security, management, etc. There’s a reason why AI tech needs humans in the loop: to take the blame when they thumbs up what it outputs.
I'm going to predict there will be a movement into "build it in house with LLMs"; these things are going to be expensive, they are going to fail to deliver or be updated, and there will be a huge bounce back. The cost of writing software is very small; the cost of running and scaling it is where the money is, and these people can't have their own IT teams rebuilding and maintaining all this stuff from scratch.
A lot of them will try though, just means more work for engineers in the future to clean this shit up.
I think there's a good chance. These things happen in cycles. A few decades ago it was common for companies to have in-house software development using something like COBOL or maybe BASIC (and at that time, software development was a cost-center job; it paid OK but nothing like what it does today). Then there was a push for COTS (commercial off-the-shelf) software. Then the internet made SaaS possible and that got hot. Developer salaries exploded. Now LLMs have people saying "just do it in house" again. Lessons are forgotten and have to be re-learned.
I'm expecting this to be a bubble, and that bubble to burst; when it does, whatever's the top model at that point can likely still be distilled relatively cheaply like all other models have been.
That, combined with my expectations that consumer RAM prices will return to their trend and decrease in price, means that if the bubble pops in the year 20XX, whatever performance was bleeding edge at the pop, runs on a high-end smartphone in the year 20XX+5.
The technology of LLMs is already applicable to valuable enough problems, therefore it won’t be a bubble.
The world might be using a standard of AI needing to be a world beater to succeed, but it's simply not the case: AI is software, and it can solve problems other software can't.
> The technology of LLMs is already applicable to valuable enough problems, therefore it won’t be a bubble.
Dot-com was a bubble despite being applicable to valuable problems. So were railways when the US had a bubble on those.
Bubbles don't just mean tulips.
With what we've got right now, I'm saying the money will run out and not all the current players will win any of it back from all their spending. It's even possible that *none* of the current players win, even when everyone uses it all the time, precisely due to the scenario you replied to:
Runs on a local device, no way to extract profit to repay the cost of training.
Key point. Once people realize that no money can be made from LLMs, they will stop training new ones. Eventually the old ones will become hopelessly out-of-date, and LLMs will fade into history.
It matters to the comparison being made between the dot com boom and an AI boom; they have completely different fundamentals outside of the hype train.
There were not as many consumers buying online during the dot com boom. Meanwhile, more is currently being spent on AI than on anything in the dot com boom.
Nor did companies run their businesses in the cloud, because there was no real broadband.
There’s no doubt there’s a hype train, there is also an adoption and disruption train, which is also happening.
I could go on, but I’m comfortable with seeing how well this comment ages.
I don't pay anyone for an image generator AI, because I can run an adequate image generator locally on my own personal computer.
My computer doesn't have enough RAM to run the state of the art in free LLMs, but such computers can be bought and are even affordable by any business and a lot of hobbyists.
Given this, the only way for model providers to stay ahead is to spend a lot on training ever better models to beat the free ones that are being given away. And by "spend a lot" I mean they are making a loss.
This means that the similarity with the dot com bubble can be expressed with the phrase "losing money on every sale and making up for it in volume".
Hardware efficiency is also still improving; just as I can even run that image model locally on my phone, an LLM equivalent to SOTA today should run on a high-end smartphone in 2030.
Not much room to charge people for what runs on-device.
So, they are in a Red Queen's race, running as hard as they can just to stay where they are. And where they are today, is losing money.
You don't need system RAM to run LLMs; it's your graphics card's memory that matters.
The best price for dollar/watt of electricity to run LLMs locally is currently apple gear.
I thought the same as you but I'm still able to run better and better models on a 3-4 year old Mac.
At the rate it's improving, even with the big models, people optimize their prompts so they run efficiently with tokens, and when they do.. guess what can run locally.
The dot com bubble didn't have comparable online sales. There were barely any users online lol. Very few ecommerce websites.
Observably, the biggest models we have right now have similar complexity to a rodent's brain, which runs on far less power. The limiting factor for chips in your phone is power, and power efficiency is improving rapidly.
It was always relatively easy to copy many SaaS services, especially bootstrapped ones. Unless everybody wants to make and run their service locally, very little changes.
There's been an obvious step change on the coding front from two years ago, and it feels obvious to me there's going to be another. The difference now is that the people working on systems to clone SaaS at scale are likely starting to put real, sustained effort into it, now that agents are good enough to accomplish subsets of it, can be improved much further with the right techniques and orchestration, and will themselves get better over the next two years along with all the improvements and build-up of tooling. Right now feels like one of those "skate to where the puck is going to be" moments in time.
I've found assigning issues to GitHub Copilot on GitHub itself to be a real step-wise change from one-off requests to a chat-based interface that has no awareness of my codebase. Maybe it's just me, but I'm getting significantly more real-world value out of AI these days. I've been working through an implementation of a spec using GitHub Copilot mostly from my phone, and it has been a really instructive exercise in how to squeeze as much as possible out of such a narrow interface. I still dip into a full desktop + IDE from time to time, but for like 90% of it I've been doing it on my phone. Slowly but surely I'm working my way from "vibe coded in a weekend" level initial AI slop quality to nice, clean architecture, excellent dev tooling, and production-grade multitenant SaaS. I've still got a ways to go, but I'm able to make quite a lot of progress just using my phone during my commute, on my lunch breaks, in between meetings, bathroom breaks, etc.
And? How easy will it be to iterate the product and build features and a user experience as it is used in the wild? How easy will it be to find customers willing to pay you money for it?
Doesn’t have to be a commercial solution to change the game. There’s a lot of room between the commercial product and ‘Our end users… current "system" is Excel.’ Especially if the market moves towards making useful APIs at the ERP and vendors endpoints.
And how would the outcome be different after a couple of years than the internally built Excel file with VBScript, Access and VB6 apps built by non developers back in the day?
> The bottleneck is still knowing what to build, not building.
shit, I'm stealing that quote! It's easier to seize an opportunity (i.e. build a tool that fixes problem X without causing annoying Y and Z side effects), but finding one is almost as hard as it has been since the beginning of the world wide web.
> In a 2016 interview with PEOPLE, Nick spoke about his years-long struggle with drug addiction, which began in his early teens and eventually left him living on the streets. He said he cycled in and out of rehab beginning around age 15, but as his addiction escalated, he drifted farther from home and spent significant stretches homeless in multiple states.
Rob Reiner directed a movie from a semi-autobiographical script his son co-wrote a few years ago. Hard to imagine many things worse than going through the pain of having a kid who seemed lost, getting him back, and then whatever must have been going on more recently that apparently led to this.
(tangent) For those of us who have had close experiences with addiction in our families, it's so obvious why "give them money" or "give them homes to live in" isn't a solution to homelessness. A close family member owned 3 properties and was still living on the streets by choice because of his addiction, which evolved into full-blown paranoid schizophrenia. He almost lost it all, but he was forcefully committed into a mental institution and rehab saved his life.
Just realize your personal experience isn't generalizable. Surveys I've seen report that about a third of homeless have drug problems, which means that the other two thirds may very well benefit from "give them homes to live in".
UCSF published a comprehensive study of homelessness in California in 2023 [1]. A few relevant points:
The ~1/3 substance use figure holds up (31% regular meth use, 24% report current substance-related problems). But the study found roughly equal proportions whose drug use decreased, stayed the same, or increased during homelessness. Many explicitly reported using to cope with being homeless, not the reverse.
On whether money helps: 89% cited housing costs as the primary barrier to exiting homelessness. When asked what would have prevented homelessness, 90% said a Housing Choice Voucher, 82% said a one-time $5-10K payment. Median income in the 6 months before homelessness was $960/month.
The severe-mental-illness-plus-addiction cases like the family member mentioned exist in the data, but the study suggests they're the minority. 75% of participants lost housing in the same county they're now homeless in. 90% lost their last housing in California. These are mostly Californians who got priced out.
There is very good research indicating that homelessness is clearly higher in geos where housing costs a lot than in geos where it costs a little. While this is not causation, the correlation is extremely clear. I think Gregg Colburn at the University of Washington has done a good job arguing for this correlation, and it's difficult to argue against it. What's nice about his research is that it's not reliant on self-reported surveys to dig out these trends.
So, if somebody is inside of the house, we definitely want to try to keep them inside of the house. I also agree with your contention that when somebody hits the streets, they often turn to drugs. And I believe the evidence points toward this being a system that doesn't have a reverse gear. If you keep somebody in the house, they won't go homeless; but if you give a homeless person a house or lodging, it doesn't return them to their original function.
But one of the really interesting facts to me, which is in the study you linked (and also in the other studies I've read covering the same type of survey data), is almost never highlighted.
When you actually dig into the survey data, what you find is that there is a radical problem with underemployment. So let's do the math on the median monthly household income. I do understand it is a median number, but it will give us a starting point for thinking about at least 50% of the individuals who are homeless.
Your study reports a median monthly household income of 960 dollars in the six months before homelessness. If that entire amount came from a single worker earning around the California statewide minimum wage at that time (about 14–15 dollars per hour in 2021–2022, ignoring higher local ordinances), that would correspond to roughly:
- 960 dollars ÷ 14 dollars/hour ≈ 69 hours per month, or about 16 hours per week.
- 960 dollars ÷ 15 dollars/hour ≈ 64 hours per month, or about 15 hours per week.
For leaseholders at 1,400 dollars per month, the same rough calculation gives:
- 1,400 dollars ÷ 14 dollars/hour ≈ 100 hours per month ≈ 23 hours per week.
- 1,400 dollars ÷ 15 dollars/hour ≈ 93 hours per month ≈ 21–22 hours per week.
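For anyone who wants to reproduce the arithmetic, here's a minimal sketch of the same back-of-the-envelope calculation. The 14–15 dollar wage range is my assumption based on the statewide minimum wage at the time, as above, not a figure taken from the report itself:

    # Implied hours worked, assuming the full median monthly income came from a
    # single earner at roughly the CA statewide minimum wage (an assumption).
    WEEKS_PER_MONTH = 52 / 12  # about 4.33

    def implied_hours(monthly_income, hourly_wage):
        hours_per_month = monthly_income / hourly_wage
        return hours_per_month, hours_per_month / WEEKS_PER_MONTH

    for income in (960, 1400):     # all participants vs. leaseholders
        for wage in (14, 15):      # assumed minimum-wage range, $/hour
            per_month, per_week = implied_hours(income, wage)
            print(f"${income}/mo at ${wage}/hr -> {per_month:.0f} h/month, {per_week:.0f} h/week")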
We need to solve the job issue. If thoughtful analysis is done on this, it may actually turn out that the lack of lodging is a secondary issue. It may be that the root issue, the real core problem, is the inability of a sub-segment of our population to hold a stable 40-hour-a-week job.
> We need to solve the job issue. If thoughtful analysis is done on this, it may actually turn out that the lack of lodging is a secondary issue. It may be that the root issue, the real core problem, is the inability of a sub-segment of our population to hold a stable 40-hour-a-week job.
It seems like a stretch to assume this is a jobs issue. You could make the same argument that it’s a lack of working enough hours. I’m not saying it’s either, simply that hours worked is not proof alone that the problem is the lack of jobs.
That said, housing prices continue to outpace household income [0], which should be a lot easier to explain as a cause of why many cannot afford housing where they were able to before. Especially in California, where there's a greater incentive to hold on to a house and extract rent from it due to Prop 13, and an infamous number of attempts to constrain housing supply through regulations and lawsuits.
Do me a favor. Tell me why you think it's a stretch (to assume that this is a jobs issue). This would appear to me to be an intuitive statement, and your reaction possibly comes from having already made up your mind. Unfortunately, after we make up our minds about something, our brains are heavily subject to confirmation bias, which makes it incredibly difficult to take in new information or consider new viewpoints. On the other hand, if you have a good, logical rationale, then it should be possible to lay it out fairly crisply.
However, I think it's intuitively obvious that there is a social contract under which people are expected to work a 40-hour work week. And when we find they can't work a 40-hour work week, and then they are homeless, this would appear to me to be a problem. Feel free to tell me why you think this would not be a problem.
In your reply to me, your way of dealing with the job issue is to simply take what you initially thought and provide yet one more graph. However, this doesn't meaningfully add anything to the conversation, because I already stated that there is clearly a correlation between housing and homelessness.
As I stated, I'm familiar with Gregg Colburn, whose methodology goes well beyond simply producing a FRED graph. In his methodology he basically looks at different geos and the different housing costs in those geos, and then ties that back to homelessness. There is no doubt that when housing becomes more expensive, people find themselves out on the street.
> Do me a favor. Tell me why you think it's a stretch (to assume that this is a jobs issue).
I already have in my prior comment:
>> You could make the same argument that it’s a lack of working enough hours. I’m not saying it’s either, simply that hours worked is not proof alone that the problem is the lack of jobs.
In other words, your logic is:
Assume rent should be this amount -> subtract last paycheck to arrive at difference -> assume hourly wages should be this amount -> divide paycheck difference by hourly wage -> assume the result is the number of hours unavailable for work -> assume lack of hours is the cause for inability to live in a home
Note how many assumptions there are. Some questions that may disqualify any chain of this reasoning:
* How much is the median rent in places where a majority of this population lives? Is it potentially higher where they were living?
* Has the rent to income ratio changed at all, especially in their location?
* Were the majority of these individuals making minimum wage before? Could they have been working gigs for less or more?
* Are the lack of “hours” worked really due to lack of work and not another factor (e.g. ability to work, transportation, skill, etc.)?
* How much is this population spending on other costs that have taken precedence over living in a house? Has that changed at all?
With all that said, a stretch is not implausible. In reality, there is no smoking gun, only a myriad of contributing factors, different for each individual.
Okay, I think I understand what happened. A couple posts ago you linked to an executive summary for CASPEH. I don't believe you've ever read the complete report, which is around 96 pages.
If you dig into the details, you'll actually find that all of your assumptions are spoken to well enough to come up with a reasonable estimate of hours worked in California, based upon the survey data from this research. The detailed report includes the following:
Median monthly household income in the six months before homelessness: $960 (all participants), $950 for non-leaseholders, $1,400 for leaseholders. Stating the obvious: if the overall $960 figure is treated as a weighted average of the two groups, you can run the math to show that the non-leaseholders were roughly 98% of the sample.
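A quick sketch of that back-solve (with the caveat that the report gives medians, so treating the overall figure as a weighted average of the two group figures is only the rough approximation I'm making here, not something the report states):

    # If the overall $960 figure were a weighted average of the two groups,
    # the non-leaseholder share w satisfies: 950*w + 1400*(1 - w) = 960.
    overall, non_lease, lease = 960, 950, 1400
    w = (lease - overall) / (lease - non_lease)
    print(f"Implied non-leaseholder share: {w:.1%}")  # roughly 97.8%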
Why we want to distinguish leaseholders from non-leaseholders: in reality, the non-leaseholder renters are where 98% of the problem exists. This is a clear application of the Pareto principle, and so we should look at those renters as the core of the homelessness issue.
Median monthly housing cost: $200 for non‑leaseholders (0 for many), $700 for leaseholders. Of non-leaseholders, 43% were not paying any rent; among those who reported paying anything, the median monthly rent was $450.
In essence, if you look at the details, you'll see that a lot of what you're calling assumptions is actually somewhat addressed by the detailed report. Unfortunately, I'm going to suggest the detailed report is pretty shabby in that it forces somebody to dig out a lot of information that should be offered in some sort of downloadable table for analysis.
Computationally, we can therefore figure out the minimum number of hours these people must have been working, based on the fact that they must have made at least minimum wage in the state of California.
There are not a lot of assumptions in this. It's based upon the detailed survey data and the California minimum wage, which is where the survey was taken. The work is in digging into the details and computationally extracting information, rather than being blinded by the biases we walk in with.
Again, there is excellent work out of University of Washington to suggest that higher housing costs lends itself toward greater rates of homelessness. That's not under debate here. The issue is from the survey data, it's very reasonable to do some basic computation to put some parameters around the data. It's not assumption, it's critical thinking.
You may be confusing jameslk with me - I'm actually the one who linked the CASPEH exec summary. Your underemployment math is interesting, but I'd note the study also reports 34% have limitations in daily activities, 22% mobility limitations, 70% haven't worked 20+ hours weekly in 2+ years. When asked why, participants cited disability, age, transportation, and lack of housing itself as barriers. So the causation may be more circular than "fix jobs first" as the same factors driving underemployment are driving housing instability, and being unsheltered makes holding a job harder.
But thank you for some very insightful comments and for actually digging into the details. And I do agree with your contention that there is some sort of circular system issue going on here (à la Jay Forrester out of MIT).
It is pretty interesting. While you reported everything perfectly, I'll just paste in the detailed section at the bottom as it does add a little more detail and really does give us something to think about. FDR in 1944 suggested that there should be a second bill of rights. In many ways I am attracted to his framework. In his second bill of rights, the very first one was "The right to a useful and remunerative job in the industries or shops or farms or mines of the nation."
It strikes me that having gainful employment, in which you feel like you are contributing in some way to society, is incredibly foundational to good mental health. I think FDR recognized this, and I don't think he was thinking about communism. I think he was indicating that we need to find worth for individuals. Of course, with World War II and his health issues, this somehow seemed to fall by the wayside.
This is not somebody telling somebody on the street to get a job. It's a question of how we enable people to get a job. And I believe that if there is an opportunity for the government to spend tax dollars, it may be in incentivizing employers to take these individuals on and be creative in how they employ them, in exchange for direct benefits. It's hard for me to imagine that there isn't some economic arrangement that would get businesses to show entrepreneurship here if we incentivize them correctly.
This doesn't mean that you don't figure out how to solve housing. It simply means that we think about things systemically.
"Participants noted substantial disconnection from labor markets, but many were looking for work.
Some of the disconnection may have been related to the lack of job opportunities during the pandemic, although participants did report that their age, disability, lack of transportation, and lack of housing interfered with their ability to work. Only 18% reported income from jobs (8% reported any income from formal employment and 11% from informal employment). Seventy percent reported at least a two-year gap since working 20 hours or more weekly. Of all participants, 44% were looking for employment; among those younger than 62 and without a disability, 55% were."
The vast majority (of those who accepted accommodation) destroyed the spaces and eventually fled back to the streets. It is generally not productive to simply rehome all the homeless en masse. There are first-order drug abuse and mental illness issues that cannot be ignored.
As with any survey, or most research really, it's the sample that determines the finding. Homelessness is not easy to define precisely. Drug addiction, setting aside the fact that surveys are self-reported, is a bit more cut and dried, but from your response it's not clear if alcohol is included, or drug history. Like, if someone did some bad shrooms or had a bad acid trip and wound up homeless, would that person be in the two-thirds?
> Just realize your personal experience isn't generalizable. Surveys I've seen report that about a third of homeless have drug problems, which means that the other two thirds may very well benefit from "give them homes to live in".
100 years ago people like Rob Reiner's drug addict son's dealer would probably have been hanging from a tree.
note: this is not commentary on drug legalization, just commentary that "community efforts" were more involved in addressing negative social externalities than they are now - for better or for worse.
Not likely at all, most likely the drugs wouldn't have even been illegal, but an addict would certainly have been housed and institutionalized. More than half of mental patients were alcoholics and addicts.
So you claim to know for certain that it virtually never happens that someone winds up homeless for financial reasons, like their rent got raised or they lost their job and couldn't find one that paid enough for the prevailing rents.
Perhaps you would be so kind as to explain how you determined this. Did you for instance survey homeless people in a number of US cities? Or perhaps you used some other method.
So far AFAIK this claim isn’t repeated by any reputable publishers. E.g. Associated Press and LA Times both published 2.5 hours after PEOPLE and did not make this claim.
Also, People is credible for this type of reporting. They're owned by a major company, IAC, and they don't have a history of reckless reporting or shady practices like catch-and-kill a la the National Enquirer. They likely just have sources that other news outlets don't.
>they don't have a history of reckless reporting or shady practices like catch-and-kill a la the National Enquirer
TIL that the 'National Enquirer' was the most reliable news source during the O. J. Simpson murder trial. According to a Harvard law professor who gave the media an overall failing grade, the 'Enquirer' was the only publication that thoroughly followed every rumor and talked to every witness. <https://np.reddit.com/r/todayilearned/comments/6n1kz5/til_th...>
The Enquirer also broke the John Edwards (vice-presidential candidate) affair story well before mainstream media picked it up. That doesn't make up for the reckless and sometimes completely nutso stories they print, but it is a reminder that they aren't always wrong.
That’s going a little far, I think. The Enquirer was mentioned during jury selection and not for facts. When the defense wanted to leak a story, they went to the New Yorker.
> The Independent reported 10 minutes ago that LAPD is still claiming no person of interest in this case.
>
> Hard to know what’s real and what’s gossip.
I'm sorry, but it's People. I'm not a celeb gossip, but I don't recall them running bs headlines on this level. C'mon.
I've been following it on my own news app as well, just didn't share a link to it as I thought it might be a bit ghoulish to piggyback on an unspeakably tragic celebrity death for a bit of self-promotion.
Also, frustrating that people have somehow landed in a place where they either trust nothing or trust everything, with no ability to calibrate based on the actual track record and incentive structure of the source. People magazine attributing something to "multiple sources" in a case where they, and their billionaire owner Barry Diller, would face massive defamation liability if wrong is categorically different from, say, an anonymous Reddit post or a tweet.
The LAPD "no person of interest" thing is also just standard procedure. Cops don't publicly name suspects until charges are filed. Totally normal that the official process is slower than journalism.
Worse, people take "fairly reliable mainstream news source makes mistake or publishes propaganda op-ed" as a pretext to jump to sources that are way, way less reliable but publish things they want to hear.
> Also, frustrating that people have somehow landed in a place where they either trust nothing or trust everything, with no ability to calibrate based on the actual track record and incentive structure of the source.
I don't read celebrity news, how should I know People's track record?
I don’t have a news app. That was a maybe-too-subtle bit of sarcasm aimed at the guy I was responding to, who is apparently the creator of a news app called Particle, and who mentioned that he is following the news of these deaths on Particle without mentioning his connection to it.
Update: Looks like the parent post has been flagged. I thought that might happen (or the author might edit it) which is why I quoted the original.
> People magazine attributing something to "multiple sources" in a case where they, and their billionaire owner Barry Diller, would face massive defamation liability if wrong is categorically different from, say, an anonymous Reddit post or a tweet.
They could simply name their source(s) if they wanted to be taken as credible. I don't think a brand has any inherent value, and it hasn't for many decades. The NYTimes helped Cheney launder fraudulent evidence for the invasion of Iraq, for chrissake.
Fwiw, maybe it is true. But reliable truth sailed a long time ago.
It's absolutely defamation if they have no or unreliable sources and something Reiner's son could sue over. They are a big enough publication to know the risks here.
They'll reveal those sources to a judge if it comes to it. They won't reveal them to the public because nobody wants to have their name attached to something like this.
It could still be false, but I somewhat doubt it is.
Meh. Information is often jumbled and wrong in the immediate aftermath of a newsworthy event, and it is tempting to accept tenuous claims which reinforce one's biases. Take the murder of Bob Lee, in which early reports were a bit off and convinced maaaaany people it was a street crime (confirming their biases about San Francisco).
There's no real advantage to accepting PEOPLE's claim at this point. It's possibly wrong, and we'll probably know the truth in good time.
The Bob Lee comparison doesn't really hold up. The "random street crime" narrative there was driven primarily by right-wing tech executives on social media - Musk, Sacks, etc. - not by news outlets making factual claims. Fox amplified the SF crime angle but wasn't naming suspects (and I put Fox in its own category anyway, based on its track record).
Meanwhile, actual newsrooms did reasonable work: the SF Standard put nine reporters on it and ultimately broke the real story. Other local outlets pushed back on whether SF crime was as "horrific" as tech execs claimed.
Most importantly: speculating about the type of crime (random vs. targeted) isn't defamation. Naming a specific living person as a killer is. That's a categorically different level of legal exposure, which is why outlets don't do it unless they're confident in their sourcing. If this kind of reckless misattribution happened as often as people here seem to imply, defamation lawyers would be a lot busier and these outlets would be out of business.
That's still a terrible way of evaluating credibility, especially when a determination of defamation is not the same thing as a determination of truth.
> It could still be false, but I somewhat doubt it is.
I wouldn't have felt bad if it did turn out to be wrong, I certainly left room open for doubt. But what I know about media outlets is they aren't often willing to put themselves in positions where they could get sued into oblivion.
There are obvious exceptions - Alex Jones, Glenn Beck, Candace Owens - but I think those exceptions have a level of insanity that powers their ability to make wild accusations without evidence.
“They could simply name their source(s) if they wanted to be taken as credible.”
Not if they want sources again in the future. Assuming they have credible sources, it will prove them correct in due course. The vast majority of people aren’t grading news outlets on a minute-by-minute basis like this: if they read in People first it was his son, and two weeks from now it’s his son, they’re going to credit People with being correct and where they learned it first.
And if People burned the sources who told them this, industry people would remember that, too.
> All credibility goes to the journalist. People is just a brand that hires journalists of a wide variety of credibility, like any publisher.
That's not how any of this works. Publications have editorial standards, fact-checking processes, and legal review. A story like this doesn't get published because one reporter decides to hit "post." It goes through layers of institutional vetting. An individual blogger has the same legal liability in theory, but they don't have lawyers vetting their posts, aren't seen as worth suing, and may not even know the relevant law. A major publication has both the resources and the knowledge to be careful and the deep pockets that make them an attractive target if they're not.
And "wide variety of credibility"... what? Do you think major outlets just hire random people off the street and let them publish whatever? There are hiring standards, editors, and layers of review. The whole point of a professional newsroom is to ensure a baseline of credibility across the organization.
Seems like you've reverse-engineered the Substack model, where credibility really does rest with the individual writer, and mistakenly applied it to all of journalism. But that's not how legacy media works. The institution serves as a filter, which is exactly why it matters who's publishing.
> That's not how any of this works. Publications have editorial standards, fact-checking processes, and legal review. A story like this doesn't get published because one reporter decides to hit "post." It goes through layers of institutional vetting.
This is certainly a popular narrative, but... c'mon, there isn't a single publication in existence that is inherently trustworthy because of "institutional vetting". The journalist is the entity that can actually build trust, and that "institutional vetting" can only detract from it.
> An individual blogger has the same legal liability in theory, but they don't have lawyers vetting their posts, aren't seen as worth suing, and may not even know the relevant law. A major publication has both the resources and the knowledge to be careful and the deep pockets that make them an attractive target if they're not.
This is also another easy way of saying "capital regularly determines what headlines are considered credible". That is not the same thing as actual credibility. Have you never read Manufacturing Consent?
Granted, I don't know why capital would care in this case. But the idea that "institutional integrity" is anything but a liability is ridiculous.
I've read Manufacturing Consent more than once - it's one of my favorite books and Chomsky one of my favorite thinkers (really dismayed that he associated with Epstein but I digress). Anyway, you've got it backwards.
The propaganda model is explicitly not "capital determines what headlines are credible." Chomsky and Herman go out of their way to distinguish their structural critique from the crude conspiracy-theory version where owners call up editors and dictate coverage. That's the strawman critics use to dismiss them.
The five filters work through hiring practices, sourcing norms, resource allocation, advertising pressure, and ideological assumptions - not direct commands from capital. The bias is emergent and structural, not dictated. Chomsky makes this point repeatedly because he knows the "rich people control the news" framing is both wrong and easy to dismiss.
It's also not a general theory that institutional journalism can't accurately report facts. Chomsky cites mainstream sources constantly in his own work - he's not arguing the New York Times can't report that a building burned down.
Applying the propaganda model to whether People magazine can accurately report on a celebrity homicide is a stretch, to put it mildly. You've taken a sophisticated structural critique and flattened it into "all institutional journalism is fake, trust nothing."
Speaking of media, I found it really unhelpful that before the names were published, the majority of news articles just said "78 and 68 year old persons found dead [RIP] at Rob Reiner's home", and I had to search for his and his wife's ages to work out that it was him and his wife. I think only one news article said, "authorities have not released the names, but those are the ages of Rob Reiner and his wife".
It's because they don't want to be wrong, while at the same time having to rush to publish because if they want clicks they need to be first. So they publish only what the cops initially tell them, even before they had time to inquire that the couple killed were indeed the residents.
That's a telltale sign of a news organization that doesn't have access to backroom sources.
I've always found it weird that the police cannot name them, but they can give out clues, even clues that are, to all intents and purposes, naming them.
Lol reminds me of that partially redacted document about the Titan submarine that imploded.
There was like "submarine expert number 2, name redacted" and in expert 2's testimony he said something like "you may recall from my film, Titanic, that..." and I mean it could be anyone or maybe is definitely James Cameron
That's not what was happening there. They weren't hiding the identity, it's that they had not positively identified the victims. The cops talked to journalists very fast.
They hadn't positively identified them, but they knew exactly how old they were?
It seems much more likely that they had identified them, but they hadn't gone through the full set of procedures (notifying family members, etc.) that are required before officially releasing names.
If that's the case, that's really just dumb sidestepping of compliance rules. How much difference does it make for a not-yet-notified family member to read "Persons aged [dad's age] and [mom's age] found dead at residence of [their last name]" compared to "Mr. and Mrs. [their last name] found dead"?
In any case, tragically, their daughter lived across the street and found them.
> Hard to imagine many things worse than going through the pain of having a kid who seemed lost, getting him back, and then whatever must have been going on more recently that apparently led to this
Also worth considering that Rob Reiner might have played his part in the roots of Nick's troubles ... after all Rob was his father and the drug problems started when he was still a child.
Are you a parent? Do you understand the challenges of parenting that includes the fact that "controlling" your child is ultimately impossible?
Yes, you can be attentive and engaged, ensure they're mentored and guided, are given discipline and instruction, etc. But they're still autonomous units that are going to do what the fuck they want to do.
Of course there are cases of abuse that can damage a child but that should not be a base assumption in this case (his other children appear to be fine).
We are not well-served by treating addiction as simply a matter of will-power and "character".
This isn't to say that we don't all bear responsibility for our conduct as best we can, but sometimes it's way more than that. It's like accusing a schizophrenic of not trying hard enough to "be normal".
Unfortunately, 600K people and counting are no longer in a condition to be restored...
> As of November 5th, it estimated that U.S.A.I.D.’s dismantling has already caused the deaths of six hundred thousand people, two-thirds of them children.
Ok, we've merged the (relevant) comments thither. Thanks!
Edit: Here's a bit of explanation for those curious. Even though the links are different, the test we use for whether to merge threads is whether they are substantially the same story vs. whether the two links will lead to substantially different discussion. In this case it's clear that it's the same discussion, so I merged them.
Since the second link has additional information, I've added it to the toptext of the original post. That way people can look at both.
The one-time head of the most elite academic institution as well as the US Treasury is an insecure 12-year-old boy at heart. Summers clearly saw Epstein as aspirational for his "success" with "women". But this isn't really new information about him. In 2005 he went in front of an audience including top women scientists at the National Bureau of Economic Research and essentially said the lack of women at the top of science was mostly about their lack of innate aptitude, not discrimination [1] (he gave multiple alternate "theories", but it was clear which one he actually believed). People immediately saw that for what it was: a powerful guy projecting his own hang-ups about women. That he has maintained his status over the last 20 years does not speak well of the US's most prestigious institutions.