This just returns us to the question — if it makes all these things so easy and fast, where are the AI-generated apps? Where is the productivity boost?
Would people start announcing that they're using AI to do their jobs for them? Would devs put "AI generated" banners all over their apps? No, because people are incentivised to hide their use of AI.
Businesses, on the other hand, announce headcount reductions due to AI, and of course nobody believes them.
If you're talking about normal people using AI to build apps, those apps are all over the place, but I'm not sure how you would expect to find them unless you're looking. It's not like we really need that many new apps right now, AI or not.
Given the amount of progress in AI coding in the last 3 years, are you seriously confident that AI won't increase programming productivity in the next three?
This reminds me of the people who said that we shouldn't raise the alarm when only a few hundred people in this country (the UK) got Covid. What's a few hundred people? A few weeks later, everyone knew somebody who did.
Okay, so if and when that happens, get excited about it _then_?
Re the Covid metaphor: that only works because Covid was the pandemic that did break out. It is arguably the first one in a century to do so. Most putative pandemics actually come to very little (see SARS1, various candidate pandemic flus, the mpox outbreak, various Ebola outbreaks, and so on). Not to say we shouldn't be alarmed by them, of course, but "one thing really blew up, therefore all things will blow up" isn't a reasonable thought process.
AI codegen isn't comparable to a highly-infectious disease: it's been a lot more than a few weeks. I don't think your analogy is apt: it reads more like rhetoric to me. (Unless I've missed the point entirely.)
From my perspective, it's not the worst analogy. In both cases, some people were forecasting an exponential trend into the future and sounding an alarm, while most people seemed to be discounting the exponential effect. Covid's doubling time was ~3 days, whereas the AI capabilities doubling time seems to be about 7 months.
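To put rough numbers on that comparison, here's a back-of-the-envelope sketch taking both quoted doubling times at face value (the 7-month AI figure is just the claimed trend, not an established fact):

```python
# Back-of-the-envelope comparison of the two exponentials, using the
# doubling times quoted above at face value (7 months ~ 213 days).

def growth_factor(days: float, doubling_days: float) -> float:
    """How much a quantity grows over `days` given its doubling time."""
    return 2 ** (days / doubling_days)

print(growth_factor(30, 3))     # COVID over one month: 2^10 = 1024x
print(growth_factor(365, 213))  # AI capabilities over one year: ~3.3x
```

Same shape of curve, very different timescales, which is part of why the two alarms feel so different in the moment.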
I think disagreement in threads like this often traces back to a miscommunication about the state of things today (or historically) versus the trajectory. Skeptics are usually saying: capabilities are not good _today_ (or worse: capabilities were not good six months ago when I last tested them; see this OP, which is pre-Opus 4.5). Capabilities forecasters are saying: given the trend, what will things be like in 2026-2027?
The "COVID-19's doubling time was ≈3 days" figure was the output of an epidemiological model, based on solid and empirically-validated theory, based on hundreds of years of observations of diseases. "AI capabilities' doubling time seems to be about 7 months" is based on meaningless benchmarks, corporate marketing copy, and subjective reports contradicted by observational evidence of the same events. There's no compelling reason to believe that any of this is real, and plenty of reason to believe it's largely fraudulent. (Models from 2, 3, 4 years ago based on the "it's fraud" concept are still showing high predictive power today, whereas the models of the "capabilities forecasters" have been repeatedly adjusted.)
The article provides a few good signals: (1) an increase in the rate at which apps are added to the app store, and (2) reports of companies forgoing large SaaS dependencies and just building them themselves. If software is truly a commodity, why aren't people making their own Jiras and Figmas and Salesforces? If we can really vibe something production-ready in no time, why aren't industry-standard tools being replaced by custom vibe clones?
> If we can really vibe something production-ready in no time, why aren't industry-standard tools being replaced by custom vibe clones?
That's a silly argument. Someone could have made all of those clones before, but didn't. Why didn't they? Hint: it's not because it would have taken them longer without AI.
I feel like these anti-AI arguments are intentionally being unrealistic. Just because I can use Nano Banana to create art does not mean I'm going to be the next Monet.
> Why didn't they? Hint: it's not because it would have taken them longer without AI.
Yes it is. "How much will this cost us to build" is a key component of the build-vs-buy decision. If you build it yourself, you get something tailored to your needs; however, it also costs money to make & maintain.
If the cost of making & maintaining software went down, we'd see people choosing more frequently to build rather than buy. Are we seeing this? If not, then the price of producing reliable, production-ready software likely has not significantly diminished.
I see a lot of posts saying, "I vibe-coded this toy prototype in one week! Software is a commodity now," but I don't see any engineers saying, "here's how we vibe-coded this piece of production-quality software in one month, when it would have taken us a year to build it before." It seems to me like the only software whose production has been significantly accelerated is toy prototypes.
I assume it's a consequence of Amdahl's law:
> the overall performance improvement gained by optimizing a single part of a system is limited by the fraction of time that the improved part is actually used.
Toy prototypes proportionally contain a much higher amount of the rote greenfield scaffolding that agents are good at writing. The stickier problems of brownfield growth and robustification are absent.
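A minimal sketch of that Amdahl's-law intuition, with made-up fractions (the 90%/20% splits and the 10x factor are hypothetical, just to show the shape of the effect):

```python
# Amdahl's law: overall speedup when a fraction p of the work is sped up
# by a factor s. The numbers below are hypothetical illustrations.

def overall_speedup(p: float, s: float) -> float:
    return 1.0 / ((1.0 - p) + p / s)

# Toy prototype: ~90% rote scaffolding that an agent writes 10x faster.
print(overall_speedup(0.9, 10))  # ~5.3x overall

# Brownfield production app: only ~20% of the work is that kind of scaffolding.
print(overall_speedup(0.2, 10))  # ~1.2x overall
```

Even with the same tool, the prototype sees a dramatic speedup while the production app barely moves.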
I would expect a general rise in productivity across sectors, but with the largest concentrated in the tech sector given the focus on code generation. A proliferation of new apps, new features, and new functionalities at a quicker pace than pre-AI. Given the hype, one would expect an inflection point in the productivity of this sector, but it mostly just appears linear.
I am very willing to believe that there are many obscure and low-quality apps being generated by AI. But this speaks to the fact that mere generation of code is not productive: generating quality applications requires other forms of labor that are not presently supplied by generative AI.
> A proliferation of new apps, new features, and new functionalities at a quicker pace than pre-AI
IMO you're not seeing this because nobody is coming up with good ideas; we're already saturated with apps. And apps are already releasing features faster than anyone wants them. How many app reviews have you read that say, "Was great before the last update"? Development speed and ability isn't the thing holding us back from great software releases.
I would expect a _big_ increase in the production of amateur/hobbyist games. These aren't demand-driven; they're generally passion projects. And that doesn't seem to be happening; Steam releases, for example, are actually modestly _down_.
It's not productivity-boosting in the sense of "you can leave 2h earlier", but in the sense of "you get more done faster", resulting in more stuff created. That's my general assumption/approach for "using AI to code".
When it comes to "AI-generated apps" that work out of the box, I do not believe in them. I think the tools are not good enough (yet?) for creating a "complete" app. Context and the like are required, especially for larger apps and for connecting the building blocks; I do not think there will be any remarkable apps coming out of such a process.
I see the AI tools as just a junior developer who will create data structures, functions, etc. when I instruct it to do so: it assists in code creation and optimization, but not in complete app architecture (maybe as a sparring partner).
> Given the repeatability crisis I keep reading about, maybe something should change?
The replication crisis — assuming that it is actually a crisis — is not really solvable with peer review. If I'm reviewing a psychology paper presenting the results of an experiment, I am not able to re-conduct the entire experiment as presented by the authors, which would require completely changing my lab, recruiting and paying participants, and training students & staff.
Even if I did this, and came to a different result than the original paper, what does it mean? Maybe I did something wrong in the replication, maybe the result is only valid for certain populations, maybe inherent statistical uncertainty means we just get different results.
Again, the replication crisis, to the extent that it exists, is not the result of peer review.
Have you ever gone running with a dog? Dogs can go fast over a short distance, but they overheat quickly. People just keep on running, long past the point where the dog has collapsed.
Generally speaking, foreign students subsidize public universities by paying full sticker price for tuition, whereas US students are either in state (paying less) or often receive scholarships and support.
Foreign students are not stealing “slots” from Americans. If anything, their tuition dollars make more slots available.
That assumes funding translates into more slots, which is not really true. The number of professors worth taking mentorship from and the number of research lab slots are certainly lagging the increased funding, if they are increasing at all.
The money might be going into nicer buildings or administrative costs, but those become a white elephant once the foreign funding dries up as the domestic situation improves for many internationals, at which point these universities find themselves in major trouble.
That's like trying to apply "the average American gets 20% of their fiber..."-type nutrition information to a 500 lb obese person.
It's probably true but not really meaningful in the broader context.
With the current US university funding model of easy money and federally backed loans, foreign students are just extra easy money on top of an already screaming money printer, rather than a noteworthy subsidy of universities' operations.
This disaster was exactly predicted by a ton of people, with foresight! To treat this as an unexpected outcome betrays the exact lack of seriousness that characterized this whole ordeal.
The issue is that the head of HHS has led a years-long campaign against vaccines, built atop shit-tier science and outright misinformation, and is part of a political movement that is growing increasingly anti-vaccine.
There is a clear possibility that the results will be cooked or otherwise fraudulent because the secretary will not take "no link" for an answer. Even if such a study is obviously deeply flawed or rigged, the damage it would do to public acceptance of vaccines would be unparalleled, measured in thousands (and likely more) of dead children.
The thing is that people like Thiel are actually quite smart. The problem is that people really overrate “smartness”. Smart people are often dumb as hell.
The thing is, people like Peter Thiel, Elon Musk, and Jeff Bezos are undoubtedly intelligent. They were all lucky, of course, but I would argue that no one with a below-average IQ can use their initial luck to create additional opportunities and generate millions of dollars' worth of wealth.
That being said, I feel like we need to be more nuanced when we discuss "intelligence", because it can have a multifaceted definition and people have a wide array of skill sets.
So perhaps a better question would be something along the lines of "Intelligent in what way?"
tbf, many materialists dislike the "degrees of consciousness" idea because a theory that posits "consciousness is on a spectrum" is one that starts to resemble panpsychism, which they consider magical woo.
That's fine for a software startup because it fundamentally doesn't matter. Who cares if your silly website fails after you experiment? No one gets seriously hurt.
Shutting off the government means that things can be irreparably damaged. Losing a generation of scientists because of random cullings at the NSF will have effects for decades.
In the worst case, "moving fast and breaking things" with the government will kill people. For example, many patients were kicked off clinical trials during the NIH funding freeze. Abroad, the end of PEPFAR could kill untold numbers of people.
The indirect is a negotiated flat rate that covers costs that would be too numerous or difficult to account for in the direct costs. How would you, as a researcher, budget a fractional portion of access to a supercomputer cluster into each and every grant you write? You would need to hire new accountants just to handle this!
The indirect rate is basically covering the whole infrastructure of research at a university. In theory it could all be put into direct costs, but, again, we'd get into tremendously difficult accounting.
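As a hypothetical illustration of how the flat rate works in practice (the 50% rate and dollar figures are invented; real rates are negotiated per institution):

```python
# Hypothetical indirect-cost calculation; the rate and amounts are made up
# for illustration, not any institution's actual negotiated figures.

direct_costs = 200_000   # salaries, materials, equipment budgeted in the grant
indirect_rate = 0.50     # negotiated flat rate applied to direct costs

indirect = direct_costs * indirect_rate  # covers facilities, admin, clusters, ...
total_award = direct_costs + indirect

print(indirect, total_award)  # 100000.0 300000.0
```

One multiplication replaces itemizing a sliver of every shared facility in every grant, which is the whole point of negotiating the rate once.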