> It isn't? What is stopping companies from building on GPT-OSS or other local models for cheaper? The AI services have no moat.
Right now there is an efficiency/hardware moat. That's why Stargate in Abilene and the corresponding build-outs in Louisiana and elsewhere are some of the most intense private-sector capex projects ever. Hardware and electricity production are the name of the game right now. This Odd Lots podcast is really fresh and relevant to this conversation: https://www.youtube.com/watch?v=xsqn2XJDcwM
Local models, local agents, local everything, and the commodification of LLMs, at least for software engineering, is inevitable IMO, but a lot of the tooling for that commodified experience hasn't been built yet. For companies rapidly looking to pivot to AI force multiplication, the hyperscalers are the answer for now. I think it's a highly inefficient approach for technical orgs, but time will create efficiency. As for your average Joe feeding data into an LLM, I don't think those orgs (think your local city hall, or state DMV) are ever going to run local models. So there is a captured market, to some degree, for the current hyperscalers.
> It isn't? What is stopping companies from building on GPT-OSS or other local models for cheaper? The AI services have no moat.
I agree with your second paragraph. The boom in the AI market is occluding a general bear market.