
What is the definition of "soaring"? The charts in the article showed that the percentage of companies adopting AI for automation has increased 3X. At least 40% of companies pay for GenAI, and at least 10% of employees use GenAI daily. Combined with the fact that companies like OpenAI and Anthropic frequently run out of capacity, how is AI use not soaring?




- If Microsoft bundles Copilot into their standard Office product, you become a company that pays for AI even if you didn't opt in

- Accidentally tapping the AI mode on Google search counts as an AI search. DDG doesn't even wait for you to tap and triggers an AI response. It still counts as AI use even if you didn't mean to use it

- OpenAI, Google and Microsoft have been advertising heavily (usage will naturally go up)

- Scammers using GenAI to scam increases AI usage and GenAI is GREAT for scammers

- Using AI after a meeting to get a summary is nice, but not enough to make a visible impact on a company's output. Most AI usage falls in this bucket

This tech was sold as civilisation-defining. Not GPT-X, but the GPT that is out now. Tech that was "ready to join the workforce", while the reality is that these tools are not reliable in the sense he implied. They are not "workers" and won't change the output of your average company in any significant way.

Sweet-talking investors is easy, but walking the talk is another thing altogether. Your average business has no interest in, or time for, supervising a worker that behaves unpredictably at random times and doesn't learn not to make mistakes when told off.


Those two sets of facts can be true at the same time.

40% of companies and 10% of employees can be using AI daily, but just for a small number of tasks, and that usage can be leveling off.

At the same time, AI can be so inefficient that servicing this small amount of usage is running providers out of capacity.

This is a bad combination because it points to the economic instability of the current system. There isn't enough value to drive higher usage and/or higher prices, and even if there were, the current costs are exponentially higher.


  > at least 10% of the employees use GenAI daily.
Remember that this includes people who are forced to use it (otherwise they wouldn't meet KPIs and could expect conversations with HR).

How much of this usage is replacing a web search or spelling/grammar checks with something orders of magnitude more costly?

There was a dip in the first chart in the article; it also shows something like 9% of companies using it.

What I wonder, beyond "using" AI, is what value companies are actually seeing. Revenue at both OpenAI and Anthropic is growing rapidly at the moment, but it's not clear whether individual companies are really growing their usage, or whether it's just everyone starting to try it out.

Personally, I have used it sparingly at work, as the lack of memory seems to make it quite difficult to use for most of my coding tasks. I see other people spending hours or even days trying to craft sub-agents and prompts, but not delivering much, if any, output above average. Any output that looks correct but really isn't causes a number of headaches.

For the VCs, one issue is the constant increase in compute. Currently it looks to me like every new release is only slightly better, while the compute and training costs increase at the same rate. The AI companies need end users to need their product so much that they can significantly raise prices. I think this is what they want to see in "adoption": demand so high that they can see a future of increasing prices.


I don't want to be all "did you read the article?" since that's against guidelines, but the text of the article (the stuff in between the graphics and ads) is kind of about exactly that.

Adoption was widespread at first but seems to have hit a ceiling and stayed there for a while now. Meanwhile, there's been little evidence of major changes to net productivity or profitability where AI has been piloted. Nobody is pulling away with radical growth/efficiency for having adopted AI, and in fact the entire market of actual goods and services is mostly still just stagnating outside of the speculative investment being poured into AI itself.

Investment isn't just about making a bet on whether a company/industry will go up or down, but about making the right bet on how much it will do so over what period of time. The scale of AI investment over the last few years was a bet that AI adoption would keep growing very, very fast and would revolutionize the productivity and profitability of the firms that integrated it. That's not happening yet, which suggests the bet may have been too big or too fast, leaving a lot of investors in an increasingly uncomfortable position.


I get confused about the word "adoption". Does adoption mean that a company tried AI, determined it useful, and continues to use it? Just trying something out is not adoption in my mind. Companies try and abandon things all the time.

It has been my experience that technology has to perform significantly better than people do before it gets massively adopted. Self-driving cars come to mind. Tesla has self-driving that almost works everywhere, but Waymo has self-driving that really works in certain areas. Adoption rates among consumers have been much higher with Waymo (I was surrounded by four yesterday), and they are expanding rather rapidly. I have yet to see a self-driving Tesla.


It's the engagement fallacy all over again.

Companies are shoving AI into everything and making it intrusive into everyone's workflow. Thus they can show how "adoption" is increasing!

But adoption and engagement don't equal productive, useful results. In my experience they simply don't, and the bottom is going to fall out of all these adoption metrics when people see the productivity gains aren't real.

The only place I've seen real utility is coding. All other tasks, such as using Gemini for document writing, produce something that's about 80% okay and 20% errors and garbage. Going back through with a fine-toothed comb to root out the garbage is actually more work and less productive than simply writing the darn thing from scratch.

I fear that the future of AI driven productivity is going to push a mountain of shoddy work into the mainstream. Imagine if the loan documents for your new car had all the qualities of a spam email. It's going to be a nightmare for the administrative world to untangle what is real from the AI slop.


So with 10X the employees using it and double the current number of companies (so nearly 100%), will it finally justify the investment? I’m guessing not.


