Hacker News

> Why can't normal standard work have a press release? Why do we need to play pretend and add buzzwords just to make things sound "cool"?
> ...But that's just me being a bit bitter, perhaps...

Were you complaining as heavily about OCR or Markov chains ever being referred to as AI in their heyday?

The term “AI” is on an infinite treadmill, and the day it stops being usable as a time-sensitive reference is probably the day it surpasses humanity and becomes its own state.



You can make highly accurate predictions of what contrarians will say by assuming that they define AI as "whatever computers can't do yet."

LLMs aren't truly intelligent. [No True Scotsman fallacy...] They don't really reason. [A distinction asserted without giving a falsifiable definition of reasoning...] They're just next token predictors! [Which must be mutually exclusive with intelligence, I suppose?] Etc, etc, etc. Find your favorite pretext to dismiss modern AI, ignore the holes in the argument, and satisfyingly conclude that it's all smoke and mirrors.

Consequently you see hilarious takes from skeptics, like comparing today's enormous investment in AI to when people sold blockchain cartoon monkeys. Or claiming that modern models aren't useful for anything, as if they exist in an alternate reality where hundreds of millions of people don't use them daily, and there's no incessant firehose of new tools/products/results discussed constantly in news and social media.


It's not that, it's breathlessly proclaiming that techniques that have been standards for decades are "groundbreaking AI". The hyperbole makes it impossible to have a grounded discussion, and if you accurately propose a time-tested solution at work these days, it gets dismissed because it's "not AI". So now standard computer vision methods that aren't AI in any way are getting proclaimed as "AI". It's quite annoying, at least from the perspective of someone who does more or less this exact thing (geospatial analysis and data processing of various types) for a living.

Folks won't let you use the right tool for the job anymore unless you make wildly hyperbolic claims about how groundbreaking it is and claim it's cutting edge AI.

The situation is bad for everyone. There's nothing wrong with using the right tool for the job and accurately describing it. I'm tired of having to inaccurately describe methods to be allowed to use them. E.g. claiming a Hough transform is "deep learning" so folks won't immediately dismiss it and demand I use some completely incorrect approach to a simple problem.
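For anyone unfamiliar with the example: the Hough transform is a decades-old voting scheme with no learning involved, which is the point. Below is a minimal sketch of the classic Hough line transform in plain NumPy (the grid sizes and helper name are mine, not any particular library's API); each edge point votes for every (rho, theta) line that could pass through it, and the accumulator peak is the detected line.

```python
import numpy as np

def hough_peak(edge_points, img_diag, n_theta=180, n_rho=200):
    """Classic Hough line transform: lines are parameterized as
    rho = x*cos(theta) + y*sin(theta); each edge point votes for
    every (rho, theta) bin consistent with it, and the most-voted
    bin is the strongest line. Pure accumulation -- no learning."""
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    rhos = np.linspace(-img_diag, img_diag, n_rho)
    acc = np.zeros((n_rho, n_theta), dtype=int)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for x, y in edge_points:
        r = x * cos_t + y * sin_t  # rho for each candidate theta
        idx = np.round((r + img_diag) / (2 * img_diag) * (n_rho - 1)).astype(int)
        acc[idx, np.arange(n_theta)] += 1  # one vote per theta bin
    peak = np.unravel_index(acc.argmax(), acc.shape)
    return rhos[peak[0]], thetas[peak[1]], acc[peak]

# Edge points along the vertical line x = 5 (i.e. theta = 0, rho = 5)
pts = [(5, y) for y in range(20)]
rho, theta, votes = hough_peak(pts, img_diag=30)
# All 20 points vote into the same bin near (rho=5, theta=0)
```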




