Echoing the comments there... this seems like a colossally dumb move on their part. Is there any way this doesn't just end with a hard fork and some new player taking over where Arduino left off?


The other option is that Arduino simply fades away. Their hardware doesn't have anything to offer that you can't get on AliExpress or spin yourself for a tenth the cost.

The framework is the only arguably valuable thing they offer, but even that's not enough to prop a business up on.

Most likely everything will continue exactly as-is: Arduino hardware will become increasingly dated and undesirable, and open source Arduino-compatible libraries will continue flourishing until nobody remembers that Arduino was a hardware platform before it was a software framework.

I think we've long since passed the point where Wiring will ever go away, but I doubt we'll still be calling it Arduino for too much longer. Arduino is probably dead, and Espressif is moving in.


Yeah, I personally never really bought into Arduino. I got their Uno back whenever it came out but never really got into their whole IDE experience. My latest projects are on the esp32 using embassy, which so far has been going great. Interested to check out the rp2040 or rp2350 at some point, maybe. There are tons of interesting, easy options out there now.


A decade would be very quick. The amount of specialist knowledge that went into every part of this project is crazy. After a decade's worth of projects I doubt I'd be confident to tackle the steering and suspension design on something like this, let alone all the aero.

I've been working on cars for 20yr, I weld, I have done CAD/CAM/CAE stuff, rebuilt and modified engines, done custom suspension work... there are so many aspects of a project like this that are just completely unknown to me, like I wouldn't even know where to start. Many aspects of this build are not things you can really learn or research on your own.


Step 1: Get ~$250k+ in cash for the initial build.

Step 2: Start learning. If you don't know how to evaluate the work of your builder you may have a few false starts finding someone who can actually do it, which will cost you even more time and money.

Step 3: Learn some more. Owning a vehicle like this is a constant development effort. The work will never be "done" so unless you have a mechanic on retainer you will be working on it constantly.

In short, unless you have like a million dollars to spend on a toy and staff to keep it running you'll have to shoulder at least some of the effort.


The problem is entitled, lazy, sociopathic bastards who think everyone else is an idiot who should be killed and replaced with a robot. They're seeing a thing which confirms these biases, and reacting predictably. Unfortunately people like this are found in higher concentrations the further up the class hierarchy you go.


This too. Also, the amount of groupthink among the tech CEOs is evident in this and other trends.


> They consistently have the best or second best models.

This is the problem with your original argument. It assumes that having a "good model" (e.g. one that performs well on some benchmarks) has something to do with the real world. It doesn't. If you can show that it does, your thesis might have at least a glimmer of credibility.

The idea that a chatbot will somehow displace an operating system is the kind of absurdity that follows from making this error.


This is an excellent metaphor, so don't take this as criticism, merely an observation: it skews heavily towards the techno-utopian narrative that scam artists like Altman and Pichai keep harping on. Your techno-dystopia makes the same fatal assumption, that tech matters much at all. The internet has become television. That's it. It's not nothing, but it damn sure ain't everything, and it's just not all that important to most folks.


In the article Pichai is quoted saying:

> "It doesn't matter whether you want to be a teacher [or] a doctor. All those professions will be around, but the people who will do well in each of those professions are people who learn how to use these tools."

Bullshit. Citation very much needed. It's a shame--a shameful stain on the profession--that journalists don't respond critically to such absurd nonsense and ask the obvious question: are you fucking lying? It is absolutely not true that AI tools make doctors more effective, or teachers, or programmers. It would be very convenient for people like Pichai and Scam Altman, but that don't make it so.


There are hundreds of thousands of developers here, including me, who would vouch that AI does make them vastly more productive.


And AI skeptics are waiting to see the proof in the pudding. If we have a new tool that makes hundreds of thousands of devs vastly more productive, I expect to see the results of that in new, improved software. So far, I'm just seeing more churn and more bugs. It may well be the case that in a couple years we'll see the fruits of AI productivity gains, but talk is cheap.


The proof is in feature velocity of devs/teams that use it and in the layoffs due to efficiency gains.

I think it's very hard to convince AI skeptics since for some reason they feel more threatened by it than the rest. It's counterproductive and will hinder them professionally, but then it's their choice.


Without rigorous, controlled study I'm not ready to accept claims of velocity, efficiency, etc. I'm a professional software engineer, and I have tried various AI tools in the workplace, both for code review and development. I personally found that they were more harmful than helpful. But I don't think my personal experience is really important data here, just like I don't think yours is. What matters is whether these tools actually do something, or whether instead they just make some users feel something.

The studies I've seen--and there are very few--seem to indicate the effect is more placebo than pharmacological.

Regardless, breathless claims that I'm somehow damaging my career by wondering whether these tools actually work are going to do nothing to persuade me. I'm quite secure in my career prospects, thank you kindly.

I do admit I don't much like being labeled an "AI skeptic" either. I've been following developments in machine learning for like 2 decades and I'm familiar with results in the field going back to the 1950s. You have the opportunity here to convince me, I want to believe there is some merit to this latest AI summer. But I am not seeing the evidence for it.


You say you've used AI tools for code review and development, but do you ever just use ChatGPT as a faster version of Google for things like understanding a language you aren't familiar with, finding bugs in existing code, or generating boilerplate?

Really I only use ChatGPT and sometimes Claude Code; I haven't used these special-cased AI tools.


> You have the opportunity here to convince me, I want to believe there is some merit to this latest AI summer. But I am not seeing the evidence for it.

As I said, the evidence is in companies not hiring anymore since they don't need as many developers as before. If you want rigorous controlled studies you'll get them in due time. In the meantime maybe just look into the workflows of how people are using them.

re AI skeptics: I started pushing AI in our company early this year, and one of the first questions I got was "are we doing it to reduce costs". I fully understand and sympathize with the fact that many engineers feel threatened and feel they are being replaced. So I clarified it's just to increase our feature velocity, which was my honest intention since ofc I'm not a monster.

I then asked this engineer to develop a feature using bolt, and he partially managed to do it, but in the worst way possible. His approach was to spend no time on planning/architecture and to just ask the AI to do it in a few lines. When hit with bugs he would ask the AI "to fix the bug" without even describing the bug. His reasoning was that if he had to do this prep work then why would he use AI at all. Nonetheless he burned through an entire month's worth of credits in a single day.

I can't find the proper words, but there's a certain amount of dishonesty in this attitude that really turns me off. Like TurboTax sabotaging tax reform so they can rent-seek.


> If you want rigorous controlled studies you'll get them in due time.

I hope so, because the alternative is grim. But to be quite honest I don't expect it'll happen, based on what I've seen so far. Obviously your experience is different, and you probably don't agree--which is fine. That's the great thing about science. When done properly it transcends personal experience, "common sense", faith, and other imprecise ways of thinking. It obviates the need to agree--you have a result, and if the methodology is sound then, in the famous words of Dr. Malcolm, "well, there it is." The reasons I think we won't get results showing AI tooling meaningfully impacts worker productivity are twofold:

(1) Early results indicate it doesn't. Experiences differ of course but overall it doesn't seem like the tools are measurably moving the needle one way or the other. That could change over time.

(2) It would be very much in the interests of companies selling AI dev tools to show clearly and inarguably that the things they're selling actually do something. Quantifying this value would help them set prices. They must be analyzing this problem, but they're not publishing or otherwise communicating their findings. Why? I can only conclude it's because the findings aren't favorable.

So given these two indications, at this point in time a placebo-like effect seems most likely. That would not inspire me to sign a purchase agreement. This makes me afraid for the economy.


A month's worth of credits? What does that mean?


Productive how? If you’re not measuring the thing, how can you tell that the thing improved?


Feature velocity has increased manyfold. And this is despite decreased team size.

A small startup now has the chance to compete with incumbents without raising from VCs.

Not talking about you, but I think cynics just look at it with too much pessimism and therefore can't see it.


It's not really about optimism or pessimism, it's effect vs no effect. Self-reported anecdotes like yours abound, but as far as I'm aware the effect isn't real. That is, it's not in fact true that if a business buys AI tools for its developers their output will increase in some way that impacts the business meaningfully. So while you may feel more productive using AI tooling, you probably aren't.


If there was no effect we wouldn't be seeing so many mass layoffs and people complaining about this job market being worse than even the dot-com crash.


No. If you're trying to make a causal link between some layoffs and AI tooling you need to bring the receipts. Show that the layoffs were caused by AI tooling, don't just assume it. I don't think you can, or that anyone has.


Most public companies aren't attributing it to AI yet due to fear of backlash, but it's obvious to most of us.

And startups and SMBs are more open to attributing it to efficiency gains due to AI.

If you aren't seeing it then probably it's because you are in a niche vertical. Which is great for you, but not representative on the whole.


I am very much not an AI skeptic--I use AI every day for work--and it's quite clear to me that most of the layoffs of the past few years are correcting for the absurd overhiring of the Covid era. Every software company really convinced themselves that they needed like 2-3x the workforce they actually did because "the world changed". Then it became clear that the world in fact did not fundamentally change in the ways they thought.

ChatGPT just happened to come out around the same time, so we get all this misattribution.


I agree, the first thing I asked Copilot to write me was a feature estimator:

    def mult2(points): return 2 * points

It's going to be really interesting to see what happens to this labor pool once you've finally sunk all the ships you're let aboard.


> Feature velocity has increased manyfold.

And yet, every time somebody not paid by the AI companies tries to measure it, they don't find it.


From what I hear, many doctors pass X-ray and MRI scans to ChatGPT for a... second opinion :-)


> frontier labs, which had already produced a product that was going to eviscerate Google Search (and therefore, Google ad revenue)

> If Google does nothing, they lose.

Is any of that actually true though? In retrospect, had Google done nothing their search product would still work. Currently it's pretty profoundly broken, at least from a functional standpoint--no idea how that impacts revenue, if at all. To me it seems like Google in particular took the bait and went after a paper tiger, and in doing so damaged their product.


Even before the recent "AI improvements", Google search was broken and ad-invaded for us tech nerds. But for the average Joe it was still okay up until recently, because it served the purpose of whatever normal people use search for: finding rumors about their favorite celebs, finding car parts information, or just "buy X".

The problem for Google is that, for a good chunk of normal non-techy people, LLM chats look like talking to a genius superintelligence, and they haven't been burned by it yet. So they trust it.

And now a good chunk of non-tech people go and ask ChatGPT instead of using Google search. And they do it simply because it's less enshittified than Google search.


I wonder: is Google's AI investment a rational reaction to real competition, or something else? My strong suspicion is that it's in fact driven by delusional beliefs held by their management--something to do with "AGI"--perhaps combined with the effects of information monoculture, social isolation, and groupthink. It seems a simpler explanation that a very small group of people is behaving insanely than that a very large number is.


I'm honestly clueless about the reasoning behind bigtech investment in AI. To me it all just looks like another seasonal fad, like the many we've had over the last two decades. Everyone invests in AI because of FOMO.

I know the tech itself is real and people do use it. And it will certainly change the world. Yet I doubt even a fraction of the money burnt on it will ever be recouped, because of the race to the bottom.

But yeah - I'm just a random tech guy who has not built a big successful company and honestly has very little clue how to make money this way.


> I'm just a random tech guy who has not built a big successful company and honestly has very little clue how to make money this way.

Hey, me too :)

I’ve been at this for a couple of decades, though, and from what I’ve seen the key to building a “successful” company is to ride the wave of popular interest to get funding, build an effective team, and then (and only then) try to find a way to make it profitable enough to exit.

I do think “AI” (really, LLMs, and GPTs in particular) are going to have a transformative impact on a scale and at a rate we’ve never seen before - I just have zero confidence that I can accurately predict what it’s going to look like when the dust settles.


Users still googled before; now they just move to chatbots. Regular people don't really notice the search degradation as much, and enshittification helps Google, as revenues kept going up. Chatbots are an existential threat since they will add ads, and that's where Google's ad revenue dies.


Did any users actually move to chatbots? By which I don't mean the 0.001% of tech nerds who buy ChatGPT subscriptions, but in aggregate did a meaningful number of Google searchers defect to ChatGPT or other LLM services? I really doubt that. Data would be interesting but there's a credibility problem...


Yes. People do use them and they trust them, unfortunately.

Tech nerds know what ChatGPT is, they somewhat know LLM limits, and they know it hallucinates. Normal people do not - for them it's a magical all-knowing oracle.


> People do use them and they trust them, unfortunately.

Yep, and it’s hard to communicate that to them. It’s hard to accurately describe even to someone familiar with the context.

I don’t think “trust” is the right word. Sitting here on 19 Nov 2025, I do in fact trust LLMs to reason. I don’t trust LLMs to be truthful.

If I ask for a fact, I always consider what I’d lose if that fact were wrong.

If I ask for reasoning, I provide the facts that I believe are required to make the decision. I then double-check that reasoning by inverting the prompt and comparing the output in the other direction. For more critical decisions, I make sure I use different models, from different providers, with completely separate context. If I’ve done all that, I think I can honestly say that I trust it.

These days, I would describe it as “I don’t trust AI to distinguish truth”
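
For the curious, here's roughly what that cross-checking workflow looks like mechanically. This is a hypothetical sketch--the facts, question, and model names are made-up placeholders--and it assumes the official openai and anthropic Python SDKs with API keys set in the environment:

    # Hypothetical sketch: same facts, forward and inverted framings,
    # two providers, and no shared context between any of the four calls.
    from openai import OpenAI
    import anthropic

    FACTS = "Facts: peak load is 200 rps; the p99 latency budget is 250 ms."
    FORWARD = "Given these facts, should we add a cache in front of the database?"
    INVERTED = "Given these facts, argue why adding a cache would be a mistake."

    def ask_openai(prompt):
        client = OpenAI()  # fresh client and conversation on every call
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder; use whatever is current
            messages=[{"role": "user", "content": f"{FACTS}\n\n{prompt}"}],
        )
        return resp.choices[0].message.content

    def ask_anthropic(prompt):
        client = anthropic.Anthropic()  # different provider, separate context
        resp = client.messages.create(
            model="claude-sonnet-4-20250514",  # placeholder; use whatever is current
            max_tokens=1024,
            messages=[{"role": "user", "content": f"{FACTS}\n\n{prompt}"}],
        )
        return resp.content[0].text

    # Compare the forward answers across providers, then check each against
    # the inverted framing before trusting the reasoning.
    for label, answer in [
        ("openai/forward", ask_openai(FORWARD)),
        ("anthropic/forward", ask_anthropic(FORWARD)),
        ("openai/inverted", ask_openai(INVERTED)),
        ("anthropic/inverted", ask_anthropic(INVERTED)),
    ]:
        print(f"--- {label} ---\n{answer}\n")

The point is that nothing is shared between the four calls, so if the forward answers agree with each other and survive the inverted framing, the agreement actually carries some signal.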


I don’t have data for it, and would love to dig it up at some point. My head is too deep in a problem at the moment to make space for it …but I did just add it to my task list via ChatGPT :)

Anecdotally, I believe they did.

My wife is decidedly not a tech nerd, but had her own ChatGPT subscription without my input before I switched us over to a business account.

My mother is 58, and a retired executive for a “traditional” Fortune 100 company. She’s competent with MS productivity tools and the software she used for work, but has little interest outside that. She also had her own ChatGPT subscription.

Both of them were using it for at least a large subset of what they’d previously used Google for.


Gemini, ChatGPT and probably all of the others have free tiers that can be used as an enhanced web search. And they're probably better in many regards, since they do the aggregation directly. Plus regular users don't really check sources, can't really verify the trustworthiness of a website, etc, so their results were always hit or miss.


As someone who deeply dislikes using chatbots for information, there is a lot of stuff that is easily and reliably answered by GPT.

You must know the limitations of the medium, but something like how long and at what temperature I should bake my broccoli is so fucking annoying to search for on Google.


> Is any of that actually true though?

¯\_(ツ)_/¯

I don’t think it matters.

Google has the capital to spend, and this effort needn’t succeed to be worthwhile. My point is that the scope of the potential future risk more than justifies the expense.

> and in doing so damaged their product

Only in objective terms.

The overall size of the market Google is operating in hasn’t changed, and I’m not aware of anyone positioned to provide a better alternative. Even if we assume that Google Search has gotten worse as a result of this, their traditional competitors aren’t stealing marketshare. They’re all either worse than the current state of Search, are making the same bet, or both.


I've heard the same breathless parroting of the marketing hype at large O(thousands ppl) cloud tech companies. A quote from leadership:

> This is existential. If we aren't early adopters of AI tools we will be left behind and will never catch up.

This company is dominant in the space they operate in. The magnitude of the delusion is profound. Ironically, this crap is actually distracting and affects quality, so it could affect competitiveness--just not how they hope.


I've seen the same trend. AI needs to be everywhere, preferably yesterday, but apart from hooking everything up to an LLM without regard for the consequences, nobody seems to know what the AI is supposed to do.


The next time you're working on your car, google bolt torque specs and cross-reference the shit their "AI" says with the factory shop manual. Hilarity ensues.

