Hacker News | comboy's comments

There was maybe a short period when the RPi offered decent compute for the money, if ever. But the whole time it has been about the ecosystem and simply being the most popular platform. Any hardware library you pick up, you know somebody tested it on this exact hardware with the same operating system. Hardware stuff can be really painful to debug; you don't want to also be wondering whether some pin is handled differently on your box than on an RPi, etc.


Yeah; honestly if you're going to integrate I2C/UART/SPI, cellular, serial bus stuff, PoE, or anything like that into a project, the Pi (4/5) makes that simple, and almost always painless.

Having well-supported GPIO and documented interfaces is nice, when you want to do anything outside normal 'compute' use cases.

The Pi 4 is still a great option for throwing into random spots for $35 and burning 1-2W of power. The Pi 5 less so, in that common homelab use case.

I wish they made a Pi Zero 2 non-W with an Ethernet port, for $15; that would be the perfect little 'more than microcontroller' endpoint for a lot of things.


> I wish they made a Pi Zero 2 non-W with an Ethernet port, for $15; that would be the perfect little 'more than microcontroller' endpoint for a lot of things.

It's not a Raspberry Pi, but this Radxa board is more powerful than a Pi Zero 2 and has the form factor you're looking for. Price isn't that bad either (around $25 for the cheapest model).

https://radxa.com/products/zeros/zero3e/


Ethernet w/PoE


There’s the Orange Pi Zero; they are pretty good. I have the very first version from circa 2016, and I like it. The only thing I’d change is to move from 32-bit to 64. But IIRC their very next version was 64-bit. And as of today there are three or four versions, all of which are pretty cheap.


The ecosystem (and in the early days, price) is about the only thing they’ve ever had going for them.

They’ve otherwise always been mediocre boards in pretty much every respect: slow, relatively power hungry, and powered by a set-top-box SoC that is NDAed out the wazoo.



My home builder tried to get us to use Crestron and we noped out of that. Their stuff feels like a relic of the X10 days and has bonkers price tags. A single light switch that can dim is $200-$300 from them, and that doesn't include the hub to control things, which costs (IIRC) $2-$3k.

In comparison, a Lutron switch is $70 and the hub is $50.


I wouldn't consider a Lutron light switch to be a direct comparison to Crestron. Crestron targets the ultra wealthy by being very reliable (assuming setup by a competent programmer) with unified control of pretty much everything household - shades, lights, audio, video, etc. They're aiming for the kind of people who will pay a premium to make sure their house just works, every time, without having to deal with tech issues.

You could certainly bodge together a similar system for less money, but the controls won't be as nice and it'll be nowhere near as hassle-free long term. Home Assistant and competitors have really been catching up in the past few years though; I'm excited to see competition in the market. I wish they could all play nice together with reasonable APIs :/


Lest this mislead anyone: that's a touch panel, not a thermostat. Pretty much all of Crestron's panels and processors (the brains of the system) ran some form of Windows Embedded. I believe they've switched the current generation over to Linux.


Welcome to mobile development.


PayPal's lawyers have gone over this situation thousands of times. If they were going to have any trouble with it, PayPal's behavior would already be different. AML/KYC and other bank regulations are so ridiculous and vague that, according to them, it seems perfectly normal to withhold somebody's money for a long time without even providing an explanation.


The most likely reason they don't have trouble with it is that the only consequence if they do get sued is that they have to actually have someone look at the case and release the money. All it costs them is a few minutes of their lawyer's time.

In cases where the victim just gives up, which seems incredibly common, they get to steal the money and they avoid the cost of having to review most cases, so it's obviously still a winning strategy.


If his net effect on the climate is positive, then you are only arguing that he could be even more efficient at it - but you are not in a position to do that without knowing all his personal context. From the outside you can only judge the net result - which is not a bad one.


I bet kids these days don't even know how to do a hostile channel takeover with a bunch of eggdrops.


*** Ja mata!


zk-SNARKs maybe?


For demonstrating verification of a conjecture, surely you can do much simpler things than a zero-knowledge proof: Send one of the primes.


It would still take a nontrivial amount of computation to do all the verification afterwards. Back-of-the-envelope calculations suggest it should take less than 100x longer to find the two primes than to verify them.
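Assuming the conjecture being checked is Goldbach-style (every even n splits as p + q with both prime), verification per number is just two primality tests once the smaller prime is shipped. A minimal Python sketch; the function names are mine, not from any actual project:

```python
def is_prime(n: int) -> bool:
    """Deterministic Miller-Rabin, correct for all 64-bit n."""
    if n < 2:
        return False
    small = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in small:
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for a in small:  # this witness set is known to suffice for n < 3.3e24
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def verify_goldbach_witness(n: int, p: int) -> bool:
    """Check a claimed decomposition n = p + (n - p), both prime."""
    return n % 2 == 0 and n > 2 and is_prime(p) and is_prime(n - p)
```

Finding the witness in the first place means trying candidates 3, 5, 7, ... until one works, which is presumably where the search-to-verify gap discussed above comes from.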


It'd be neat to do the verification in the same manner, by redistributing one client's results to another, thereby obtaining a proof modulo client collusion.


Say the smaller prime is less than 10,000. Then that's one or two bytes per number. E.g. 100 million numbers is already 100 MB or more.


I am curious about alternatives or solutions in such a setting / context.


I like how the critique of LLMs has evolved on this site over the last few years.

We are currently at "nonsensical pacing while writing novels."


The most straightforward way to measure the pace of AI progress is by attaching a speedometer to the goalposts.


Oh, that's a good one. And it's true. Most people seem massively unable to admit the mounting impact of modern AI development on society.


Oh, we do admit the impact, and even have a name for it: AI slop. (Speaking of LLMs here, since AI is a broad term and has many extremely useful applications in various areas.)


AI slop is soon to be "AI output that no one wanted to take credit for".


They certainly seem to have moved from "it is literally skynet" and "FSD is just around the corner" in 2016 to "look how well it paces my first lady Trump/Musk slashfic" in 2025. Truly world changing.


I've asked claude to explain what you meant... https://claude.ai/share/391160c5-d74d-47e9-a963-0c19a9c7489a


I’m not sure outsourcing even the comprehension of HN comments to an LLM is going to work out well for your mind.


I’m not sure lacking comprehension of a comment and choosing to ignore that lack is better. Or worse: asking everyone to manually explain every reference they make. The LLM seems a good choice when comprehension is lacking.


This is so on-point. Many things that we now take for granted from LLMs would have been considered sufficient evidence for AGI not all that long ago. Likely the only test of AGI is whether we can still come up with new goalposts.


Haha, so that's the first derivative of goalpost position. You could take the derivative of that to see if the rate of change is speeding up or slowing.


I love this comment.


It's not really passing the Turing Test until it outsells Harry Potter.


> It's not really passing the Turing Test until it outsells Harry Potter.

Most human-written books don't do that, so that seems to be a criterion for a very different test than a Turing test.


Both books that have outsold the Harry Potter series claim divine authorship, not purely human. I am prepared to bet quite a lot that the next isn't human-written, either.


The joke is that the goalpost is constantly moving.


This subgoal post can't move much further after it passes the "outsells the Bible" mark.


Why would the book be worth buying, though, if AI can generate a fresh new one just for you?


I don't know. It's a question relevant to all generative AI applications in entertainment - whether books, art, music, film or videogames. To the extent the value of these works is mostly in being social objects (i.e. shared experience to talk about with other people), being able to generate clones and personalized variants freely via GenAI destroys that value.


You may be right, on the other hand it always feels like the next goalpost is the final one.

I'm pretty sure if something like this happens, some dude will show up from nowhere and claim that it's just parroting what other, real people have written, just blended together and randomly spat out - "real AI would come up with original ideas like a cure for cancer," he'll say.

After some form of that comes, another dude will show up and say that this "alphafold while-loop" is not real AI, because he just went out for lunch and there was a guy flipping burgers - and that "AI" can't do that, so it's shit.

https://areweagiyet.com should plot those future points as well, with all those funky goals like "if Einstein had access to the Internet, Wolfram etc. he could have come up with it anyway, so it's not better than humans per se", or "it had to be prompted and guided by a human to find this answer, so it didn't really do it by itself" etc.


From Gary Marcus' (notable AI skeptic) predictions of what AI won't do in 2027:

> With little or no human involvement, write Pulitzer-caliber books, fiction and non-fiction.

So, yeah. I know you made a joke, but you have the same issue as the Onion I guess.


Let me toss a grenade in here.

What if we didn’t measure success by sales, but impact to the industry (or society), or value to peoples’ lives?

Zooming out to AI broadly: what if we didn’t measure intelligence by (game-able, arguably meaningless) benchmarks, but real world use cases, adaptability, etc?


I recently watched some Claude Plays Pokemon and believe it's a better measure than all those AI benchmarks. The game could be beaten by an 8-year-old, who obviously doesn't have all the knowledge that even small local LLMs possess, but has actual intelligence and could figure out the game within < 100h. So far Claude can't even get past the first half, and I doubt any other AI could get much further.


Now I want to watch Claude play Pokemon Go, hitching a ride on self-driving cars to random destinations and then trying to autonomously interpret a live video feed to spin the ball at the right pixels...

2026 news feed: Anthropic cited as AI agents simultaneously block traffic across 42 major cities while trying to capture a not-even-that-rare pokemon


the true measure of AI: does it have fun playing pokemon? did it make friends along the way?


We humans love quantifiability. Since you used the word "measure", do you believe the measurement you're aspiring for is quantifiable?

I currently assert that it's not, but I would also say that trying to follow your suggestion is better than our current approach of measuring everything by money.


> We humans love quantifiability.

No. Screw quantifiability. I don't want "we've improved the sota by 1.931%" on basically anything that matters. Show me improvements that are obvious, improvements that stand out.

Claude Plays Pokemon is one of the few really important "benchmarks". No numbers, just the progress and the mood.


This is difficult to do, because one of the juiciest parts of AI is being able to take credit for its work.


The goalposts will be moved again. Tons of people will clamor that the book is stupid and vapid and that only idiots bought it. When AI starts taking over jobs - which it already has - you'll get tons of idiots claiming the same thing.


Well, strictly speaking, outselling Harry Potter would fail the Turing test: the Turing test is about passing for human (in an adversarial setting), not surpassing humans.

Of course, this is just some pedantry.

I for one love that AI is progressing so quickly, that we _can_ move the goalposts like this.


To be fair, pacing as a big flaw of LLMs has been a constant complaint from writers for a long time.

There were popular writeups about this from the Deepseek-R1 era: https://www.tumblr.com/nostalgebraist/778041178124926976/hyd...


This was written on March 15. DeepSeek came out in January. "Era" is not language I would use for something that happened a few days ago.


This either ends at "better than 50% of human novels" garbage or at unimaginably compelling works of art that completely obsolete fiction writing.

Not sure which is better for humanity in the long term.


That could only obsolete fiction-writing if you take a very narrow, essentially commercial view of what fiction-writing is for.

I could build a machine that phones my mother and tells her I love her, but it wouldn't obsolete me doing it.


Ahh, now this would be a great premise for a short story (from the mom's POV).


We are, if this comment is the standard for all criticism on this site. Your comment seems harsh. Perhaps novel writing is too low-brow of a standard for LLM critique?


I didn't quite read parent's comment like that. I think it's more about how we keep moving the goalposts or, less cynically, how the models keep getting better and better.

I am amazed at the progress that we are _still_ making on an almost monthly basis. It is unbelievable. Mind-boggling, to be honest.

I am certain that the issue of pacing will be solved soon enough. I'd give 99% probability of it being solved in 3 years and 50% probability in 1.


In my consulting career I sometimes get to tune database servers for performance. I have a bag of tricks that yield about +10-20% performance each. I get arguments about this from customers, typically along the lines of "that doesn't seem worth it."

Yeah, but 10% plus 20% plus 20%... next thing you know you're at +100% and your server is literally double the speed!

AI progress feels the same. Each little incremental improvement alone doesn't blow my skirt up, but we've had years of nearly monthly advances that have added up to something quite substantial.
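For concreteness, those gains multiply rather than add, which is why "unimpressive" wins stack up. A quick sketch of the arithmetic (the numbers are illustrative, not from any real engagement):

```python
def compound(gains):
    """Combine fractional speedups multiplicatively (0.10 means +10%)."""
    total = 1.0
    for g in gains:
        total *= 1.0 + g
    return total

# Three modest tuning wins already compound to roughly +58%:
print(compound([0.10, 0.20, 0.20]))                    # ~1.58

# Six of them and the server really is more than twice as fast:
print(compound([0.10, 0.20, 0.20, 0.10, 0.15, 0.10]))  # ~2.2
```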


Yes, if you are Mary Poppins, each individual trick in your bag doesn't have to be large.

(For those too young or unfamiliar: Mary Poppins famously had a bag that she could keep pulling things out of.)


Except at some point the low hanging fruit is gone and it becomes +1%, +3% in some benchmarked use case and -1% in the general case, etc. and then come the benchmarking lies that we are seeing right now, where everyone picks a benchmark that makes them look good and its correlation to real world performance is questionable.


What exactly is the problem with moving the goalposts? Who is trying to win arguments over this stuff?

Yes, Z is indeed a big advance over Y was a big advance over X. Also yes, Z is just as underwhelming.

Are customers hurting the AI companies' feelings?


> Are customers hurting the AI companies' feelings?

No. It's the critics' feelings that are being hurt by continued advances, so they keep moving goalposts so they can keep believing they're right.


The goalposts should keep moving. That's called progress. Like you, I'm not sure why it seems to irritate or even amuse people.


People are trying to use gen AI in more and more use cases. It used to fall flat on its face at trivial stuff; now it's gotten past the trivial stuff but is still scratching at the boundaries of being useful. And that is not an attempt to make gen AI look bad - it is really amazing what it can do - but it is far from delivering on the hype, and that is why people are providing critical evaluations.

Let's not forget the OpenAI benchmarks saying 4.0 could do better at college exams and such than most students. Yet real-world performance was laughable on real tasks.


> Let's not forget the OpenAI benchmarks saying 4.0 could do better at college exams and such than most students. Yet real-world performance was laughable on real tasks.

That's a better criticism of college exams than of the benchmarks; and/or those exams likely have either the exact questions or very similar ones in the training data.

The list of things that LLMs do better than the average human tends to rest squarely in the "problems already solved by above average humans" realm.


I don’t know why I keep submitting myself to Hacker News, but every few months I get the itch, and it only takes a few minutes to be turned off by the cynicism. I get that it’s from potentially wizened tech heads who have been in the trenches and are being realistic. It’s great for that, but any new bright-eyed and bushy-tailed dev/techie, whatever, should stay far away until much later in their journey.


Do we have any simple benchmarks (and I know benchmarks are not everything) that test all the LLMs?

The pace is moving so fast I simply can't keep up. Or an ELI5 page which gives a 5-minute explanation of LLMs from 2020 to this moment?


It’s more a bellwether or symptom of a flaw where the context becomes poisoned and continually regurgitates the same thought over and over.


Not really new, is it? The first cars just had to approach horse-and-cart levels of speed. Comfort, ease of use etc. were non-factors, as this was "cool new technology".

In that light, even a 20 year old almost broken down crappy dinger is amazing: it has a radio, heating, shock absorbers, it can go over 500km on a tank of fuel! But are we fawning over it? No, because the goalposts have moved. Now we are disappointed that it takes 5 seconds for the Bluetooth to connect and the seats to auto-adjust to our preferred seating and heating setting in our new car.


lol wouldn’t that be great to read this comment in 2022


Has OpenAI hired McKinsey yet?


I'm unsure if you can lay off AI.


ai can.


unnecessary. mckinsey uses ai from openai.

embrace. extend. extinguish.

infiltrate. assimilate.

done, tovarisch ...

https://en.m.wikipedia.org/wiki/Tovarishch


If you have sweets in front of you and some cognitive dissonance in your head about how you both want and don't want them at the same time, this CPU time could be spent on more interesting things.

If you understand that you don't want them given a broader context, then it requires no self-discipline - you just don't want them.

But if you can't untangle your contradiction, i.e. if self-discipline is required, why would you want to spare any thought on that?

I understand your point about resisting temptation being a skill, a muscle that you train, that you want to master and that feels empowering. I'm just pointing out that when truly mastered it is 100% effortless - and when it is 100% effortless, it means you either can't have it or understand that you don't want it (which is the same as simply not wanting it).

Otherwise it's a cognitive dissonance, a distraction.

