
The analogy you're making is that hiring a taskrabbit to assemble Ikea furniture is woodworking.

There's a market for Ikea. It's put woodworkers out of business, effectively. The only woodworkers that make reasonable wages from their craft are influencers. Their money comes from YouTube ads.

There's no shame in just wanting things without going to the effort of making them.


Love this extension of the analogy. Especially because, like a woodworker inspecting IKEA furniture assembled by a taskrabbit, the craftsmanship of the finished product becomes less and less impressive the longer you inspect it.

> For my whole life I’ve been trying to make things—beautiful elegant things.

Why did you stop? Because, if you think about it, using LLMs is giving up the process of creating for the immediacy of having. It's paying someone else to make for you.

Things are more convenient if you live the dream of the LLM and hire a taskrabbit to run your wood shop. But it's not you doing the making.


There's undefined behavior, which is quite well specified. What do you mean by unspecified behavior? Do you have an example?
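
For reference, a minimal C sketch contrasting the two terms as the standard uses them; this is my illustration, not something from the thread:

    #include <limits.h>
    #include <stdio.h>

    static int f(void) { printf("f "); return 1; }
    static int g(void) { printf("g "); return 2; }

    int main(void) {
        /* Unspecified behavior: the evaluation order of f() and g()
           is left open; printing "f g" or "g f" are both conforming. */
        int sum = f() + g();

        /* Undefined behavior: signed overflow. The standard places
           no requirements at all on what happens here, which is what
           makes it "well specified" as a category. */
        int x = INT_MAX;
        x = x + 1;

        printf("\nsum=%d x=%d\n", sum, x);
        return 0;
    }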


If you put a brick on the accelerator of a car and hop out, you don't get to say "I wasn't even in the car when it hit the pedestrian".

This is true for bricks, but it is not true if your dog starts up your car and hits a pedestrian. Collisions caused by non-human drivers are a fascinating edge case for the times we're in.

It is very much true for dogs in that case: (1) it is your dog, (2) it is your car, (3) it is your responsibility to make sure your car cannot be started by your dog, and (4) the pedestrian has a reasonable expectation that a vehicle parked without a person in it has been made safe to the point that it will not suddenly start to move without an operator, and dogs don't qualify as operators.

You'd lose that lawsuit in a heartbeat.


What if your car was parked in a normal way that a reasonable person would not expect could be started by a dog, but the dog did several things no reasonable person would expect and started it anyway?

You can 'what if' this until the cows come home but you are responsible, period.

I don't know what kind of driver's education you get where you live, but where I live and have lived, one of the basics is knowing how to park and lock your vehicle safely: that includes removing the ignition key (assuming your car has one) and setting the parking brake. You aim the wheels at the kerb (if there is one) when you're on an incline. And if you drive a stick shift you set the gear to neutral (in some countries they will teach you to set the gear to 1st or reverse, for various reasons).

We also have road worthiness assessments that ensure that all these systems work as advertised. You could let a pack of dogs loose in my car in any external circumstance and they would not be able to move it, though I'd hate to clean up the interior afterwards.


I agree. The dog smashed the window, hot-wired the ignition, released the parking brake, shifted to drive, and turned the wheel towards the opposite side of the road where a mother was pushing a stroller, killing the baby. I know, crazy right, but I swear I'm not lying, the neighbor caught it on camera.

Who's liable?

I think this would be a freak accident. Nobody would be liable.


Your analogy has long since ceased to have any illuminating power, because it involves things that are straight up impossible.

You would not be guilty of a crime, because that requires intent.

But you would be liable for civil damages, because that does not. There are multiple theories under which liability could be established, but most likely this would be treated as negligence.


What was I negligent about?

Well, at that point we might as well say it's gremlins that you summoned, so who knows; there are no laws about gremlins hot-wiring cars. If you summoned them, are they _your_ gremlins, or do they have their own agency? How guilty are you, really? At some point it becomes a bit silly to go into what-if scenarios; it helps to look at exact cases.

> I agree. The dog smashed the window, hot-wired the ignition, released the parking brake, shifted to drive, and turned the wheel towards the opposite side of the road where a mother was pushing a stroller, killing the baby. I know, crazy right, but I swear I'm not lying, the neighbor caught it on camera.

> Who's liable?

You are. It's still your dog. If you replaced the dog with a child the case would be identical (but more plausible). This is really not as interesting as you think it is. The fact that you have a sentient dog is going to be laughed out of court, and your neighbor will be in the dock together with you for attempting to mislead the court with your AI-generated footage. See, two can play at that.

When you make such ridiculously contrived examples, turnabout is fair play.


Found the annoying kid in my law school class.

You're stretching it. It's more like training your dog to start the car and accelerate, then opening the door and turning your back.

Everything an AI does is downstream of deliberate, albeit imperfect, training.

You know this, you rig it all up and you let things happen.


Legally, in a lot of jurisdictions, a dog is just your property. What it does, you did, usually with presumed intent or strict liability.

What if you planted a bush that attracted a bat that bit a child?

What if you have an email in your inbox warning you that 1) this specific bush attracts bats and 2) there were in fact bats seen near your bush and 3) bats were observed almost biting a child before. And you also have "how do I fuck up them kids by planting a bush that attracts bats" in your browser history. It's a spectrum, you know.

Well, if it was a bush known to also attract children, it was on your property, and the child was in fact attracted by it and also on your property, and the presence of the bush created the danger of bat bites, the principle of “attractive nuisance” is in play.

what if my auntie had wheels, would she be a wagon?

Would a reasonable person typically consider this an act that risks causing harm to kids?

In the USA, at least, it seems pet owners are liable for any harm their pets do.

Being guilty != Being responsible

They correlate, but we must be careful not to mistake one for the other. The latter is a lower bar.


I don’t know where you’re from, but at least in Sweden you have strict liability for anything your dog does.

I'm dubious; do you have any examples of this happening?

Prima facie negligence = liability

With all due respect, that's all the more reason to put all the resources they can behind browser development, rather than scattering it across unrelated projects.

As long as you respect the NO_COLOR variable, it will work for me.

https://no-color.org/
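
For anyone implementing it, a minimal sketch of the check in C. The spec only covers NO_COLOR; the isatty() fallback is a common extra convention I'm adding here, not part of the spec:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    static int use_color(void) {
        /* Per https://no-color.org/: disable color whenever NO_COLOR
           is present with any non-empty value, whatever that value is. */
        const char *nc = getenv("NO_COLOR");
        if (nc != NULL && nc[0] != '\0')
            return 0;
        /* Not part of the spec, but common: skip color when stdout
           isn't a terminal, e.g. when piped to a pager or tee. */
        return isatty(STDOUT_FILENO);
    }

    int main(void) {
        if (use_color())
            printf("\x1b[32mok\x1b[0m\n");
        else
            printf("ok\n");
        return 0;
    }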


That's a funny example. I have no issues with the idea, of course, but in my day-to-day life I'm way more likely to encounter an issue with colors being lost after output is sent to a pager or a log file or tee or whatnot.

Why would we argue if the machine is better at knowing what's worth doing? Why wouldn't we ask the machine to decide, and then do it?

There are infinite things worth doing; a machine's ability to actually know what's worth doing in any given scenario is likely on par with a human's. What's "worth doing" is subjective; everything comes down to situational context. Machines cannot escape the same ambiguity as humans. If context is held constant, I would assume overlapping performance on a pretty standard distribution between humans and machines.

Machines lower the marginal cost of performing a cognitive task for humans; it can be extremely useful and high-leverage to offload certain decisions to machines. I think it's reasonable to ask a machine to decide when the machine's context is richer and the outcome is de-risked.

Human leverage of AGI comes down to good judgement, but that too is not uniformly applied.


For what human leverage of AGI may look like, look at the relationship between a mother and a toddler.

As you said: There's an infinite number of things a toddler may find worth doing, and they offload most of the execution to the mother. The mother doesn't escape the ambiguity, but has more experience and context.

Of course, this all assumes AGI is coming and will be superintelligent.


Why would we let a machine decide what's worth doing? In what way could its decisions be better? Better for who?

Well, because people are lazy. They already ask it for advice and it gives answers that they like. I already see teams using AI to put together development plans.

If you assume superintelligence, why wouldn't that expand? Especially when it comes to competitive decisions that have a real cost when they're suboptimal?

The end state is that agents will do almost all of the real decision making, assuming things work out as the AI proponents say.


It's not the syscalls. There were only 300,000 syscalls made. Entering and exiting the kernel takes 150 cycles on my (rather beefy) Ryzen machine, or about 50ns per call.

Even assuming it takes 1us per mode switch, which would be insane, you'd be looking at 0.3s out of the 17s for syscall overhead.

It's not obvious to me where the overhead is, but random seeks are still expensive, even on SSDs.
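
If you want to sanity-check the per-call number on your own machine, here's a rough microbenchmark sketch (Linux-only, my own illustration; raw syscall(SYS_getpid) is used so each iteration is a real kernel round trip):

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void) {
        enum { N = 300000 };  /* same order as the 300,000 calls above */
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < N; i++)
            syscall(SYS_getpid);  /* near-trivial syscall: mostly
                                     measures kernel entry/exit cost */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9
                  + (t1.tv_nsec - t0.tv_nsec);
        printf("%.0f ns total, %.1f ns/call\n", ns, ns / N);
        return 0;
    }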


Didn't test, but my guess is it's not “syscalls” but “open,” “stat,” etc.; “read” would be fine. And something like “openat” might mitigate it.
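
Untested against the workload in question, but a sketch of the openat() idea: resolve the directory once, then open entries relative to that fd so the kernel skips re-walking the shared path prefix. Paths and file names here are hypothetical:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        /* Open the (hypothetical) directory once... */
        int dirfd = open("/data/lots-of-files", O_RDONLY | O_DIRECTORY);
        if (dirfd < 0) {
            perror("open dir");
            return 1;
        }

        /* ...then open entries relative to it. Each openat() call
           skips re-resolving the "/data/lots-of-files" prefix. */
        const char *names[] = { "a.txt", "b.txt", "c.txt" };
        for (int i = 0; i < 3; i++) {
            int fd = openat(dirfd, names[i], O_RDONLY);
            if (fd >= 0)
                close(fd);
        }
        close(dirfd);
        return 0;
    }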



I'm sympathetic to that argument, but to invoke it you have to argue why the anti-fraud measures outweigh the benefits, not just drop a link to it. Moreover, that's giving too much credit to the OP, who doesn't even recognize there's some sort of a trade-off, only that "fool and their money is soon departed".


I guess it's not a bad Brexit.


Pretty wild that they picked that as their name AFTER Brexit began.


there are no good brexits, bro. god promise.

