scottbez1's comments

Very different standards - in its current form of emergency autoland, it just needs to be proven to result in equal or better outcomes than a plane with no rated pilot onboard; the best case there is another person who knows how to use the radio and can listen to instructions, but the more likely case is burning wreckage when the pilot is incapacitated.

To always autoland, it needs to be as good as a fully trained and competent pilot - a much higher standard.


Latency makes this hard even with local connections; doing it from offshore is essentially impossible due to physics.

And I believe Waymo remote access only allows providing high-level instructions (like pull over, take the next right, go around this car, etc.) precisely because full direct control over a link with high and variable latency is very hard/dangerous.

And in an emergency situation you're likely to have terrible connectivity AND high-level commands are unlikely to be sufficient for the complexity of the situation.


Yeah, the correlated risk with AVs is a pretty serious concern. And not just in emergencies, where they can easily DDoS the roads - even things like widespread weaknesses or edge cases in their perception models can cause really weird and disturbing outcomes.

Imagine a model that works really well for detecting cars and adults but routinely misses children; you could end up with cars that are 1/10th as deadly to adults but 2x as deadly to children. Yes, in this hypothetical it saves lives overall, but is it actually a societal good? In some ways yes; in other ways it should never be allowed on any roads at all. It's one of the reasons aggregated safety metrics are so important to scrutinize.


This doesn't seem that crazy to me - a broadly applicable, coordinated OTA zero-day applied across cars during US rush hour could plausibly result in hundreds of thousands of deaths in a few hours if safety-critical systems like airbags can be tampered with or inhibited by OTA-capable systems.

The scale of car travel plus the inherent kinetic energy involved make a correlated risk particularly likely to lead to a mass casualty event. There are very few information system vulnerabilities with that magnitude of short-term worst case outcome.


Sure but you could just nuke us too, given that the response to a mass civilian death event would be the same. Same reason the US would be foolish to destroy the Three Gorges Dam.

It doesn't need to be a mass civilian death event. They can wait, collect data, and kill 90% of our most important soldiers, heads of state, spies, and everyone needed to maintain critical sectors of our economy. They could kill everyone who is anti-China. They could kill all the members of one political party (any one) as a false flag and cause a civil war.

Surveillance technology is necessarily selective, so these "all or nothing" hypotheticals do not apply.

See also "slaughterbots". https://www.youtube.com/watch?v=O-2tpwW0kmU


Again, they could just nuke us. Because if they did what you're suggesting, we would absolutely nuke them in response.

How would we know who did it? As I said earlier, it could be a false flag attack triggering a civil war, or a war with another mutual enemy.

China could kill every anti-Russian politician with robots and start a nuclear shootout between the US and Russia.


Nonsense - if that's the goal, the countries are at war and you have to worry about nukes, not your car being switched off.

I'd expect the HN crowd to be smarter than to fall for nonsense security propaganda, yet it seems to work.


There was already a million-vehicle recall for a vulnerability that allowed remote control of safety features (steering/braking/acceleration) and could be abused by anyone with a Sprint mobile SIM.

https://static.nhtsa.gov/odi/rcl/2015/RCRIT-15V461-4869.pdf


.... and the second US civil war starts up and one side has hacked into the automobile kill switches ...

"security" and "war" come in all sizes and shapes. Even inter-national warfare can be of the "cold" variety, in which nobody is nuking anybody else, but making automobiles randomly unreliable could be extremely effective (for a while, anyway).


Not really convinced by your argument. If you want to achieve that scenario, you just take a sysadmin from the Tesla Shanghai plant; the next time they visit the US HQ, they gain access to a coworker's laptop and deploy an OTA update to the Tesla fleet. And this is assuming that the Tesla OTA update deployment mechanism is actually separated between countries and not simply accessible from the Tesla intranet.

No need to design & ship another low-cost car model for this.


Flashing can be easy, sure. Compiling that binary, including library management, is not, unless you're using something like MicroPython. CMake is not hobbyist/student-friendly as an introductory system. (Arduino isn't either, but PlatformIO with the Arduino framework IS! RPi refuses to support PlatformIO, sadly.)

Arduino took over for 3 reasons: a thoughtful and relatively low cost (at the time) development board that included easy one-click flashing, a dead-simple cross-platform packaging of the avr-gcc toolchain, and a simple HAL that enabled libraries to flourish.

Only the first item (and a bit of the second) is really outdated at this point (with clones and the ESP32 taking over as the predominant hardware), but the framework is still extremely prominent and active even if many don't realize it. ESPHome, for example, still generally uses the Arduino HAL/framework, enabling a wide library ecosystem, even though it uses PlatformIO under the hood for the toolchain.

Even folks who "don't use Arduino any more" and use PlatformIO instead are often still leveraging the HAL for library support, myself included. Advanced users might be using raw ESP-IDF, but the ESP-IDF HAL has had a number of breaking API changes over the years that make library support more annoying, unless you truly need advanced features or more performance.
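
To give a sense of how low the barrier is, a complete PlatformIO project config is just a few lines - something like this sketch (the board, library, and baud rate are only examples, swap in whatever you actually use):

    [env:esp32dev]
    platform = espressif32
    board = esp32dev
    framework = arduino
    lib_deps = fastled/FastLED
    monitor_speed = 115200

That one file covers the toolchain, board selection, and library management you'd otherwise be hand-rolling.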


CMake doesn't spark joy, but it's not something you need to touch constantly. I figured out how to set up a basic CMake file, and now I mostly only touch it to set the project name, add or remove modules, etc.

It's been a while since I used Arduino, but I remember having a harder time setting up a workflow that didn't require me to touch the Arduino IDE.
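
For a sense of scale, a bare-bones CMakeLists.txt is only a handful of lines - this sketch assumes the Raspberry Pi Pico SDK, adjust for whatever you're actually targeting:

    cmake_minimum_required(VERSION 3.13)
    include(pico_sdk_import.cmake)  # helper copied in from the Pico SDK
    project(blink C CXX ASM)
    pico_sdk_init()
    add_executable(blink main.c)
    target_link_libraries(blink pico_stdlib)
    pico_add_extra_outputs(blink)   # also emits a drag-and-drop .uf2

After the initial setup, the project name and the target_link_libraries line are about the only things that ever change.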


None of those bullet points are contradictory though?

They are all completely aligned with a policy of reducing non-essential public exposure, with a tiered approach for transport that limits public exposure where better alternatives exist.


> tiered approach for transport that limits public exposure where better alternatives exist.

Travelling to a testing center almost certainly puts me in proximity to COVID patients at some point, particularly in the waiting room. Sitting in waiting rooms is how I got the case the city notified me of in the first place.

Cycling and exercise are not only solitary activities, they are also conducted outdoors, where we were told the risk of airborne transmission was de minimis. They in fact limit public exposure more than travelling to a public place full of probable COVID patients.


Lock-in usually just refers to a situation where switching costs are (perceived to be) higher than the net benefit, within some reasonable payoff period. It can include things like high cost to extract data, but it can also include things like network/social effects.

The latter is a huge reason companies strive to establish "platforms" and suites of connected apps - even if competition is cheaper/better in a vacuum, it still may not be worth the effort to switch if you're already established within an ecosystem. The goal is vendor lock-in even if they're not holding your data hostage (though they might do that too).


Yeah, I agree with that. I didn’t mean to imply that it had to be impossible to move your data.

However, I do think that it has to mean something besides "there are no other good providers of a service". Integrations, platforms, etc. make sense as being "locked in", but not "no one else provides the service".

To me, the key would be, “if you were starting from scratch and weren’t using any service at all, would you choose a different one than what you actually currently use?”

If the answer is “I would still choose the one I am using”, then I don’t think that is locked in.


Dropbox recently broke (accidentally or intentionally) hosted images/thumbnails from their Paper docs product (which they're quietly but noncommittally killing off), and that was a good wake-up call for me to stop trusting hosted storage. And I'm saying this as a former Dropbox engineer of ~6 years who has plenty of free Dropbox storage for life. The brain drain and profitability crunch is real.

Recently bought a 14TB HDD and downloaded my entire Dropbox, Google Photos, and Lightroom photos. Planning to set up an off-site copy as well, and will probably build out a proper NAS within a few years.


Dropbox is just hard to trust. I used to pay for it, until they suddenly decided to shut down the photos app. Then the password app, recently. I don't expect Apple to shut down Photos any time soon, so I find it easier to trust them with my data.


On the topic of small LED panels, Jason of Evil Genius Labs has been making some really small LED panels [0] with addressable 1mm x 1mm LEDs (yes, individually addressable AND only 1mm on each side!). Fitting 128 onto a 1" circle is pretty sweet.

I keep meaning to design some PCBs with them [1] but it's too far down my ever-growing list of projects to see the light of day...

[0] https://www.evilgeniuslabs.org/one-inch-fibonacci128

[1] https://www.lcsc.com/product-detail/C5349953.html


Kingbright recently released 01005-sized (0.45mm x 0.25mm x 0.2mm) LEDs, which AFAIK are among the smallest easily available. One neat idea would be to pack those onto a DIP14-sized PCB, making a tiny character display. I guess something like a 5x7 or 6x8 matrix could be doable, with a small MCU to drive them.

For those 1mm addressable RGB LEDs, I've been thinking about how you could do cool cyberpunk looks by stringing them on some hair-thin magnet wire and sticking them on your body/face/hair/etc. Blend them in with some latex or something if needed. You'd just need to hide the controller/battery somewhere.


Ah, the joy of being a Brit. That first link is just full of purple rectangles containing the text "content not viewable in your region" (Imgur).

I'm feeling safer already. sigh.

Looks like a fun site though, I'll take a look when I'm not on my work computer.


Those are just really small WS2812s; I get them at 160 LEDs per meter on strips that cost 5 bucks per meter. I just mean to say they're cheaply available in many form factors. I use them a lot in cosplay clothing.



Those look like normal 2.5-3mm LEDs, which is a big difference from 1mm² LEDs. The circular disc from OP's link has 2.5x higher LED density, and they could probably be packed even more densely in a grid.


Looks like they're WS2812B-2020, like on this PCB:

https://www.wemos.cc/en/latest/d1_mini_shield/8x8_rgb.html


BTW: The 2020 are super bright, but the 1010 not so much.


Why are you doing this to me


The last point I think is most important: "very subtle and silently introduced mistakes" -- LLMs may be able to complete many tasks as well as (or better than) humans, but that doesn't mean they complete them the same way, and that's critically important when considering failure modes.

In particular, code review is one layer of the conventional swiss cheese model of preventing bugs, but code review becomes much less effective when suddenly the categories of errors to look out for change.

When I review a PR with large code moves, it has historically been relatively safe to assume that a block of code was moved as-is (sadly only an assumption, because GitHub still doesn't have indicators of duplicated/moved code like Phabricator had 10 years ago...), so I can focus my attention on higher-level concerns, like whether the new API design makes sense. But if an LLM did the refactor, I need to scrutinize every character that was touched in the block of code that was "moved", because, as the parent commenter points out, that "moved" code may have actually been ingested, summarized, then rewritten from scratch based on that summary.

For this reason, I'm a big advocate of an "AI use" section in PR description templates; not because I care whether you used AI or not, but because some hints about where or how you used it will help me focus my efforts when reviewing your change, and tune the categories of errors I look out for.
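
It doesn't have to be elaborate - even a small checklist like this in the template (just a sketch, adapt it to your team) is enough to steer reviewer attention:

    ## AI use
    - [ ] None
    - [ ] Autocomplete / small snippets only
    - [ ] Larger generated or refactored sections (list files/areas below)
    Notes for reviewers - what was generated, and what deserves extra scrutiny: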


I think we need better code review tools in the age of LLMs - not just sticking another LLM on top of the PR to do a code review.

They need to handle the large diffs LLMs produce clearly - anyone have any ideas?


I was about to write my own tool for this but then I discovered:

   git diff --color-moved=dimmed-zebra
That shows a lot of code that was properly moved/copied in gray (even if it's an insertion). So gray stuff exactly matches something that was there before. Can also be enabled by default in the git config.
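
For reference, the config version is:

    git config --global diff.colorMoved dimmed-zebra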


I would love it if GitHub implemented this in their UI! There's an issue: https://github.com/orgs/community/discussions/9632


I used autochrome[0] for Clojure code to do this. (I also made some improvements to show added/removed comments, top-level form moves, and within-string/within-comment edits the way GitHub does.)

At first I didn't like the color scheme and replaced it with something prettier, but then I discovered it's actually nice to have it kinda ugly - it makes it easier to spot the diffs.

[0] https://fazzone.github.io/autochrome.html


That's a great solution and I'm adding it to my fallback. But also, people might be interested in diff-so-fancy[0]. I also like using batcat as a pager.

[0] https://github.com/so-fancy/diff-so-fancy


Perfect. This is why I visit this website


Thanks:)


I personally agree with you. I think that stacked diffs will be more important as a way of dealing with those larger diffs.


Yep, this pattern of LLMs reviewing LLMs is terrifying to me. It's literally the inmates running the asylum.


When using a reasonably smart LLM, code moves are usually fine, but you have to pay attention whenever uncommon words (like URLs or numbers) are involved.

It kind of forces you to always put such data in external files, which is better for code organization anyway.

If it's not necessary for understanding the code, I'll usually even leave this data out entirely when passing the code over.

In Python code I often see Gemini add a second h to a random header file extension. It always feels like the LLM is making sure I'm still paying attention.

