Hacker News | cduzz's comments

My question about the safety of "FSD" or any other autonomous car system is: how much safety reputation can you burn in a new market that offers huge convenience? Which is more important, being first or being best?

Apple's famous for not being first, but for being best -- arriving late to a well-established market and cleaning up.

People certainly don't remember de Havilland, which was first to market with a passenger jet airliner.

I certainly won't be putting my kids in any of these, and especially not in a vehicle operated by a company with a "devil may care" approach to safety.

Sure, it works fine in Arizona or Texas; let's see it work okay in a snowstorm in Boston.


I haven't used minio in years, and when I did I only fiddled around with it, but my recollection of it is that it's about the simplest build chain imaginable. Install modern golang, build minio, get single binary.

Anyone relying on an open source tool like minio needs to look at:

  * organization supporting it
  * the license
  * the build chain
  * who else uses it?
  * the distribution artifact needed for production.  
Once you've looked at those, you can decide: "is this an anchor I want to handcuff myself to and hope it won't jump into the icy blue deep, taking me and my dreams with it?"

If the org behind it ever decides to rugpull/elastic you, what're you gonna do? At least with something like minio, if they're still distributing the source it's trivial to build (and if you can't build it you should evaluate if you're in a position to rely on it).

Let's look at other cool open source things like SigNoz, which (as far as I remember, anyhow) distribute only docker artifacts -- if they were to rugpull that, people relying on it would be totally lost at sea.

This isn't to say it isn't poor behavior on minio's part, but I feel like they've been signaling for a while that they're looking to repay their VC patrons.


They have also removed the web UI and stopped updating the documentation for the community edition. The former is not extremely serious, as the community can easily replace it. The latter is arguably the worst among all the changes that we know of. While they do redirect the community documentation towards its enterprise counterpart, it's becoming clear that the places where the community edition differs won't be covered at all. That will make the MinIO community edition less viable over time.

Overall, it's pretty clear that they don't view the OSS users kindly or want them around. I'm pretty sure that they would drop the entire community edition if they could do so legally and without much fuss. You can expect more like this in the future. So this story shouldn't be seen simply as the loss of a docker image.


Right -- I think it's quite clear that if you're relying on the free minio you need to look elsewhere or peer up with some others and fork it.

And any adoption of a critical piece of software needs to have a risk calculus associated with it of "what if they get bought by CA, invaded by Russia and murdered, murder their wife and go to jail, or dedicate their remaining time on earth to writing haiku?"

Both open source software and commercially supported software have risks and mitigations. I'd argue that you're actually safer with open source software since you can pick up and keep running it, but that's not a trivial undertaking.


> I'd argue that you're actually safer with open source software since you can pick up and keep running it, but that's not a trivial undertaking.

I agree with that. It's just that I find it very annoying that these companies turn against the OSS (user) community after they've gained enough market share by taking advantage of the community's trust and network. This discussion thread itself is full of people calling the users 'entitled'. That's some level of gaslighting! The real question is, how much would these projects have succeeded if they had started under the same terms as the ones they've now switched to? If the answer is 'not very much', then that means the community added significant value to the product, which these companies are now refusing to share and running away with. These companies are the entitled ones, besides being deceptive and dishonest.

The case with MinIO is not as egregious as others we have seen -- Elastic, for example. MinIO is still under an open source license. But their decisions to let the community edition documentation rot and to remove the web UI make it very clear that they're trying to make the community edition as unviable as possible without having to take the heat for going all-out proprietary or source-available. Does this tactic seem familiar? This is exactly what Google does with AOSP: slowly remove and replace its OSS parts with proprietary software and gradually kill the project. Again, it's deceptive, dishonest and distasteful.

Both free software and open source software have a tradition of not excluding anybody from participating in the process, community and contributions. But looking at how much certain companies damage the trust and fracture the community for some extra profit, it might be a good idea to start asking if they should even be given the opportunity to do so.


> If the org behind it ever decides to rugpull/elastic you

I love it that you use "elastic" as a verb here.


That certainly seems reasonable -- that the immune system needs practice, or otherwise it will start using its ammo on "hey, that's me!" stuff and cause auto-immune diseases.

But I also have to wonder if the kids with auto-immune diseases or "common" allergies elsewhere might just die the first time they encounter some event that'd otherwise be caught and treated in "the first world"?


So they say:

Data Center end of life will take place on March 28, 2029

Data Center subscriptions and any associated Marketplace apps will expire on this date. To make this transition as smooth as possible, we’re winding down support in phases across the next 3 years, giving you plenty of time to plan your next steps.

I guess nobody uses their on-prem system anyhow?


On prem is ~1.5B in revenue for them. They're making a bet that their financials will improve if enough of that revenue moves to cloud spend even if they lose some folks who want or must have on prem. If you can't grow the TAM, you force the growth apparently.

https://news.ycombinator.com/item?id=45177972 by u/eastbound


It's pretty easy to run docker on macos -- colima[1] is just a brew command away...

It runs qemu under the hood if you want to run x86 (or sparc or mips!) instead of arm on a newer mac.

[1]https://formulae.brew.sh/formula/colima


As a bit of hair splitting, one can choose to use qemu or Virtualization.framework https://lima-vm.io/docs/config/vmtype/vz/ (I'm aware that's a link to Lima docs, but ... <https://github.com/abiosoft/colima/blob/v0.8.4/config/config...>)


> colima[1] is just a brew command away...

Which would be great if it worked reliably, or had any documentation at all for when it breaks. But it doesn't and it doesn't.


First, I guess I'll just invoke Sturgeon's law[1] -- almost all software, especially if you don't really understand it, is crap, and probably the software you understand is also crap; you're just used to it. Good software is pretty tricky to make.

But second -- I use colima lots, on my home macs and my work macs, and it mostly just works. The profiles stuff is kinda annoying and I find myself accidentally running arm when I want x86, or other tedious config issues crop up. But it actually has been easier to live with than docker desktop where I'd run out of space and things would fall apart.

Docker on macOS is broadly going to work poorly relative to Docker on Linux, just from having to run the docker stuff in a Linux VM that's hiding somewhere behind the scenes.

If you find too much friction with any of these, probably it's easier to just run a linux vm on the mac and interact with docker in the 'native' environment. I've found UTM to be quite a bit easier to live with than virtualbox.

[1] https://en.wikipedia.org/wiki/Sturgeon%27s_law


> almost all software, especially if you don't really understand it, is crap, and probably the software you understand is also crap, you're just used to it. Good software is pretty tricky to make.

Most software has issues, but Colima is noticeably worse than most software I've used. And the complete lack of documentation is definitely not normal.


No escape key? No problem! Ctrl-[ to the rescue. If you really want to puzzle the kids, let them loose on an old hpux or irix box where # is remapped to backspace and @ is remapped to kill...

That's a lovely terminal; makes me wish I'd held onto my vt220 just to show the kids all the weird things people used to have to put up with. I remember there was something deranged about its keyboard that I eventually came to not really mind all that much. Eventually I spent years of my life in front of a decstation 3100 and I think I eventually got used to the strange layout.
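
If anyone wants to poke at this from a script rather than with stty, the erase and kill characters are just termios settings. Here's a rough Python sketch (the "--old-sysv" flag and the rest of the scaffolding are mine, purely for illustration) that prints the current characters and can put back the old sysv-style defaults this subthread is about:

    #!/usr/bin/env python3
    # Inspect (and optionally set) the terminal's erase/kill characters --
    # the same settings those old sysv boxes defaulted to '#' and '@'.
    import sys
    import termios

    fd = sys.stdin.fileno()
    attrs = termios.tcgetattr(fd)  # [iflag, oflag, cflag, lflag, ispeed, ospeed, cc]
    cc = attrs[6]

    print("erase:", repr(cc[termios.VERASE]))  # typically DEL or ^H these days
    print("kill: ", repr(cc[termios.VKILL]))   # typically ^U these days

    if "--old-sysv" in sys.argv:
        # The defaults this thread is reminiscing about: '#' erases, '@' kills.
        cc[termios.VERASE] = b"#"
        cc[termios.VKILL] = b"@"
        termios.tcsetattr(fd, termios.TCSANOW, attrs)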


Did you ever use those settings on an old printer terminal? Visible but unusual punctuation for erase and kill made sense with paper terminals, where back space was as non-destructive as forward space was.

(Amusingly, TianoCore makes BS destructive, and that breaks the spinners displayed by the NetBSD boot loader.)

One other amusing thing is that the LK201 et al. are RS423 at 4800 BPS 8N1, so with suitable voltage-level-converting adapters should be easier to plug in to modern SBC kit than a (non bimodal) PS/2 keyboard is.


Oh, I understand why those old-time sysvr3.2 and sysvr4 systems would default to # backspace / @ kill -- even in the early 90s most places that ran unix had a weird zoo of different keyboards that would randomly put the backspace (or delete, or both!) keys in weird and inspired places, but the @ and # are usually in the same place everywhere. There were also some really old terminals that were basically just "screen is printer without paper" without even vt100 emulation mode (maybe they had fancy termcaps that'd let you run vi, but we only ever used them on servers that weren't actually used too often, in the server room).

To this day, I use a 1990s vintage PS/2 keyboard, with a chain of adapters, on my mac (an old IBM M4-1 keyboard/trackpoint thing). At least on the mac it works perfectly because you can remap the caps lock key to command; it works pretty poorly on windows, but such is life. Also, I very often, even today, reach for the # key as something akin to "kill" -- except in modern bash in vi mode, if you're in escape mode it comments out the whole line instead.

But woof, watching people who'd never interacted with real (and old) sysv-derived unixes instantly going insane trying to type things with @ or # and not understanding what's going on... kids, that's what everyone had to fight with in the bad old days...

EDIT: and -- in old times, "backspace" and "delete" were actually different keys; bash and other modern shells hide this from you (just as newline and carriage return were different actions) -- I guess learning how to type on a mechanical typewriter, where you make the ! glyph with a ' and a backspace and a ., and where 1 and l were the same glyph, hopelessly burned the physicality of character rendering into me...


Heh! GNU groff, in particular its grotty driver, still does that to this day. There is a lengthy list of things that in ASCII or Latin-1 modes it composes with a character, a BS, and then another character.

https://github.com/jdebp/nosh/blob/trunk/source/UnicodeKeybo...


> it works pretty poorly on windows but such is life

You can use SharpKeys to remap keys on Windows.


I don't think you ever get enough of a temperature difference just by having a passive black pipe in the sun to do any useful work, besides potentially keeping someone warm in the winter. You could do useful work if you concentrated the energy from the sun somehow, like with mirrors.

Heat pumps do magic by changing the pressure at which a working fluid changes phase, so you can boil the fluid over here, have it absorb an enormous amount of energy, then compress it back to a liquid elsewhere and push that heat back out -- this works pretty well because you're just moving the heat and only pushing the temperature on the "hot" side up a relatively small amount. I don't think, for instance, you could make an oven with heat pumps.

To do useful work you need a _substantial_ energy gradient -- it's hard to live in the sun even though it's got lots of free energy floating around. The sun is very useful to the earth because the energy it provides is so much more energetic than the ambient environment.

Edited to add:

There are discussions of using exotic working fluids like compressed CO2 -- that'd allow you to manage the phase change maybe to a region where you could concentrate the energy in the fluid then expand it elsewhere at "room temperature" temperatures -- but I think things like compressed (to a _fluid_) CO2 are really hard to work with.
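
To put rough numbers on the "small lift" point above -- this is just my own back-of-the-envelope sketch using the idealized Carnot bound, and real machines land well below it:

    # Idealized Carnot COP for heating: COP = T_hot / (T_hot - T_cold), in kelvin.
    # The bound falls quickly as the temperature lift grows, and real working
    # fluids give out long before oven temperatures anyway.
    def carnot_heating_cop(t_cold_c: float, t_hot_c: float) -> float:
        t_cold = t_cold_c + 273.15
        t_hot = t_hot_c + 273.15
        return t_hot / (t_hot - t_cold)

    print(carnot_heating_cop(0, 35))    # space heating: ~8.8 units of heat per unit of work
    print(carnot_heating_cop(20, 230))  # hypothetical "oven": ~2.4, before real-world losses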


I wonder if there's some business model like a mixture of send-cut-send and TSMC where a "FAB" agrees to stamp out 3000 fenders/doors/roofs and ship them to the customer (who then puts together the cars and paints them and such).

This is similar to what lotus did to help bootstrap tesla...

And hey, maybe tesla's going to have some spare capacity lying around so they could be that FAB... ?

I personally really want this truck to succeed. I'd happily trade in my 10 year old model S for this; it'd make dump runs and trips to the garden / home centers a lot easier than in the S...

I do wish they'd go full eccentric and use a citroen inspired oil suspension...


Press tools for car body panels are extraordinarily expensive, which is why low-volume manufacturers generally avoid using pressings wherever possible. It's just inherently very expensive to carve two huge blocks of steel into a smooth curved shape, so you need to sell an awful lot of units to amortise that cost. Tesla's deal with Lotus only worked because they used a fibreglass body - expensive per-unit, but very low tooling costs.
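
To put some purely illustrative numbers on the amortisation point (the dollar figure below is made up for the sake of arithmetic, not a real tooling quote):

    # Hypothetical amortisation arithmetic -- the tooling cost is a placeholder;
    # the point is how heavily volume dominates the per-unit tooling cost.
    tooling_cost = 5_000_000  # pretend cost of one set of press dies
    for units in (3_000, 30_000, 300_000):
        print(f"{units:>7} units -> ${tooling_cost / units:,.0f} per car for tooling alone")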

Desktop Metal are developing a sheet forming solution that requires no bespoke tooling, but it's a slow process with fairly poor surface finish.

https://www.youtube.com/watch?v=6oqeVLILGHY


Right -- this is why I use the analogy of TSMC -- chip fabs are also extremely expensive, for similar reasons.

What are the relative costs of making the die set, the press, setting up and doing a run of stampings, and the facility and employees to actually house the whole kit?

As of right now, if you need to make a car and you don't have a NUMMI or similar retired automotive plant sitting around, it's going to be expensive.

What about the hydroforming process?

I guess smaller car makers from the 60s that did make low volume sheet metal cars didn't need to pass crash tests.

Probably the Telo people should just team up with Ineos...


The process of going from a 3D model -> dies -> panels that match the model is iterative. Tesla apparently cheaped out on that optimization path, and they had lots of issues toward the rear left corner of cars, somehow on both Model S and X.

ODMs like Magna-Steyr and Valmet also exist. They take your plan, build some, and send them to you on a ship from Central Europe.


Except the fab pays the tooling cost once per process, while the tool and die company charges once per design.

I've actually worked a little on hydroforming, but unless you're thinking of a different kind, it was labor intensive and prone to crinkling in bad spots. We basically concluded that we used less time and got a better result with an English wheel -- which would probably run at least $10k for a car body, if you could find someone willing to work for that low an hourly rate.


Again, I'm confident that this is not a viable business, and certainly in no position to make it a viable business even if it was...

But places that make windshields keep forms around and make runs of windshields even for low volume cars; obviously there's less recurring need for fenders or floor pans... But there may be some way to financially engineer a "do a run, we pay X per unit; hopefully that results in doing another run at X, and if we can get to 100 times the volume the per-unit costs go down to some other target" arrangement.

But I was thinking, perhaps they should just make these cars out of ground-up trabants or saturns. Tesla has demonstrated that some customers don't actually care about perfect finishes or gaps or whatnot; I certainly would be fine with a car made out of trabant, especially if it meant I didn't have to worry about dents...


Isn't that exactly what the automotive component suppliers (Magna, Bosch, Denso, Aisin, etc) do?


I mean, isn't that literally just a supplier?


Car companies tend to make the engines and the sheet metal body parts themselves; the tool and die setup to stamp out the sheet metal is enormously capital intensive and there aren't services to stamp up a bunch of sheet metal for you.


I wouldn't say "super computer" chip (or at least "supercomputer")...

I've read the definition of a supercomputer as "some computer that takes domain-specific, compute-bound problems and turns them into IO-bound problems." (Implicit in that statement is the second statement: "and they have a ton of IO, too.") They're not really general purpose computers, and likely you'd be able to use risc-v or anything else, with specialized hardware, as the basis of a "supercomputer".

The Power platform, on the other hand (I'll create a new word here), is a foundation for a "SuperEnterprise(tm)" computer.

Power has insane / awesome things like "oh, you can use the ECC bits for ECC and hardware memory tagging"[1]. Eventually such things may trickle down to things like ARM or RISC-V, but they're pioneered at the top of the enterprise mountain first...

[1]https://www.devever.net/~hl/ppcas


I guess I was referring to the fact that Power ISA chips were dominant at one time (prior to the rise of x86 chips) in the TOP500 listings:

https://en.wikipedia.org/wiki/TOP500#/media/File:Processor_f...

They have faded from that niche over time. I do not know where they excel now.


Technically almost all homes have a wonderful energy storage system already -- their hot water heater tank.

One can imagine a setup where you've got a hot water tank and a mixing valve that allows you to heat your water up to some very high temperature and then mix it down to "safe" hot water for the house. Have that run as "heat from the grid if below this threshold; otherwise conditionally heat with surplus energy if the water's below this higher temperature."
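
A minimal sketch of that control logic, in case it helps -- every name and threshold here is made up for illustration, and a real controller obviously needs actual sensor and relay plumbing:

    # Toy dual-setpoint controller: keep the tank above a comfort floor using
    # grid power, and opportunistically bank surplus (e.g. solar) energy as
    # extra-hot water up to a storage ceiling. A thermostatic mixing valve
    # downstream tempers the output back to a safe delivery temperature.
    COMFORT_FLOOR_C = 50     # never let the tank drop below this
    STORAGE_CEILING_C = 85   # how hot we're willing to store when energy is surplus

    def heater_command(tank_temp_c: float, surplus_watts: float) -> str:
        if tank_temp_c < COMFORT_FLOOR_C:
            return "heat_from_grid"
        if surplus_watts > 0 and tank_temp_c < STORAGE_CEILING_C:
            return "heat_from_surplus"
        return "idle"

    # Tank is warm enough for showers and solar is exporting -> soak it up.
    print(heater_command(tank_temp_c=62.0, surplus_watts=1500.0))  # heat_from_surplus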

