masterj's comments | Hacker News

I suspect that for a large number of orgs, accepting over-provisioning would be significantly cheaper than the headcount required for a more sophisticated approach, while allowing faster movement due to lower overall complexity.

Thank you for this link! I've been wondering if it is possible these days to replace Fusion for my workflow, and this is exactly what I need to see


Modern geothermal is dispatchable. It's a really good complement to wind and solar: https://climateinstitute.ca/safe-bets-wild-cards/advanced-ge...


But is it usefully dispatchable? Nuclear can be made dispatchable but it's not usefully dispatchable because the costs are fairly similar whether the plant is on or off.

Like nuclear, I believe geothermal has high capital cost and low running costs, suggesting that it isn't usefully dispatchable.

But that's too simplistic. A big limitation of geothermal is that rock has poor thermal conductivity. So once you remove heat it takes a while for it to warm up again. If you're running it 100% then you need a large area to compensate. OTOH, if you're running it at a lower duty cycle you likely need less area.

So if you know the duty cycle in advance, then you can likely significantly reduce costs. Yay!

But that also means that you likely can't run a plant built for low duty cycles continuously for two weeks during a Dunkelflaute. It's likely great for smoothing out daily cycles, but not as good for smoothing out annual cycles. That means it's competing against batteries, which are also great for smoothing out daily cycles and are very inexpensive.


> I believe geothermal has high capital cost and low running costs

Higher capital costs, but not nuclear high capital costs.

> That means it's competing against batteries, which are also great for smoothing out daily cycles, and are very inexpensive.

It likely would supplement batteries rather than compete against them. A battery buffer would allow a geothermal plant to slowly rise to load and fall as that load goes away.

A very large battery can store 200 MWh of energy. The largest geothermal plant produces 1.5 GW (a lot of the large plants look like they are in the range of 100-200 MW). Presumably those plants can run for more than a few hours, which ultimately decreases the amount of batteries needed to smooth out the demand curve.


A very large battery storage site, like the top 10 currently running, has an order of magnitude more energy storage than you suggest.

The largest under construction, due to go live in 2027, adds another order of magnitude: 19,000 MWh, and will deliver up to 1,000 MW.

Things are changing fast as battery prices drop and experience accumulates.


Is that correct: only 1 MW of power but 19,000 MWh of storage? That would take over two years to drain.


Well spotted, I've corrected to 1000MW (or 1GW).

The UAE is aiming for a longer than usual runtime, but only 19 hours, not 2 years.
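
Back-of-envelope, using the figures in this thread:

  19,000 MWh / 1 MW     = 19,000 h  (~2.2 years)
  19,000 MWh / 1,000 MW =     19 h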


That’s a lot of words to admit that geothermal has its place.


That wasn't the conclusion, though. The conclusion was that dispatchable geothermal is competing against daily cycling batteries, a competition it's likely to lose on cost.


Nuclear produces very dangerous substances. The long-term cost of guarding against them for a million years, and the risk that something gets out of control, are extremely high.


Solar doesn't? As far as I know, the process for mining the materials for panels and batteries is the same sulphuric-acid process with extremely toxic tailings, and you get uranium as a byproduct of rare-earth mining. These toxins are orders of magnitude greater in risk and in volume/quantity than processed nuclear fuel waste.


Any substance with a half-life of a million years is giving off very, very tiny levels of radiation.

What you should worry about is half-lives of under a few years.
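
For a fixed number of atoms N, activity scales inversely with half-life:

  A = λN = (ln 2 / t_half) · N

so, all else equal, a million-year half-life means roughly a million times less activity than a one-year half-life.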


Yes, but in a very small amount, and it is nothing we don't know how to manage.

> the risk that something gets out of control are extemly high

Except this is false; you are just spreading misinformation. I suggest you test your current knowledge against different sources and listen to the arguments of the proponents of nuclear energy before you make up your mind. Don't just repeat what you have heard.


In what world are you living that you have not heard about nuclear accidents? Here is a reading list for you:

https://en.wikipedia.org/wiki/Windscale_fire

https://en.wikipedia.org/wiki/Three_Mile_Island_accident

https://en.wikipedia.org/wiki/Chernobyl_disaster

https://en.wikipedia.org/wiki/Fukushima_nuclear_accident

With regard to nuclear waste, here is an example of how it can quickly go out of control:

https://en.wikipedia.org/wiki/Asse_II_mine


Yes I have, and you clearly know nothing about those incidents, else you wouldn't give a laundry list of Wikipedia articles you haven't even read.


"At present, atomic power presents an exceptionally costly and inconvenient means of obtaining energy which can be extracted much more economically from conventional fuels.… This is expensive power, not cheap power as the public has been led to believe." — C. G. Suits, Director of Research, General Electric, who was operating the Hanford reactors, 1951.

Safe, clean, too cheap to meter?

Some things never change.


I am not sure what exactly your point is.

C. G. Suits was right, as long as the digging operation is not too costly (the shallower and more concentrated the deposit, the better).

Fossil fuels are nothing short of a miracle because they are so energy dense, but they are a slow poison with high addictive power.

As long as we didn't (want to) know about negative externalities (chief among them CO2 and CH4) whose cost was borne by humanity, it was ok. Dirty but everyone seemed to think it was worth it.

The advantage of nuclear is not that it would be too cheap to meter (even though that becomes truer with time, because most of the price is upfront investment).

- It is that you can get energy independence even if you don't have uranium, because it is so energy dense that you can just stockpile it. For example, France could run its plants for 2 years on its current stockpile of uranium, and it only recycles around 10% of its fuel. Compare that with its oil needs: the oil stockpile would last only 3 months, probably less.

- It is CO2 free

Bonus: the nuclear industry is required to take care of its waste products (which are only waste products insofar as we are too lazy/cheap to recycle them; otherwise they are just more fuel).


Why don’t you try that, convert the output to OTLP and then write about it?


Given it can output 180 Nm, I expect this thing can get up whatever hill you point it at.


The title seems slightly exaggerated, since by my reading there was no actual $3,000/month bill? Still a great use case.

This seems like a good idea to have plentiful dev environments and avoid a bad pricing model. If your production instance is still on Heroku, you might still want a staging environment on Heroku, since a Hetzner server and your production instance might have subtle differences.


> tap is a data-intensive SaaS that needs to be able to execute complex queries over gigabytes of data in seconds.

> minimum resource requirements for good performance to be around 2x CPUs and 4 GiB RAM

This is less compute than I regularly carry in my pocket? And significantly less than a Raspberry Pi? Why is Fargate that expensive?
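
For reference, at the Fargate on-demand rates I remember (us-east-1 Linux/x86, roughly $0.04048 per vCPU-hour and $0.004445 per GB-hour; check current pricing, these are approximate):

  2 vCPU x $0.04048/h  = $0.0810/h
  4 GiB  x $0.004445/h = $0.0178/h
  total ~ $0.099/h     ~ $72/month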


Cloudflare has Outbound Workers for exactly this use-case: https://developers.cloudflare.com/cloudflare-for-platforms/w...

If these aren't enabled for containers / sandboxes yet, I bet they will be soon


> I bet you could get very very far on a single box,

With single instances topping out at 20+ TB of RAM and hundreds of cores, I think this is likely very under-explored as an option.

Even more if you combine this with cell-based architecture, splitting on users / tenants instead of splitting the service itself.


Single instances are underappreciated in general. There's a used-server reseller near me, and sometimes I check their online catalogue out of curiosity. For only $1000-ish I could have a box a few generations old with dual-socket 32-core chips and 1 TB of RAM. I don't have any purpose for which I'd need that, but it's surprisingly cheap if I did. And things can scale up from there. AWS will charge you per month roughly what it costs to own one of these forever - not counting electricity or hard drives.


I run my entire business on a single OVH box that costs roughly $45/month. It has plenty of headroom for growth. The hardest part is getting comfortable with k8s (still worth it for a single node!) but I’ve never had more uptime and resiliency than I do now. I was spending upwards of $800/mo on AWS a few years ago with way less stability and speed. I could set up two nodes for availability, but it wouldn’t really gain me much. Downtime in my industry is expected, and my downtime is rarely related to my web services (externalities). In a worst case scenario, I could have the whole platform back up in under 6 hours on a new box. Maybe even faster.


What's the benefit of using k3s on a single node?


I'd list these as the real-world advantages:

  * Very flexible, but rigid deployments (can build anywhere, deploy from anywhere, and roll out deployments safely with zero downtime)
  * Images don't randomly disappear (ran into this all the time with dokku and caprover)
  * If something goes wrong, it heals itself as best it can
  * Structured observability (i.e. logs, metrics, etc. are easy to capture, unify, and ship to places)
  * Very easy to setup replicas to reduce load on services or have safe failovers 
  * Custom resource usage (I can give some pods more or less CPU/memory depending on scale and priority)
  * Easy to self-host FOSS services (queues, dbs, observability, apps, etc.)
  * Total flexibility when customizing ingress/routing. I can keep private services private and only expose public services
  * Certbot can issue ssl certs instantly (always ran into issues with other self-hosting platforms)
  * Tailscale Operator makes accessing services a breeze (can opt-in services one by one)
  * Everything is yaml, so easy to manipulate
  * Adding new services is a cake-walk: as easy as creating a new yaml file (see the sketch below), building an image, and pushing it. I'm no longer disincentivized to spin up a new codebase for something small but worthwhile, because it's easy to ship it.
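For example, a minimal Deployment manifest for a hypothetical service might look like this (all names here are made up):

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: my-small-app          # hypothetical service name
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: my-small-app
    template:
      metadata:
        labels:
          app: my-small-app
      spec:
        containers:
          - name: web
            image: registry.example.com/my-small-app:latest  # your pushed image
            ports:
              - containerPort: 8080
            resources:
              limits:
                cpu: 500m
                memory: 256Mi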
All in all, I spent many years trying "lightweight" deployment solutions (dokku, Elastic Beanstalk, CapRover, Coolify, etc.) that all came with the promise of "simple" but ended up being infinitely more of a headache to manage when things went wrong. Even something like Heroku falls short, because it's harder to just spin up "anything", like a stateful service or random FOSS application. Dokku was probably the best, but it always felt somewhat brittle. CapRover was okay. And Coolify never got off the ground for me. Don't even get me started on Elastic Beanstalk.

I would say the biggest downside is that managing databases is less rigid than using something like RDS, but the flip side is that my DB is far more performant and far cheaper (I own the CPU cycles! no noisy neighbors.), and I still run daily backups to external object storage.

Once you get k8s running, it kind of just works. And when I want to do something funky or experimental (like splitting AI bots to separate pods), I can go ahead and do that with ease.

I run two separate k8s "clusters" (both single node) and I kind of love it. k9s (obs. tool) is amazing. I built my own logging platform because I hated all the other ones, might release that into its own product one day (email in my profile if you're interested).


Also running a few single node clusters - perfect balance for small orgs that don't need HA. Been running small clusters since ~2016 and loving it.


Deployments are easy. You define a bunch of yamls for what things are running, who mounts what, and what secrets they have access to etc.

If you need to deploy it elsewhere, you just install k3s/k8s or whatever and apply the yamls (except for stateful things like db).
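
e.g., assuming your manifests live in a k8s/ directory:

  kubectl apply -f k8s/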

It also handles name resolution via service names, restarts, etc.

It's amazing.


Any notes or pointers on how to get comfortable with k8s? For a simple Node.js app I was looking down the pm2 route, but I wonder if learning k8s is just more future-proof.


Use k3s in cluster mode and start doing. Cluster mode uses etcd instead of kine; kine is not good.

Configure the init flags to disable all controllers and other doodads, then deploy them yourself with Helm. Helm sucks to work with, but someone has already gone through the pain for you.
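
Something like this (flag names as of recent k3s; double-check against the current docs):

  curl -sfL https://get.k3s.io | sh -s - server \
    --cluster-init \
    --disable traefik \
    --disable servicelb \
    --disable metrics-server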

AI is GREAT at k8s, since k8s has GREAT docs which it has been trained on.

A good mental model helps: it's an API with a bunch of control loops.


I'd say rent a Hetzner VPS and use hetzner-k3s: https://github.com/vitobotta/hetzner-k3s

Then you are off to the races. You can add more nodes etc. later to give it a try.


Definitely a big barrier to entry. My way was watching a friend spin up a cluster from scratch using yaml files and then copying his work. Nowadays you have Claude next to you to guide you along, and you can even manage the entire cluster via Claude Code (risky, but not _that_ risky if you're careful). Get a VPS or dedicated box, spin up microk8s, and give it a whirl! The effort you put in will pay off in the long run, in my humble opinion.

Use k9s (not a misspelling) and Headlamp to observe your cluster if you need a GUI.


Is this vanilla k8s or a particular flavor?


I use microk8s


I guess you got cheap power. Me too, but not 24/7 and not a whole lot (solar). So old enterprise hardware is a no-go for me. I do like ECC, but DDR5 is a step in the right direction.


> So-called standards now are just the monopolists coming to agreement among themselves.

That's... largely what standards are?? And they are really beneficial??

