Hacker News | evrflx's comments

Where is the overhead in a container? It is just a regular process. (OK, plus a container runtime process, but that is negligible.)


Does podman have a container runtime process? Or does it just exec the child after setting up the environment?

In that case the overhead is just a small amount of kernel accounting.


It's the latter - podman just sets up all the necessary pieces - namespaces, cgroups, seccomp, network, mounts, etc. - and then executes the child. No monitoring whatsoever. The best you can do is have it listen on the socket it uses for its control API (similar to the Docker socket).

However, Quadlets let you use systemd generators to have systemd launch the containerized applications via podman and then monitor them for crashes. Quadlets essentially cover everything that docker compose does.
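
As a rough illustration (file name and contents are just examples), a quadlet `.container` unit dropped under /etc/containers/systemd/ is all the systemd generator needs to turn the container into an ordinary service that gets restarted on crashes:

    # /etc/containers/systemd/webapp.container  (hypothetical file)
    [Unit]
    Description=Example web app run via podman

    [Container]
    Image=docker.io/library/nginx:latest
    PublishPort=8080:80

    [Service]
    # systemd, not podman, handles crash recovery
    Restart=always

    [Install]
    WantedBy=multi-user.target

After a `systemctl daemon-reload`, the generator emits a regular webapp.service that systemd supervises like any other unit.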

That aside, a container's main overhead isn't compute or memory; it's storage. You're essentially replicating a minimal Linux userland for each container, unless it lives in a shared layer.


negligible for you, perhaps ;)


Would love to hear about the pain points: please elaborate, as I am currently in the decision phase and Talos is, as of now, the top contender.


It's an opinionated vertical platform; if you run into an edge case, a bug, or functionality you don't like, you have to open a discussion on GitHub and wait for a new release to fix or change things. The devs are very responsive, but as with any open source tool, it's their project. It perhaps depends on how much customization you want to do - GPUs and drivers, custom CNI, very specific disk settings. I've had more trouble with bare metal systems with varied hardware than with their supported cloud platforms, which are approved and tested.

I'm pretty positive toward Talos, but if you stray from the happy path, by choice or by accident, it can become technically challenging. And then there are the sunk costs of having chosen this platform, and the question of how hard it would be to restart from scratch.


Not OP, but when we tested it out it was painful to handle USB disks. The reason being that if you have two, they get named sda/sdb randomly. We managed to overwrite the USB stick we were using to install Talos because it was named sda on one boot and sdb on the next. This led us to develop the “pullout technique” when installing…

This mostly happened because it was a test cluster where we used USB disks; probably not a problem when one provisions properly.

Otherwise it was great! But it does feel awkward not booting into an environment where you have a terminal at first.


This does sound like it could be solved with better installDiskSelectors[0]. Talos has done a fair bit of work in improving this and UserVolumeConfigs in the last couple of 1.x revisions.

Alternatively, network booting in some fashion is an option. [1]

[0] https://www.talos.dev/v1.11/reference/configuration/v1alpha1...

[1] https://www.talos.dev/v1.11/talos-guides/install/bare-metal-...
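
To make the diskSelector option concrete, a rough sketch of a machine config fragment that pins the install disk by stable attributes instead of the unstable sda/sdb names (the model string is a placeholder; see the reference above for the exact fields):

    machine:
      install:
        diskSelector:
          model: "Samsung SSD 870*"   # or match on serial, busPath, size, type, ...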


I recognize this from my bare-metal homelab setup. But at work we used VMs for Talos nodes so this was not an issue.

And if I had to deploy it on bare-metal at work I'd most likely use PXE booting instead of USB.


I use PXE boot for my homelab bare metal :)


What if an input is required to prevent a crash and one pilot maliciously does nothing?

I think there is a point in life when you just have to trust, or the complexity and failure scenarios explode.

By the way, I have a similar feeling about software supply chains. You can do a little, but at some point it becomes futile.


I like the idea! We actually use it for a financial application we develop for a bank. We use Spring REST Docs with tests to create example API calls with answers, run reference calculations as part of the test, and record the outcome and decisions. Both become part of the documentation rendered with AsciiDoc. We added custom annotations to add documentation snippets throughout the code, in addition to using Drools and recording the ruleset as well. Feedback is great! But it is not a generic approach, and it involved quite some effort for infrastructure and ongoing maintenance. Still well worth it given the stakes involved.
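
Not our actual code, but a minimal sketch of the pattern with Spring REST Docs and MockMvc (endpoint, value, and snippet name are made up): the test drives the API, the assertion doubles as the reference calculation, and document() writes the snippets the AsciiDoc build pulls in:

    // Minimal sketch, hypothetical endpoint and values, not the actual project code.
    import static org.springframework.restdocs.mockmvc.MockMvcRestDocumentation.document;
    import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
    import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.jsonPath;
    import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

    import org.junit.jupiter.api.Test;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.boot.test.autoconfigure.restdocs.AutoConfigureRestDocs;
    import org.springframework.boot.test.autoconfigure.web.servlet.WebMvcTest;
    import org.springframework.test.web.servlet.MockMvc;

    @WebMvcTest
    @AutoConfigureRestDocs
    class InterestCalculationDocsTest {

        @Autowired
        private MockMvc mockMvc;

        @Test
        void documentsAnExampleCalculation() throws Exception {
            mockMvc.perform(get("/api/calculations/example"))
                .andExpect(status().isOk())
                // the reference calculation: the asserted value is also what the docs show
                .andExpect(jsonPath("$.interest").value(41.1))
                // writes request/response snippets picked up by the AsciiDoc build
                .andDo(document("interest-calculation"));
        }
    }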

Perhaps this helps you as feedback. I am curious how your approach will turn out.


That's great to know. I was thinking about tackling this from the highest level of abstraction first, so the user interface. I plan on supporting Playwright first for the MVP, and then expanding into Cypress, and maybe even unit testing frameworks.

I feel like backend API documentation is kind of handled with things like Swagger.

How do you think your in house solution compares to something like Swagger or Javadoc?

One of my personal fears (which might be a bit unfounded) is that since Swagger and Javadoc are generated from code comments rather than tests, they could drift out of sync with the implementation. When I worked in Java and wrote unit tests and generated Swagger docs, though, we never actually ran into that problem.

I also suspect the frontend world isn't as disciplined about this as the backend world, which is where I think this idea of Test2Doc will really shine.


This feature must be explicitly enabled; it is not on by default, nor by accident.


Huh, I sure seem to need to debug this a lot; I guess I'll just leave it turned on all the time so I can save a few seconds next time. Larry Wall says one of the virtues of a great developer is laziness!


Based on [1] it seems like one `management.endpoints.web.exposure.include=*` is enough to expose everything including the heapdump endpoint on the public HTTP API without authentication. It's even there in the docs as an example.

Looks like there is a change [2] coming to the `management.endpoint.heapdump.access` default value that would make this harder to expose by accident.

Let's look for `env` next...

[1] https://docs.spring.io/spring-boot/reference/actuator/endpoi...

[2] https://github.com/spring-projects/spring-boot/pull/45624
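
To make the footgun concrete, a before/after in application.properties (the narrower endpoint list is just an example):

    # Risky: exposes every actuator endpoint over the web, including /actuator/heapdump and /actuator/env
    management.endpoints.web.exposure.include=*

    # Narrower: expose only what monitoring actually needs
    management.endpoints.web.exposure.include=health,info,prometheus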


For a small VPS, Kubernetes might indeed be overkill. But the API and ecosystem are a real enabler, besides the built-in infrastructure.

For the single-VPS-with-containers use case I recommend checking out Watchtower instead of relying on systemd scripting.
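
In case it helps, the usual one-liner looks roughly like this (the interval flag is just an example): Watchtower polls for new image tags and recreates containers when one shows up.

    docker run -d --name watchtower \
      -v /var/run/docker.sock:/var/run/docker.sock \
      containrrr/watchtower --interval 300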


I use one ArgoCD instance per cluster today. It makes security and scaling easier. What is your main driver for having a single ArgoCD instance?


Got it, that makes sense. I was trying to keep things centralized at first, but splitting per cluster might be the better call. Did you end up automating anything around managing those separate instances, or is it mostly manual?


I wonder why "The Saint" is not mentioned. Loved the movie and the different characters played by Val Kilmer.


Loved this movie. Has one of my all time favorite songs in it, Polaroid Millennium, and features a Nokia 9000 joke. What's not to love?


That's also my favorite Val Kilmer movie


The aggressor, Putin, can stop at any time and there will be peace. After that, new elections are legally possible under Ukrainian law.


Zelenskyy was also the aggressor when he invaded Russia. You cannot blame Putin for Ukraine being corrupt and not holding elections.


With an XSS exploit it is game over; you control the browser. Adding more complexity and opening up the possibility of CSRF exploits with a BFF does not look like a good trade-off to me.


You don't open yourself up to CSRF attacks if you use SameSite cookies, which I guess is part of why this pattern is seeing more use now.
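
For example, if the BFF sets the session cookie along these lines (names are illustrative), a cross-site request from an attacker's page won't carry it, and page scripts can't read it either:

    Set-Cookie: __Host-session=opaque-session-id; Path=/; Secure; HttpOnly; SameSite=Strict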

