It's the latter - podman just sets up all the necessary plumbing - namespaces, cgroups, seccomp, network, mounts, etc. - and then executes the child. No monitoring whatsoever. The best you can do is have podman listen on its control API socket (similar to the Docker socket).
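For reference, that control API socket can be served like this (a rootless sketch; the socket path depends on your setup):

```sh
# Serve the podman control API (Docker-compatible) on a unix socket.
# --time=0 keeps the service running indefinitely instead of exiting when idle.
podman system service --time=0 unix://$XDG_RUNTIME_DIR/podman/podman.sock
```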
However, Quadlets let you easily set up systemd, via systemd generators, to start containerized applications with podman and then monitor them for crashes. Quadlets essentially do everything that docker compose does.
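A minimal sketch of a quadlet unit, assuming a typical rootful setup (the name, image, and port are placeholders):

```ini
# /etc/containers/systemd/myapp.container (hypothetical example)
[Unit]
Description=My containerized app

[Container]
Image=docker.io/library/nginx:latest
PublishPort=8080:80

[Service]
# systemd, not podman, supervises the process and restarts it on crashes
Restart=always

[Install]
WantedBy=multi-user.target
```

After a `systemctl daemon-reload`, the generator turns this into a regular `myapp.service` that systemd supervises like any other unit.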
That aside, a container's main overhead isn't compute or memory; it's storage. You're essentially replicating a minimal Linux userland for each container, unless it lives in a shared layer.
It's an opinionated vertical platform; if you run into an edge case, a bug, or functionality you don't like, you have to open a discussion on GitHub and wait for a new release to fix or change things. The devs are very responsive, but, as with any open source tool, it's their project.
It perhaps depends on how much customization you want to do - GPUs and drivers, custom CNI, very specific disk settings. I've had more trouble on bare metal systems with varied hardware than on their supported cloud platforms, which are approved and tested.
I'm pretty positive toward Talos, but if you stray from the happy path, by choice or by accident, it can become technically challenging. And then you have the sunk cost of having chosen this platform, and the question of how hard it would be to restart from scratch.
Not OP, but when we tested it out it was painful to handle USB disks, because if you have two they get named sda/sdb randomly. We managed to overwrite the USB stick we were using to install Talos, since it was named sda on one boot and sdb on the next. This led us to develop the “pullout technique” when installing…
This mostly happened because it was a test cluster where we used USB disks; it's probably not a problem when one provisions properly.
Otherwise it was great! But it does feel awkward not booting into an environment with a terminal at first.
This does sound like it could be solved with a better installDiskSelector [0]. Talos has done a fair bit of work improving this, and UserVolumeConfigs, over the last couple of 1.x revisions.
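Something like this machine config fragment should pin the install disk by a stable attribute rather than relying on sda/sdb ordering (the model glob is a placeholder):

```yaml
machine:
  install:
    diskSelector:
      # Match on a stable attribute (model/serial/wwid) instead of /dev/sdX,
      # which can swap between boots.
      model: "Samsung SSD 870*"
```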
Alternatively, network booting in some fashion is an option. [1]
I like the idea!
We actually use it for a financial application we develop for a bank.
We use Spring REST Docs with our tests to create example API calls and responses, run reference calculations as part of the tests, and record the outcomes and decisions.
Both become part of the documentation, rendered with AsciiDoc.
We added custom annotations to insert documentation snippets throughout the code, in addition to using Drools and recording the ruleset as well.
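Not their exact setup, but a minimal sketch of the Spring REST Docs flow described above (the controller, path, and snippet names are hypothetical):

```java
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.restdocs.AutoConfigureRestDocs;
import org.springframework.boot.test.autoconfigure.web.servlet.WebMvcTest;
import org.springframework.test.web.servlet.MockMvc;

import static org.springframework.restdocs.mockmvc.MockMvcRestDocumentation.document;
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

@WebMvcTest(AccountController.class)
@AutoConfigureRestDocs
class AccountApiDocumentationTest {

    @Autowired
    private MockMvc mockMvc;

    @Test
    void documentsGetAccount() throws Exception {
        mockMvc.perform(get("/accounts/{id}", 42))
               .andExpect(status().isOk())
               // Writes request/response snippets to the generated-snippets
               // directory, which the AsciiDoc build includes in the docs.
               .andDo(document("accounts/get"));
    }
}
```

Because the snippets are a by-product of passing tests, the documented requests and responses can't silently drift from actual behavior.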
The feedback is great! But it's not a generic approach, and it involved quite some effort in infrastructure and ongoing maintenance.
But well worth the effort given the stakes involved.
Perhaps this helps you as feedback. I am curious how your approach will turn out.
That's great to know. I was thinking about tackling this from the highest level of abstraction first, i.e. the user interface. I plan on supporting Playwright first for the MVP, then expanding into Cypress, and maybe even unit testing frameworks.
I feel like backend API documentation is kind of handled with things like Swagger.
How do you think your in house solution compares to something like Swagger or Javadoc?
One of my personal fears (which might be a bit unfounded) is that since Swagger and Javadoc are generated from code comments rather than tests, they could get out of sync with the implementation. That said, when I worked in Java and wrote unit tests and generated Swagger docs, we never actually ran into that problem.
I also theorize that the frontend world isn't as well disciplined as the backend world, which is where I think this Test2Doc idea will really shine.
Huh, I sure seem to need to debug this a lot; I guess I'll just leave it turned on all the time so I can save a few seconds next time. Larry Wall says one of the virtues of a great developer is laziness!
Based on [1], it seems like a single `management.endpoints.web.exposure.include=*` is enough to expose everything, including the heapdump endpoint, on the public HTTP API without authentication. It's even in the docs as an example.
Looks like there is a change [2] coming to the `management.endpoint.heapdump.access` default value that would make this harder to expose by accident.
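To make the failure mode concrete (the safer allow-list is just an example):

```properties
# Dangerous: exposes every actuator endpoint, including /actuator/heapdump,
# over HTTP; without something like Spring Security in front, it's unauthenticated.
management.endpoints.web.exposure.include=*

# Safer: expose only the endpoints you actually need.
management.endpoints.web.exposure.include=health,info
```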
Got it, that makes sense. I was trying to keep things centralized at first, but splitting per cluster might be the better call. Did you end up automating anything around managing those separate instances, or is it mostly manual?
With an XSS exploit it's game over; the attacker controls the browser.
Adding more complexity and opening up the possibility of CSRF exploits with a BFF doesn't look like a good trade-off to me.