Would be interesting to see this kind of analysis on youtube comments.
It seems to me they made an algorithmic change a few years back where positive comments are greatly boosted. Since then, the "top" comments are always over-the-top exuberant.
Luckily, I do happen to know that stuff, so I used the existing board with brand new 18650 cells. Unfortunately, the board seemed to brick itself when it lost power, so the vacuum kept complaining the battery wasn't kosher.
>This is only good advice if you're good at soldering
I meant soldering onto the pre-welded tabs that come with the new cell (unless you have a spot welder). You don't need much soldering experience for that.
>and know details about cells like which ones have in-built protection.
It's highly unlikely that the individual cells would be protected ones. Manufacturers aren't stupid enough to pay for N protection circuits when a single management circuit will do.
I don't think you'll ever find a battery pack using cells with integrated low-voltage protection, if that's what you're referring to. All that stuff is managed by the BMS.
What you should be on the look-out for is the cell's operating range and its continuous and max power. Personally I just buy VT6's in bulk and never think about any of that.
Same experience here. I tried it a few months ago and even in simple use I ran into so many bugs & issues that I quickly gave up. I'm willing to learn a new UI, but the tool must be reliable, and it simply was not.
Well, it's also what has enabled foreign nations to spread misinformation, what enabled people to disappear into their own bubbles filled with falsehoods, etc. Since these things are now tearing at the fabric of democracy, I wouldn't say it's a clean win for the internet so far.
Anna's Archive uses slow servers on delay, and constantly tells me there are too many downloads from my IP address, so I flip VPN settings as soon as the most recent slow download completes. And I get it again after a short while. It's hell waiting it out and flipping VPN settings. And the weird part is that this project is to replace paper books that I already bought. That's the excuse one LLM company uses for tearing up books, scanning and harvesting. I just need to downsize so I can move back to the Bay Area. Book and excess houseware sale coming, it seems. Libgen had few or no limits.
lol guy makes a fair point. Open source software suffers from this expectation that anyone interested in the project must be technical enough to be able to clone, compile, and fix the inevitable issues just to get something running and usable.
I'd say that a lot of people suffer from the expectation that just because I made a tool for myself and put it up on GitHub in case someone else would also enjoy it, I'm now obligated to provide support. Especially when the person in the screenshot is angry over the lack of a Windows binary.
Thank goodness; solving this "problem" for the general internet destroyed it.
Your point seems to be that someone else should do that for every stupid asshole on the web?
But will this run inside another docker container?
I normally hate things shipped as containers because I often want to use them inside a docker container, and docker-in-docker just seems like a messy waste of resources.
Docker in Docker is not a waste of resources; it just makes the same container runtime the outer container is running on available inside it. Really a better solution than a control plane like Kubernetes.
No, you're running docker inside a docker container. The container provides a docker daemon that just forwards the connection to the same runtime. It's not running two dockers, but you are still running docker inside docker.
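One common way to get that effect is to just mount the host's Docker socket into the container, along these lines (a minimal sketch; the image and command are only illustrative):

    # Mount the host's Docker socket so the inner "docker" CLI talks
    # to the same daemon that started this container.
    docker run --rm -it \
      -v /var/run/docker.sock:/var/run/docker.sock \
      docker:cli docker ps   # lists the host's containers, including this one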
Yeah, it feels like nothing but a little trick. Why would anyone actually want to use this? The exe simply calls docker; it can embed an image into the exe, but even then it first calls docker to load the embedded image.
Presumably, they don’t want to write/maintain a shell script wrapper for every time they want to do this, when they could use a tool which does it for them.
> Presumably, they don’t want to write/maintain a shell script wrapper for every time they want to do this, when they could use a tool which does it for them.
How's "packing" cli commands into a shell script any different from "packing" CLI commands into a container?
Calling a container on the CLI is a pain in the ass.
People generally don't put stuff into containers if it already works on the CLI in whatever environment you're in. Stuff that doesn't, of course they do.
Having a convenient shell script wrapper makes that not a pain in the ass, while letting all the environment management stuff still work correctly in the container.
Writing said wrapper each time, however, is a pain in the ass.
Generating one makes it not such a pain in the ass to use.
So then you get convenient CLI usage of something that needs a container to not be a pain in the ass to install/use.
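For concreteness, the generated wrapper can be as small as this (a hypothetical sketch; the image name and mounts are made up):

    #!/bin/sh
    # Run "mytool" from a container as if it were installed locally:
    # mount the current directory as the working dir and pass all
    # CLI arguments straight through.
    exec docker run --rm -it \
      -v "$PWD":/work -w /work \
      mytool-image:latest mytool "$@"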
I do that for a lot of stuff. I got a bit annoyed with internal tools that were so difficult to set up (needed this exact version of global python, expected this and that to be in the path, constantly needed to be updated, and then stuff broke again). So I built a docker image instead where everything is managed, and when I need to update or change stuff I can do it from a clean slate without affecting anything else on my computer.
To use it, it's basically just scripts loaded into my shell. So if I do "toolname command args" it will spin up the container, mount the current folder and some config folders some tools expect, forward some ports, then pass the command and args to the container which runs them.
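Roughly, each of those scripts boils down to a shell function like this (the names, mounts, and port here are made up for illustration):

    toolname() {
      # Spin up the container, mount the current folder and a config
      # folder the tool expects, forward a port, then pass the
      # command and args through.
      docker run --rm -it \
        -v "$PWD":/work -w /work \
        -v "$HOME/.config/toolname":/root/.config/toolname \
        -p 8080:8080 \
        toolname-image "$@"
    }
    # usage: toolname command args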
99% of the time it works smoothly. The annoying part is when some tool depends on another tool on the host machine, like when it wants to do some git stuff. Then I have to have git installed and my keys copied in as well.
CoreOS had a toolbox container that worked similarly to the one you have (the Podman people took over its maintenance): https://github.com/containers/toolbox
Tip: you could also forward your ssh agent. I remember it was a bit of a pain in the ass on macOS and a Windows WSL2 setup, but likely worth it for your setup.
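On plain Linux it's usually just two extra flags, something like this (same made-up image name as above; macOS and WSL2 need extra socket plumbing):

    # Expose the host's ssh-agent socket inside the container so
    # git/ssh can use your keys without copying them in.
    docker run --rm -it \
      -v "$SSH_AUTH_SOCK":/ssh-agent \
      -e SSH_AUTH_SOCK=/ssh-agent \
      toolname-image git pull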
Basically the same as Python’s zipapps which have some niche use cases.
Before zipapp came out I built superzippy to do this. I needed to distribute some python tooling to users at a university where everyone was running Linux on lab computers. It worked perfectly for that.
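For reference, the standard-library version is a one-liner these days (the directory and entry-point names here are made up):

    # Bundle ./app (containing main.py with a main() function) into a
    # self-contained executable archive with a python3 shebang.
    python -m zipapp app -m "main:main" -o mytool.pyz -p "/usr/bin/env python3"
    ./mytool.pyz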
The first of which can be p90 solved by "Okay, type 'apt install dash capital why docker return,' tell me what happens...okay, and 'docker dash vee' says...great! Now..."
Probably takes a couple minutes, maybe less if you've got a good fast distro mirror nearby. More if you're trying to explain it to a biologist - love those folks, they do great work, incredible parties, not always at home in the digital domain.
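Spelled out, the dictated steps come to roughly this (assuming a Debian/Ubuntu box, where the package is docker.io and apt wants a lowercase -y):

    sudo apt install -y docker.io
    docker -v   # prints the installed Docker version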
I feel like it's much easier to send a docker run snippet than an executable binary to my Docker-using friends. I usually try to include an example `docker run` and/or Docker Compose snippet in my projects too.
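Something like this, where the image name is just a placeholder:

    docker run --rm -p 8080:8080 ghcr.io/example/mytool:latest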
Is there any alternative way of achieving a similar goal (shipping a container to non technical customers that they can run as if it were an application)?
It feels like there ought to be a way to wrap a UML kernel build with a container image. Never seen it done, but I can't think of an obvious reason why it wouldn't work.
I'm a freelance webdev with 15 years of experience, and recently I've made some new projects with Symfony Stimulus & Turbo to great success.
A few thoughts:
I tried just htmx too; it was too limiting. The Stimulus/Turbo combination is much better.
Use the right tool for the job. Highly interactive app, like an editor? Use React or a similar JS framework. Mostly a page/document website? A backend framework is much faster and easier to develop with, and much simpler to test.
A blend of both? Drop in a React component where needed on a given page! People forget (me included) that even React can be mounted into a single DOM element.
With all this framework nonsense it's easy to forget that at the end of the day it's all just JavaScript.
If you're doing something reasonably small, using preact rather than react and then wrapping it into a web component so you can just ... put it in there ... is a really nice option to have.
(it may not be the right option in any given case, but it's worth knowing it's there even so)
Preact is definitely a good choice if you're looking for something lightweight. React-dom was already relatively hefty, and seems to have gotten even larger in version 19. Upgrading the React TypeScript Vite starter template from 18 to 19 increases the bundle size from 144kB to 186kB on my machine [1][2]. They've also packaged it in a way that's hard to analyze with sites like bundlephobia.com and pkg-size.dev.