Hacker News | jiggunjer's comments

Perhaps encoding such things in comments at all is the wrong approach? E.g. if my linter misbehaves, why can't I right-click and ignore the red line in the IDE instead of encoding it into my source file?


Encoding it into your source file has positive externalities. If you're using source control, the comment controlling the linter is tracked alongside the rest of your code. You can track who added it and why. You can share this comment with other engineers on your team.

You could also imagine other representations of the same data (e.g. one large file which includes annotations, or a binary representation). But in this case you lose colocation with the code they're annotating, so it's more likely to drift.

I fully agree that there are probably better UXes out there for this flow, but the source-annotation approach is popular for very good reasons.


Same. It's how I learned Docker and Kubernetes: study the concepts, then I can ask "what's the specific command to do A, B, C" instead of an open-ended "how do I do X".


Recently had to familiarize myself with Python async because a third-party SDK relies on it.

In many cases the lib will rely on threads to handle calls to synchronous functions, which got me wondering if there's a valid use case for running multiple async threads on a single core.
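For what it's worth, the usual shape of that pattern in Python is offloading the blocking call to a worker thread while the event loop itself stays single-threaded. A minimal sketch (`blocking_fetch` is a hypothetical stand-in for a synchronous SDK call):

```python
import asyncio
import time

def blocking_fetch(x: int) -> int:
    # Hypothetical stand-in for a synchronous SDK call that blocks
    # the calling thread.
    time.sleep(0.1)
    return x * 2

async def main() -> list:
    # asyncio.to_thread (Python 3.9+) runs the sync function in a
    # worker thread, so the single-threaded event loop stays responsive
    # and can await both calls concurrently.
    return await asyncio.gather(
        asyncio.to_thread(blocking_fetch, 1),
        asyncio.to_thread(blocking_fetch, 2),
    )

print(asyncio.run(main()))  # → [2, 4]
```

The threads here only exist to park blocking calls; there is still just one thread driving the event loop.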


I frequently use single-threaded async runtimes in Rust, especially for background processing that doesn't need particularly high throughput.

E.g. in a user application you might have performance-sensitive work (e.g. rendering) which needs to be highly parallel: give it a bunch of threads. However, when drawing the UI, handling user input, etc. you usually don't need high throughput, so use only one thread to minimise the impact on the rendering threads.

In my work with server-side code, I use multiple async runtimes. One runtime is multithreaded and handles all the real traffic. One runtime is single-threaded and handles management operations such as dispatching metrics and logs or garbage-collecting our caches.
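That split translates to the parent's Python context too: a dedicated single-threaded event loop in a background thread for low-priority management work, kept apart from whatever handles the real traffic. A hedged sketch (the task names like `dispatch_metrics` are made up for illustration):

```python
import asyncio
import threading

def start_background_loop() -> asyncio.AbstractEventLoop:
    """Start a dedicated single-threaded event loop for low-priority work."""
    loop = asyncio.new_event_loop()
    t = threading.Thread(target=loop.run_forever, daemon=True)
    t.start()
    return loop

async def dispatch_metrics(name: str) -> str:
    # Hypothetical management task (metrics, log shipping, cache GC).
    await asyncio.sleep(0.01)
    return f"sent {name}"

mgmt_loop = start_background_loop()

# Submit work to the management loop from any thread; the
# high-throughput path is never blocked by it.
future = asyncio.run_coroutine_threadsafe(dispatch_metrics("latency"), mgmt_loop)
print(future.result())  # → sent latency
```

`asyncio.run_coroutine_threadsafe` is the stdlib bridge for handing coroutines to a loop running in another thread, which is what keeps the two "runtimes" isolated.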


I would say: probably not.

If your async thread is so busy that you need another one, then it's probably not an async workload to begin with.

I work on a Python app which uses threads and async, but it has only one async thread, because that's more than enough to handle all the async work I throw at it.


All skills are RAG; a subset of skills can add more RAG.


You could also say horse carriages and cars are very different things, yet one replaced the other.

MCP lets agents do stuff. Skills let agents do stuff. There's the overlap.


That's two words. How about "deterministic"?


Perhaps ironically, most Docker builds aren't deterministic. Run `docker build`, clear the cache, run it again five minutes later, and you might not have a bit-compatible image, because many images don't pin their base and pull from live-updating package repositories.

You can make a Docker image deterministic/hermetic, but it's usually a lot more work.


The build process is non-deterministic, sure.

But the images themselves are, and that is a great improvement on the pre-Docker state of the art. Before Docker, if you wanted to run the app with all of its dependencies as of last month, you had _no way to know_ them at all. With Docker, you pull that old image and you get exactly the same version of every dependency (except the kernel) with practically zero effort.

Sure, it's annoying that instead of a few-kB lockfile you now have hundreds of MB of Docker images. But all the better alternatives are significantly harder.


Some steps, e.g. apt-get, are not deterministic, and in practice it would be painful to make them so (usually controlling updates with an external mirror, ignoring phased upgrades, a bunch of other misc stuff).

You then start looking at immutable OSes, then get to something like NixOS.


For me the best demo is a test module lighting up all green.

How do I make that into a sexy image for management? Sure, the business logic is stubbed, but my carefully crafted strongly typed interfaces all mesh together! Imagine the future dividends!


But you can connect any machine to any VPN and have it be a Tailscale exit node?


Well, yes, but being able to designate a VPN node as a Tailscale exit node directly means you don't need a random server in the middle for it. (Which is beneficial if you use it as an exit node for road warrior devices)


You can, but the issue is usability: when I'm watching TV, I want to just be able to flip open an app and say "I'm in London!" and watch BBC, then the same for Canada, etc. I don't want to be fiddling with a VPN, switching routes on some separate device, or switching the entire wifi network.


I've forgotten how much hassle installing applications can be since Docker.


Here are the official docs for deploying ejabberd using containers: https://docs.ejabberd.im/CONTAINER/


Part of my thoughts... though if you're familiar with Ansible, the automation isn't so bad in that ecosystem. I mostly run my personal stuff single-instance, so deploying /apps/app-name/docker-compose.yaml is my general approach to most things, along with either Caddy or Traefik.


I think selection bias is a bit different, the keyword being "ignore". Maybe negativity bias.

