mgaunard's comments | Hacker News

A lot of British houses have coaxial cable TV in all bedrooms.

Ignoring the horrible taste of our forebears, who put TVs where they don't belong, that does enable carrying gigabit Ethernet using MoCA technology.


In my experience trying to outsource to India, there is a strong systemic bias towards lying and cheating to get ahead (and that was even before AI), and a focus on milking as much money as possible rather than building great technology.

While there is real talent there, there is also a lot of overhead to find people you can trust.

This is probably just a reflection of the competitive nature of the market and of the social climbing that tech salaries enable there.


The main problem is that builds require a variable number of cores depending on what needs to be (re)built. The ideal approach is to have the build system itself orchestrate remote builds, since it actually knows how many things need building and how expensive they are.
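
For illustration, a minimal sketch of the kind of decision only the build system can make; the Target type and cores_to_request function are hypothetical, not any real build system's API:

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // Hypothetical node in the build DAG; cost estimates would come
    // from historical builds.
    struct Target {
        std::size_t estimated_cpu_seconds;  // informs per-build sizing
        std::size_t unresolved_deps;        // 0 means ready to build now
    };

    // Only the build system can compute this, because only it knows
    // which targets are dirty and how much parallelism is available.
    std::size_t cores_to_request(const std::vector<Target>& dirty,
                                 std::size_t max_cores) {
        std::size_t ready = 0;
        for (const Target& t : dirty)
            if (t.unresolved_deps == 0) ++ready;
        return std::min(ready, max_cores);
    }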

This is what nixbuild.net does: it tracks the historic CPU and memory usage of individual builds, and takes that into account when deciding what resources to allocate for new builds. You can configure max/min CPU limits on your account or for individual builds. Also, if a build runs out of memory, we simply restart it with more memory; the client will just see that the build log starts over.
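
The restart-on-OOM behaviour is roughly this shape (a self-contained sketch, not the service's actual implementation; run_build is a hypothetical stand-in for launching the sandboxed build):

    #include <cstddef>

    enum class BuildResult { Success, OutOfMemory, Failure };

    // Stand-in for running a sandboxed build under a memory limit;
    // here it simulates an OOM until the limit is large enough.
    BuildResult run_build(const char* /*drv*/, std::size_t mem_mb) {
        return mem_mb >= 4096 ? BuildResult::Success
                              : BuildResult::OutOfMemory;
    }

    BuildResult build_with_retry(const char* drv, std::size_t mem_mb) {
        for (;;) {
            BuildResult r = run_build(drv, mem_mb);
            if (r != BuildResult::OutOfMemory)
                return r;
            mem_mb *= 2;  // the client just sees the log start over
        }
    }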

That's precisely what I'm not describing; Nix doesn't even have access to the build DAG.

Please correct me if I'm wrong, but I assume you mean Nix doesn't have access to the build DAG that may exist inside the hermetic environment of an individual Nix build? If so, that's true: Nix doesn't work at that level of granularity unless you have a way to translate such DAGs into Nix derivations.

But Nix certainly tracks dependencies between Nix packages, and it knows which packages need to be rebuilt if you make a change somewhere. Some of those packages might build config files, while others may build Chromium, i.e. wildly different CPU and memory needs.


Right, I'm arguing this is the wrong abstraction level, and that only the build system can make correct container sizing decisions.

The new era of AI.

Everybody saw it coming. Frankly I'm surprised it took this long.

"It is difficult to get a man to understand something, when his salary depends on his not understanding it." - Upton Sinclair

It's kind of like people who live at the base of a giant dam but cannot comprehend that it could ever fail.


Not everybody. Some very mentally ill individuals thought we’d all be living in some Star Trek-style post-scarcity TV show.

It would make sure that any graph is provided in topological order.

There are good parser generators, but potentially not as Rust libraries.


Meanwhile C++ has more than a hundred, with a focus on production-readiness rather than innovative design patterns.

There are plenty of people who use RAII with arenas for nested groups of objects.

Bloomberg, for example, had a strong focus on that, and they enhanced the allocator model quite significantly in order to standardize it. That work was the reason for stateful allocators, scoped allocators, uses-allocator construction, and polymorphic memory resources.
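
A minimal sketch of that arena pattern using the polymorphic memory resources (C++17) that came out of that work:

    #include <memory_resource>
    #include <string>
    #include <vector>

    int main() {
        // One arena backs a whole nested group of objects.
        std::pmr::monotonic_buffer_resource arena;

        // Uses-allocator construction: the vector passes its memory
        // resource down to each pmr::string element automatically.
        std::pmr::vector<std::pmr::string> names{&arena};
        names.emplace_back("a string long enough to require allocation");

        // No per-object delete: everything is released at once when
        // `arena` goes out of scope (RAII over the arena).
    }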


Even when you write your own container, you do not use new and delete.

Are you sure? It seems as though Microsoft's STL, for example, ultimately ends up calling std::allocator's allocate function, which uses the new operator.

You would use "operator new" (which allocates memory only), not "new" (which allocates and constructs, and does more still with the new[] variant).

You might use placement new though.
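
A minimal sketch of the distinction, roughly the split a container relies on internally:

    #include <new>  // ::operator new, placement new

    // Allocation and construction as separate steps.
    template <typename T>
    T* make_one(const T& value) {
        // "operator new" allocates raw, uninitialized memory only.
        void* raw = ::operator new(sizeof(T));
        // Placement new constructs an object in that memory.
        return ::new (raw) T(value);
    }

    template <typename T>
    void destroy_one(T* p) {
        p->~T();               // destroy without deallocating
        ::operator delete(p);  // then release the raw memory
    }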

The main problem if you're roaming is that you're considered a lower-priority customer, and since the network is often saturated already, you don't get any bandwidth.

tl;dr: people reject the installation of ugly masts in densely urbanised neighbourhoods, so there often isn't enough capacity for everyone to get fast 5G.
