API/SSH keys can easily be swapped; it's more hassle than it's worth. Be glad they didn't choose to spread the payload of one of the 100 ransomware groups with affiliate programs.
Most language users will follow the "spirit" of the language - e.g. Bill is against package managers, people who use his language mostly agree with his ideas, and there's not a huge standard Odin package manager.
I rather appreciate that C and C++ don't have a default package manager that took over - yes, integrating libraries is a bit more difficult, but we also have a lot of small, self-contained libraries that just "do the thing" without pulling in a library that does colored text for logging, which pulls in tokio, which pulls in mio, which pulls in wasi, which pulls in serde, which is insane.
C and C++ do have package managers. It's just that these languages evolved for OS implementation, and their package managers are old and stable and support a lot of languages, so you probably know them as OS package managers.
You're making your customer's life miserable by having dependencies. You're a library, your customer is using you to solve a specific problem. Write the code to solve that and be done with it.
In the game development sphere, there are plenty of giant middleware packages for audio playback, physics engines, renderers, and other problems that are 1000x more complex and more useful than any given npm package, and yet I somehow don't have to "manage a dependency tree" and "resolve peer dependency conflicts" when using them.
When you're a library, your customer is another developer. By vendoring needlessly, you potentially cause unavoidable bloat in someone else's product. If you interoperate with standard interfaces, your downstream should be able to choose what's on the other end of that interface.
> You're making your customer's life miserable by having dependencies. You're a library, your customer is using you to solve a specific problem. Write the code to solve that and be done with it.
And you just don't know what you are talking about.
Let's say I am providing a library with some high-level features for a car ADAS system, on top of a CAN network, with a proprietary library as the driver and interface.
It is not up to me to fix or choose the library and the driver version that the customer will use.
He will choose the certified version he will ship, test my software on it, and integrate it.
Vendoring dependencies for anything which is not a final product (a product being an executable) is plain stupid.
It is a guarantee of pain and ABI madness for anybody having to deal with the integration of your blob later on.
If you want to vendor, do vendor, but stick to executables with well-defined IPC systems.
> Let's say I am providing a library with some high-level features for a car ADAS system, on top of a CAN network, with a proprietary library as the driver and interface.
If you're writing an ADAS system, and you have a "dependency tree" that needs to be "resolved" by a package manager, you should be fired immediately.
Any software that has lives riding on it, if it has dependencies, must be certified against a specific version of them, and that version should, 100% of the time, without exception, be vendored with the software.
> It is a guarantee of pain and ABI madness for anybody having to deal with the integration of your blob later on.
The exact opposite. Vendoring is the ONLY way to prevent the ABI madness of "v1.3.1 of libfoo exports libfoo_a but not libfoo_b, and v1.3.2 exports libfoo_b but not libfoo_c, and in 1.3.2 libfoo_b takes in a pointer to a struct that has a different layout."
If you MUST have libfoo (which you don't), you link your version of libfoo into your blob and you never expose any libfoo symbols in your library's blob.
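For what it's worth, here's a minimal sketch of how that hiding is usually done on ELF/GNU toolchains. The names (mylib_do_thing, the vendored libfoo.a) are just this thread's placeholders, not anyone's real API:

    /* mylib.c -- hypothetical wrapper library that statically links its own
       copy of libfoo (the "blob" above). Built roughly like:

         gcc -shared -fPIC -fvisibility=hidden mylib.c libfoo.a \
             -Wl,--exclude-libs,ALL -o libmylib.so

       -fvisibility=hidden keeps our own internals out of the dynamic symbol
       table, and --exclude-libs,ALL does the same for everything pulled in
       from the static archive, so no libfoo_* symbol escapes this .so and
       clashes with a different libfoo elsewhere in the final link. */

    #define MYLIB_API __attribute__((visibility("default")))

    int libfoo_a(int x);                /* provided by the vendored libfoo.a */

    MYLIB_API int mylib_do_thing(int x)
    {
        return libfoo_a(x);             /* used internally, never re-exported */
    }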
And in addition: Yocto (or equivalent) will also be the one providing the traceability required to guarantee that what you ship is actually what you certified, and not some random garbage compiled in a user directory on someone's laptop.
Did Yocto ever clean up how they manage the sysroot?
It used to have a really bad design flaw. Example:
- building package X explicitly depends on A to be in the sysroot
- building package Y explicitly depends on B in the sysroot, but implicitly will use A if present (thanks autoconf!)
In such a situation, building X before Y will result in Y effectively using A&B — perhaps enabling unintended features. Building Y then X would produce a different Y.
Coupled with the parallel build environment, it’s a recipe for highly non-deterministic binaries — without even considering reproducibility.
> Did Yocto ever clean up how they manage the sysroot?
It's better than before but you still need to sandbox manually if you want good reproducibility.
Honestly, for reproducibility alone, there are better options than Yocto nowadays. It is hard to beat Nix at this game, and even Bazel-based build flows are somewhat better.
But in the embedded world, Yocto is pretty widespread and almost the de facto norm for embedded Linux.
> but implicitly will use A if present (thanks autoconf!)
When you want reproducibility, you need to specify what you want, not let the computer guess. Why can't you use Y/configure --without-A ? In the extreme case you can also version config.status.
Things using autotools evolved to be “manual-user friendly”, in the sense that application features are automatically enabled based on auto-detected libraries.
But for automated builds, all those smarts get in the way when the build environment is subject to variation.
In theory, the Yocto recipe will fully specify the application configuration regardless of how the environment varies…
Of course, in theory the most Byzantine build process will always function correctly too!
You're providing a library. That library has dependencies (although it shouldn't). You've written that library to work against a specific version of those dependencies. Vendoring these dependencies means shipping them with your library, and not relying on your user, or even worse, their package manager to provide said dependencies.
I don't know what industry you work in, who the regulatory body that certifies your code is, or what their procedures are, but if they're not certifying the "random library repos" that are part of your code, I pray I never have to interact with your code.
I've dabbled in enough of them to tame my hubris a bit and to learn that various fields have specific needs that end up reflected in their processes (and this includes gamedev as well). Highly recommended before commenting any further.
> I don't know what industry you work in, who the regulatory body that certifies your code is, or what their procedures are, [..], I pray I never have to interact with your code.
You illustrate perfectly the attitude problem of the average "gamedev" here.
You do not know shit about the realities and the development practices of an entire domain (here, the safety-critical domain).
But still you brag confidently about how 'my dev practices are better' and affirm without any shame that everybody in this field who disagrees is an idiot.
Just to let you know: in the safety-critical field, the responsibility for the final certification is on the integrator. That is why we do not want intermediate dependencies to randomly vendor and bundle crap we have no control over.
Additionally, it is common for the entire dependency tree (including proprietary third-party components like AUTOSAR) to be shipped as source-available and compiled/assembled from source during integration.
That's why the use of package managers like Yocto (or equivalent) is widespread in the domain: it allows you to precisely track and version what is used and how, for analysis and traceability back to the requirements.
And again, when binary dependencies are the only solution available (as with QNX Neutrino and its associated compilers), any serious certification body (like the TÜV) will mandate that you have the exact checksum of each certified binary you use in your application, and a process to trace them back to the certification document.
This is not something you do by dumping random fu**ng blobs in a git repository like you are proposing. You do it, again, with a proper set of processes, and generally with a package manager like Yocto or similar.
Finally, your comment on "v1.3.1 of libfoo" is completely moronic. You seem to have no idea of the consequences of duplicated symbols across multiple static libraries with vendored dependencies you do not control, nor of what that can mean for functional safety.
If you need 40,000 servers to keep your business running (which you don't, your ~3-8 million weekly transactions can be processed on 1 computer, but whatever), hire people who will work for you, and whose paycheck depends on keeping those computers working, to keep those computers working.
Game theory arguments like "they wouldn't screw me over because other people won't want to do business with them" don't work when the other party is trying to maximize quarterly earnings, and their long-term thinking is on the order of ~2 years.
To be fair, and I know nothing about Tesco’s actual stack, a large grocery chain needs to track their contracts with suppliers, track their inventory in each location and in transit, track what goods they want in which locations, understand which larger pallets and big boxes contain which goods, track things prepped in house, and also optimize what to move from where, to where, and when and how. The latter part probably uses some spiffy stack involving something like CPLEX or Gurobi, and it’s not running on their “1 computer” OLTP stack.
That being said, I don’t see what 40k servers is for unless the POS machines are thin clients that use a substantial fraction of a server each.
If you're doing 10 million transactions per week (which is likely way more than what they're pulling) that's about 16 transactions processed per second. You can add inventory management, payroll management, you can run the company's email server, write all that in JavaScript, and you'll still have room to run a Minecraft server on the same laptop.
My point was not that running all that on one computer is a great idea, just that 40,000 servers for a CRUD application is way past what should be considered reasonable.
But even that's fine. I like computers, you can have 40,000 of them if you want, even if the only reason they exist is some guy's job security. However, you're insane if the guy keeping them running doesn't work for you.
Running it on a single laptop might be an exaggeration, but I can't imagine there's any essential complexity that requires more than a few dozen servers.
No, I did not suggest that; in fact, in the very comment you're replying to I said:
> My point was not that running all that on one computer is a great idea...
Regardless, if you want to strawman my passing remark, I'm happy to defend it.
Let's even say my numbers are wildly wrong, and they're processing 100x more transactions than what I claimed (which was already an overestimate). Tell me why you can't process 1600 transactions per second on one computer, especially for a country the size of the UK, where you would expect a ~15ms ping when talking to a server on the other side of the country.
I would expect outages when talking to a server on the other side of the country, and an outage preventing supermarket customers from checking out or supermarket staff from usefully restocking shelves would be a very expensive mistake.
A good system here will be distributed.
That being said, two servers (for redundancy) physically located at each branch ought to do the trick. Tesco has a bit over 5000 branches [0], so that’s 10k of the 40k VMWare seats right there. Throw in some extra seats so storage and compute can be separated at each location (maybe unnecessary but comes with some benefits) and so that there is still redundancy while a server or two at a site is being re-imaged and 40k seems about in the right ballpark even for a fairly lean implementation.
And, sure, all those on-site servers might be relatively inexpensive industrial units designed to tolerate a toasty, dusty, and occasionally damp closet that looks nothing like a tidy datacenter, but it still makes sense to run something like VMWare on them.
Yeah, I'm pessimistically sure that there is other stuff, like:
* Check whether each item scanned has satisfied a logical contract for a discount, some of which may be per-region, per-store, or even per-customer.
* If multiple exclusive coupons or deals are available, resolve the contradiction, preferably in favor of the customer.
* Check if any items or quantities of items require an ID to be shown before proceeding, and record information about the employee authorizing it.
* Update customer "rewards" data and generate any special offers so that you can put it onto their receipt.
And that's not even starting to get into all the other less-customer-synchronous stuff that you still need CPU power somewhere to do. Managing stock levels, orders, deliveries, price changes, anti-"shrinkage", employee shifts, market-research, status and repairs of freezer-units, operational logging and telemetry, every form of reporting/dashboard "strategic insight" stuff beloved by upper management...
You are severely underappreciating how complex a retail organization of Tesco's size is.
I work in this space for a retailer almost the same size as Tesco and when factoring in all the attendant organizations, businesses, and functions it requires, 40k servers does not surprise me at all.
Outside of just the brick-and-mortar stores you have Marketing, Retail, eComm, Merchandising, Strategic Sourcing, FP&A, Finance & Accounting, Asset Protection, Corporate Real Estate, Retail Real Estate, Internal Audit, Supply Chain, Transportation, Business Services, Data Science, etc etc. and IT at every level of those. Each one of these components is large enough to be a medium-to-large sized company in its own right.
Numbers I found vary but Tesco has around 3500 stores in the UK alone alongside other chains they have a hand in. They also have a large online presence, click and collect operations, estates, data collection schemes and a whole logistics network to operate. I'd have actually thought it would be higher than ~11 VMs per store.
That works out to ~11 computers/licenses per store, which sounds a tad high but also very easy to do if you let new systems accrete over time and factor in the need for offline operations and redundancy across regions.
You’re ignoring general heavy workloads such as observability. How much telemetry do they gather and analyze for tracking and fraud detection? A quick google on Tesco engineering shows that they process 35k qps against Couchbase, and 35 terabytes of telemetry data per day.
They track 150k devices in their ecosystem, which, reading between the lines, would produce the telemetry and require observability, state management, anomaly detection, etc. They have hundreds of thousands of employees using these devices for varying purposes. We’re talking quite a bit of compute, which also requires high availability.
I know nothing about Tesco beyond that quick google search, but I’ve been at several companies where I would read online comments claiming we could reduce our workload to a few servers, and I would think of our tens of thousands of fully loaded machines and roll my eyes.
Most admins are more keen to shift blame than to keep things running. Having another company to point fingers at is more attractive than a properly functioning team.
I'd hate to be the lowly, underpaid sysadmin who responded "40,000 servers" when asked the current number of servers, but he meant to respond "4,000 servers". LOL
> Every pixel and every function went through me. The AI translated what I asked for into code, but every decision was human.
You'll find that programmers are a lot less prickly about using AI to generate code than, say, artists are about using it to generate pictures. You don't have to defend yourself; it's OK to use it to make cool things that you couldn't otherwise.
You should be aware though that even though it may "feel like magic" when just getting started, there's an upper limit to the complexity of what you can build with AI-generated code - it's very low quality and will start falling apart once you stack a lot of it. For the same reason I wouldn't recommend using it as a learning resource, if you really want to get into programming.
No, not everything is a trade-off. Some things are just good and some are just bad.
A working permission system would be objectively good. By that I mean one where a program called "image-editor" can only access "~/.config/image-editor", and files that you "File > Open". And if you want to bypass that and give it full permissions, it can be as simple as `$ yolo image-editor` or `# echo /usr/bin/image-editor >> /etc/yololist`.
A permission system that protects /usr/bin and /root, while /home/alex, where all my stuff is, is a free-for-all, is bad. I know about chroot, and Linux namespaces, and SELinux, and QEMU. None of these is an acceptable way to do day-to-day computing if you actually want to get work done.
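For a concrete flavour of what such a permission system could look like, OpenBSD's unveil(2) is pretty close to the model described above. A minimal sketch, where the image-editor paths are just the example from this comment, not a real program:

    /* Everything outside the unveiled paths becomes invisible to this process,
       including the rest of /home/alex. */
    #include <err.h>
    #include <unistd.h>

    int main(void)
    {
        /* config dir: read/write/create */
        if (unveil("/home/alex/.config/image-editor", "rwc") == -1)
            err(1, "unveil config");
        /* a file the user picked via File > Open: read/write */
        if (unveil("/home/alex/pictures/cat.png", "rw") == -1)
            err(1, "unveil picture");
        /* lock the policy: no further unveil() calls are allowed */
        if (unveil(NULL, NULL) == -1)
            err(1, "unveil lock");

        /* ... normal image-editor work here ... */
        return 0;
    }

Linux's closest equivalent is Landlock, which is wordier but expresses the same per-program allow-list idea.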
That claim is too generic to add anything to this discussion. Ok, everything has a trade off. Thanks for that fortune cookie wisdom. But we’re not discussing CS theory 101. In this case in particular, what is the cost exactly? Is it a cost worth paying?
The cost is that a simple script that executes something and accesses files will have to be constructed differently. It will be much more complex.
That, or the OS settings for said script will need to be managed. That is time and money.
I've said this elsewhere in this thread - but I think it might be interesting to consider how capabilities could be used to write simple scripts without sacrificing simplicity.
For example, right now when you invoke a command - say "cat foo.js" - the arguments are passed as strings, parsed by the program, and then the named files are opened via the filesystem. But this implicitly allows cat to open any file on your computer.
Instead, you could achieve something similar with capabilities. So, I assume the shell has full access to the filesystem. When you call "cat foo.js", the shell could open the file and pass the file handle itself to the "cat" program. This way, cat doesn't need to be given access to the filesystem. In fact, literally the only things it can do are read the contents of the file it was passed, and presumably output to stdout.
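To make that concrete, here's a toy sketch in C. Both halves live in one process here for brevity; in a real design the "shell" half would fork/exec and the "cat" half would additionally drop its own filesystem access (e.g. with Capsicum's cap_enter() on FreeBSD). All names are made up:

    /* cap_cat.c -- the "shell" half resolves the path with its full authority;
       the "cat" half only ever receives an already-open descriptor and never
       calls open() itself. */
    #include <stdio.h>
    #include <unistd.h>
    #include <fcntl.h>

    /* The unprivileged half: can only read what it was handed. */
    static void cat_from_fd(int fd)
    {
        char buf[4096];
        ssize_t n;
        while ((n = read(fd, buf, sizeof buf)) > 0)
            write(STDOUT_FILENO, buf, (size_t)n);
    }

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s FILE\n", argv[0]);
            return 1;
        }
        /* The "shell" half: uses its full filesystem access to resolve the name. */
        int fd = open(argv[1], O_RDONLY);
        if (fd < 0) {
            perror("open");
            return 1;
        }
        cat_from_fd(fd);   /* the open descriptor is the only capability passed down */
        close(fd);
        return 0;
    }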
> It will be much more complex.
Is this more complex? In a sense, it's exactly the same as what we're doing now, just with a new kind of argument for resources. I'm sure some tasks would get more complex. But some tasks might get easier too. I think capability-based computing is an interesting idea and I hope it gets explored more.
> how capabilities could be used to write simple scripts without sacrificing simplicity.
I proposed a solution for that in my original comment - you should be able to trivially bypass the capability system if you trust what you're running ($ yolo my_script.sh).
The existence of such a "yolo" command implies you're running in a shell with the "full capabilities" of your user, and that by default that shell launches child processes with only a subset of those. "yolo" would then have to be a shell builtin that overrides this behavior and launches the child process with the same caps as the shell itself.
> That claim is too generic to add anything to this discussion. Ok, everything has a trade off. Thanks for that fortune cookie wisdom.
It isn't fortune cookie wisdom, and no, it isn't "too generic". It is something that, judging from their comment, fundamentally wasn't understood by the person I was replying to. I don't believe you really understand the concept either.
> But we’re not discussing CS theory 101.
No we are not. We are discussing concepts about security and time / money management.
> In this case in particular, what is the cost exactly? Is it a cost worth paying?
You just accused me of "fortune cookie wisdom" and "being too generic", while asking a question whose answer differs depending on the person or organisation.
All security is predicated on what you are protecting against, so it is unique to your needs: what, realistically, are your threats? This is known as threat modelling.
e.g. I have an old vehicle. The security on it is a joke. Without additional third-party security products, you can literally steal it with a flat blade about two inches long and drive away; you don't even need to hot-wire it. It is also highly desirable to thieves. Realistically, as an individual without a garage to store it in overnight, I can only protect it from an opportunist. So I have a pedal box, a steering wheel lock, and a secret key switch that cuts the ignition, and only I know where it is in the cab. That is enough to stop an opportunist. Against a more determined individual, however, it will be stolen. Therefore I keep it out of public view when parked overnight. BTW, because of the security measures, it takes a good few minutes before I can drive anywhere.
Realistically, operating system security is much better than it was. It is at the point where many of the recent large-scale hacks of the last few years were initiated via social engineering, bypassing the OS security entirely. So I would say it is in the area of diminishing returns already. For the level of threats I face, and that most people face, it is already sufficient. The rest I can mitigate myself.
Just like my vehicle: if a determined individual wants to get into your computer, they are going to do so.
Thanks for educating me there champ. I'm sure you're very smart. But I've been writing software for a few decades now. Longer than a lot of people on HN have been alive. There's a good chance the computer you're using right now contains code I've written. Suffice it to say, I'm pretty familiar with the idea of engineering tradeoffs. I suspect many other people in this thread are familiar with it too.
You missed the point the person you were replying to upthread was making. You're technically right - there is always some tradeoff when it comes to engineering choices. But there's a pernicious idea that comes along for the ride when you think too much about "engineering tradeoffs": the idea that all software exists on some Pareto frontier, where there's no such thing as "better choices", only "different choices with different tradeoffs".
This idea is wrong.
The point made upthread was that often the cost of some choice is so negligible that it's hardly worth considering. For example, if you refactor a long function by splitting it into two separate functions, this will usually result in more work for the compiler to do. This is an engineering tradeoff - we get more readability in exchange for slower compile times. But the compilation speed difference is usually so minuscule that we don't even talk about it.
"Everything comes with tradeoffs" is technically true if you look hard enough. But "No, not everything is a trade-off. Some things are just good and some are just bad" is also a good point. Some things are better or worse for almost everyone. Writing a huge piece of software using raw assembly? Probably a bad idea. Adding a thorough test suite to a mission-critical piece of software? Probably a good idea. Operating systems? Version control? Yeah those are kinda great. All these things come with tradeoffs. But the juice can still be worth the squeeze.
My larger point in this thread is that perhaps there are ways we can improve security that don't make computing measurably worse in other ways. You might not be clever enough to think of any of them, but that isn't proof that improvements aren't possible. I wasn't smart enough to invent typescript or rust 20 years ago. But I write better software today thanks to their existence.
I would be very sad if, in another 30 years, we're still programming using the same mishmash of tools we're using today. Will there be tradeoffs involved? Yes, for sure. But no matter, the status quo can still be improved.
> Realistically, operating system security is much better than it was. [...] So I would say it is in the area of diminishing returns already. For the level of threats I face, and that most people face, it is already sufficient.
What threat models are you considering? Computers might be secure enough for you, but they are nowhere near secure enough for me. I also don't consider them secure enough for my parents. I won't go into detail of some of the scams people have tried to pull on my parents - but better computer systems could easily have done a better job protecting them from some of this stuff.
If you use programming languages with a lot of dependencies, how do you protect yourself and your work against supply chain attacks? Do you personally audit all the code you pull into a project? Do you continue doing that when those dependencies are updated? Or do you trust someone to do that for you? (Who?). This is the threat model that keeps me up at night. All the tools I have to defend against this threat feel inadequate.
The idiotic statement is yours. If the "sometimes" is important to you, you can have it - you're not the first person on the internet to play word games.
But unless you can come up with a very detailed list of when it's acceptable "to lock people down and physically prevent them from harming themselves" and when it's not acceptable (it never is, it's a crazy statement), and I don't think you have such a list, your "sometimes" just means "whenever I, as the person writing the software, judge", rendering it completely meaningless.
It's not illegal to not release your software on a platform. But the mobile market is so top-heavy on both the apps and the games side that without a few key developers - Meta, ByteDance, Tencent, etc. - your union is dead in the water, and the top 1% of developers would very much like more friction for new developers, not less.
If Apple or Google convinced a court you formed a cartel then that's the end of the story. Whether it's a cartel in the eyes in the public would be irrelevant.
Considering the same law is used to strike down a 3-hour GPU documentary over a ~30-second clip, I think it serves corporations pretty well.
GamersNexus' 3-hour documentary about GPU smuggling (which is way more than a vlog, as HN commenters like to portray it) was struck down by Bloomberg because they didn't want their 30-second clip of POTUS speaking - which is squarely fair use, BTW - to be in it. GamersNexus appealed successfully, but Bloomberg tried to bully them [0].
I don't understand why people think this is something corporations desperately want. It's something they'll abuse if you leave it sitting around for that, but that's just the argument for getting rid of it. How does the ability to be a petulant grouch benefit them? It has negligible monetary value and causes PR damage. It's a footgun that nobody needs and only fools want.
And if they're actually the cartoon villains it would imply, rather than just banal petty autocrats carelessly fooling around with a toy they deserve to have taken away from them, then we should maybe less be saying "it makes sense that they would want it this way" and more be sticking their heads in a guillotine so we can show the children the proper way to resolve a dispute with a tyrant.
In neither case should a law like that remain on the books.
Looks like it's complicated. The video has some theories.
- Bloomberg has a similar investigation which is deeply undercut by the GamersNexus video. GN got to see the labs; Bloomberg got their access revoked, so theirs is an empty video, and they want the views.
- The video pulls no punches about anyone, and Bloomberg has an NVIDIA-sponsored section dedicated to them.
- There's no other source which recorded POTUS' words, and maybe they don't want those words to be widely available, the video argues.
- Lastly, they wanted a licensing fee for those 30 seconds in exchange for leaving the video alone.
So, when you're a beancounting billionaire corporation, you can have your reasons to go after a bearded guy who manages to do a better job and make you look bad.
That's precisely the "petty autocrats carelessly abusing a footgun" scenario. They've made themselves look bad for negligible benefit while harming innocent people. It's the argument for taking it away from them.