The example.com/mod2 go.mod does not in fact affect version resolution, because it's not even fetched. However, it does affect the example.com/mod1 go.mod, which in turn affects version resolution.
This doesn't help with the problem you are describing, but it still has value from a security point of view, because example.com/mod2 truly doesn't matter except to the extent its requirements were already checked into example.com/mod1, which you do need to trust.
If you try to "go build" or "go test" something in example.com/mod2, since Go 1.17 you actually do get an error, as if it were not in your dependency tree at all. You need to "go get" it like any new dependency.
As explained in the post, if a transitive dependency asks for a later version than you have in go.mod, that’s an error if -mod is readonly (the default for non-get non-tidy commands).
I encourage you to experiment with it!
This is exactly how the “stricter” commands of other package managers work with lockfiles.
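To make that concrete, here is a hypothetical go.mod for the scenario above (module paths and versions invented for illustration):

```
// go.mod of example.com/mod1, the dependency you import directly.
module example.com/mod1

go 1.17

// mod1's author recorded this requirement when they ran `go get`;
// your build reads this line rather than fetching example.com/mod2's
// own go.mod.
require example.com/mod2 v1.2.0
```

If your own go.mod pins example.com/mod2 at an older version than a requirement like this asks for, the default -mod=readonly turns that into an error instead of a silent upgrade, and you resolve it explicitly with `go get` or `go mod tidy`.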
Frank does great work that is critical to many businesses, and should get funded to do it professionally.
However, donating money to an open collective is prohibitively hard for most big companies. Maybe the world should be different (or maybe not, since it would be easy for employees to embezzle money if they could direct donations easily), but that's how it works currently.
AFAICT, there is also no fiscal sponsor, so the donation matching suggested in a sister comment won't apply.
This is why Geomys (https://geomys.org) works the way it does, and why it has revenue (ignoring the FIPS and tlog sides of the business) that is 30-50x that of some GitHub Sponsors "success stories": we bill in a way that's compatible with how companies do business, even though we effectively provide a similar service (one that's 95% focused on upstream maintenance, not customer support).
I am not saying it's for everyone, or that Frank should necessarily adopt this model, or that it's the only way (e.g. the Zig foundation raises real amounts of money, too), but I find it frustrating to see over and over again the same conversation:
- "Alice does important maintenance work, she should get professionally funded for it!"
- "How does Alice accept/request funding?"
- "Monthly credit card transactions anchored at $100/mo that are labeled donations"
- no business can move professional amounts of money that way
- "Businesses are so short-sighted, it's a tragedy of the commons!"
Anyone who solicits donations should also sell overpriced books of some sort, because it’s often very easy to get even a $500 book approved as an expense where a $5 “donation” causes hell.
With the year prominently displayed, i.e. "20XX Edition", to reflect when it was current. To help people track how long it has been since they dona-bought their last copy. And so the purchase documentation justifies the repeat purchases.
> However, donating money to an open collective is prohibitively hard for most big companies.
You are absolutely correct. However, that's the mechanism that Frank has made available, and that's what the comment I was replying to was asking, so I was just connecting the dots between the question and answer.
While it might be frustrating to see non-viable options presented as ways to fund critical FOSS, it's even more frustrating to see blame effectively being placed on the maintainer; particularly because, if companies like Apple really wanted to fund this work, I'm pretty sure they could figure something out.
Anyway, looking at the model you propose, it seems like the main difference is that Frank just doesn't explicitly say "you can retain my services"? Is that all that's stopping Apple from contacting him and arranging a contract?
> if companies like Apple really wanted to fund this work, I'm pretty sure they could figure something out.
Having spent the last ~6 years in big tech consistently frustrated by the rigidity of the processes and finding clever ways to navigate (see: wade through the bullshit), this isn’t as easy as you’d hope. The problem is that someone has to spend a non-trivial amount of time advocating internally for something like this (a “non-standard process”), which generally means pinging random people across finance, procurement, and legal to ask how to deal with it, and 99% of people will just throw up their hands (especially in this case, because they don’t understand the importance of it). If things don’t fit a mold in these big companies, they fall into the event horizon and are stretched out to infinity.
Couldn’t you just go up your chain to a VP and use their backing to negotiate at that level? It might not work for random projects, but if Apple is using libsodium for security, this could presumably be pitched as an investment in their own software supply chain.
Filippo is another maintainer, of extremely similar open source software with entirely the same customer base, offering (important) advice to a peer, so I don't think policing his tone is helpful here.
I know who he is and what he does. I think we probably disagree on whether that makes the comment in better or worse taste.
Otherwise, I agree with him, and am genuinely curious whether the stopping factor here is maintainers like Frank simply not saying "you can email me to retain my services".
> if companies like Apple really wanted to fund this work, I'm pretty sure they could figure something out
A reminder that companies are not a hive mind.
Many people at Apple surely would love to funnel piles of money to open source. Maybe some of them even work in the Finance or Procurement or Legal departments. But the overwhelming majority of Apple’s procurement flow is not donations, and so it is optimized for the shape of the work it encounters.
I bet there are plenty of people working at Chick-fil-A who wish it was open on Sundays. But it’s not ~“blaming the user” to suggest that as it stands, showing up on Sunday is an ineffective way to get chicken nuggets.
The idea that donations are the only way they could fund this work is what I was talking about. I'm sure Apple has various contractors and other forms of employees.
It's like suggesting that Chick-fil-A really does want to open on Sundays, but the only thing stopping it is customers not telling it they want it open on Sundays.
Given the increasing obviousness that there's functionally no oversight of NGOs and government funding, perhaps we just need to set up some NGOs and get government grants for these critical services.
Thanks very much for your comment. I posted elsewhere that I felt SameSite=Lax should be considered a primary defense, not just "Defense in depth" as OWASP calls it, but your rationale makes sense to me, while OWASP's does not.
That is, if you are using SameSite Lax and not performing state changes on GETs, there is no real attack vector, but like you say it means you need to be able to trust the security of all of your subdomains equally, which is rarely if ever the case.
I'm surprised browser vendors haven't thought of this. Even SameSite=Strict will still send cookies when the request comes from a subdomain. Has there been any talk of adding something like a SameSite=SameOrigin value? It seems weird to me that the Sec-Fetch-Site header clearly distinguishes site from origin, but the SameSite attribute does not.
Browser vendors have absolutely thought about this, at length.
The web platform is intricate, legacy, and critical. Websites by and large can’t and don’t break with browser updates, which makes all of these things like operating on the engine in flight.
For example, click through some of the multiple iterations of the Schemeful Same Site proposal linked from my blog.
Thing is, SameSite’s primary goal was not CSRF prevention, it was privacy. CSRF is what Fetch metadata is for.
> Thing is, SameSite’s primary goal was not CSRF prevention, it was privacy.
That doesn't make any sense to me, can you explain? Cookies were only ever readable or writable by the site that created them, even before SameSite existed. Even with a CSRF vulnerability, the attacker could never read the response from the forged request. So it seems to me that SameSite fundamentally is more about preventing CSRF vulnerabilities - it actually doesn't do much (beyond that) in terms of privacy, unless I'm missing something.
Oh, thanks. I learned something new. Never knew that different subdomains are considered the same "site", but MDN confirms this[0]. This shows just how complex these matters are imo, it's not surprising people make mistakes in configuring CSRF protection.
It's a pretty cool attack chain: if there's an XSS on marketing.example.com, it can be used to execute a CSRF against app.example.com! It could also be pulled off via a dangling-subdomain takeover or open subdomain registration.
It's why I like Sec-Fetch-Site: the #1 risk is for the developer to make a mistake trying to configure something more complex. Sec-Fetch-Site delegates the complexity to the browser.
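A sketch of what delegating that complexity to the browser looks like in Go (function names are mine; the header values are the standard Fetch metadata ones): reject state-changing requests unless the browser says the initiator was same-origin or user-triggered:

```go
package main

import "net/http"

// allowedFetchSite decides whether a Sec-Fetch-Site value may perform
// a state change. Rejecting "same-site" (not just "cross-site") is the
// strict choice that also covers the compromised-subdomain scenario
// from upthread, since a sibling subdomain reports as "same-site".
func allowedFetchSite(v string) bool {
	switch v {
	case "same-origin", "none": // "none" = user-initiated (address bar, bookmark)
		return true
	default: // "same-site", "cross-site", or anything unexpected
		return false
	}
}

// crossSiteBlocker is a hypothetical middleware sketch. Note that a
// missing header means an older browser or a non-browser client; this
// sketch lets those through, which a real deployment would need to
// decide on explicitly.
func crossSiteBlocker(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.Method != http.MethodGet && r.Method != http.MethodHead {
			if v := r.Header.Get("Sec-Fetch-Site"); v != "" && !allowedFetchSite(v) {
				http.Error(w, "cross-site request rejected", http.StatusForbidden)
				return
			}
		}
		next.ServeHTTP(w, r)
	})
}
```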
You don’t, but remember you monitor your own keys: if you know you didn’t upload a poisoned key and the log refuses to serve a key preimage for your email, you’ve caught it misbehaving.
No, the point of the Merkle tree inclusion proofs and of the witness cosignatures is precisely that the operator can't show a different view of the log to different parties.
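For illustration, here is a sketch of RFC 6962-style inclusion-proof verification in Go (function names are mine). Anyone holding a leaf and its sibling hashes can recompute the signed root, and witnesses cosigning a single root are what rule out the operator showing different views to different parties:

```go
package main

import (
	"bytes"
	"crypto/sha256"
)

// RFC 6962 domain separation: leaves and interior nodes are hashed
// with distinct prefixes so a leaf can never masquerade as a subtree.
func leafHash(data []byte) []byte {
	h := sha256.New()
	h.Write([]byte{0x00})
	h.Write(data)
	return h.Sum(nil)
}

func nodeHash(left, right []byte) []byte {
	h := sha256.New()
	h.Write([]byte{0x01})
	h.Write(left)
	h.Write(right)
	return h.Sum(nil)
}

// verifyInclusion checks that the leaf hash at `index`, in a tree of
// `size` leaves, recomputes the signed `root` when combined with the
// sibling hashes in `proof` (the RFC 6962 / RFC 9162 algorithm).
func verifyInclusion(index, size uint64, leaf []byte, proof [][]byte, root []byte) bool {
	if index >= size {
		return false
	}
	fn, sn := index, size-1
	r := leaf
	for _, p := range proof {
		if sn == 0 {
			return false // proof longer than the path to the root
		}
		if fn%2 == 1 || fn == sn {
			r = nodeHash(p, r)
			for fn%2 == 0 && fn != 0 {
				fn >>= 1
				sn >>= 1
			}
		} else {
			r = nodeHash(r, p)
		}
		fn >>= 1
		sn >>= 1
	}
	return sn == 0 && bytes.Equal(r, root)
}
```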
The SKS network is append-only in aspiration. There is nothing like a Merkle tree stopping a server in the pool (or a MitM) from serving a fake key to a client. The whole point of tlogs is holding systems like that accountable. Also, the article's section on VRFs addresses precisely the user-removal issue.
A single SKS server cannot serve a fake key, only a valid key that existed in the past. This might be done to maliciously unrevoke a key. Normal PGP key integrity checks prevent straight-up forgeries.
This has not been true since Go 1.17 with the default -mod=readonly, which is why go.mod is a reliable lockfile.