Is my take on this also generally correct: that it wouldn't have been a problem had cookies and the like been designed to take the origin into account properly (and hence why it's unintuitive and catches people off guard)?
If cookies were scoped to the (source, target) pair, then that would remove one of the main motivations for CORS, yes: evil.com would not be able to get any information from bank.com by making your browser issue a request that it could not get by making the request itself server-side.
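For what it's worth, browsers did later grow something in this direction: the SameSite cookie attribute. It's not quite per-(source, target) scoping, but it addresses the same motivation by withholding the cookie on cross-site requests. A minimal sketch of producing such a header with Python's standard http.cookies module (Python 3.8+ for SameSite support):

```python
from http.cookies import SimpleCookie

# Sketch only: a session cookie marked SameSite=Strict, so the browser
# withholds it when the request originates from a different site.
cookie = SimpleCookie()
cookie["session"] = "abc123"
cookie["session"]["samesite"] = "Strict"  # never sent on cross-site requests
cookie["session"]["secure"] = True        # only sent over HTTPS
header_value = cookie["session"].OutputString()
print("Set-Cookie:", header_value)
```

With this attribute set, a request triggered from evil.com simply arrives at bank.com without the session cookie attached.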
There's a second problem CORS kinda tries to solve, which is the ambient-authority problem: services that run behind firewalls and assume that if someone can reach them, that someone should have access. If someone runs a browser behind the firewall and opens a page from the unsafe side of the firewall, that page can then issue network requests from the browser and thus end up accessing things on the "safe" side of the firewall. This is a large part of why CORS has the whole preflight complication and the rules around when preflights happen: the idea is that in this situation just making the request, without even receiving a response, is potentially damaging.

There are carve-outs for requests that could be generated anyway without CORS-subject APIs like XHR (e.g. by doing a form submission or an <img> load or whatnot); if your ambient-authority-using server responds in interesting ways to those, CORS is not going to help you...

The _right_ fix for this stuff, of course, is for services to stop using ambient authority and/or for browsers to block requests from public sites to private IPs. Unfortunately, in practice detecting "private IPs" reliably is not trivial, because it fundamentally depends on the routing and firewall topology, which the browser doesn't really know about.
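The carve-out logic above (which requests skip the preflight because a form submission or <img> load could have produced them anyway) can be sketched roughly like this. This is a simplified approximation of the Fetch spec's "CORS-safelisted" rules, ignoring details such as header-value restrictions:

```python
# Rough approximation of the CORS "simple request" rules: requests matching
# these constraints are sent without a preflight, because the browser could
# already generate them via a form submission or <img> load.
SAFE_METHODS = {"GET", "HEAD", "POST"}
SAFE_HEADERS = {"accept", "accept-language", "content-language", "content-type"}
SAFE_CONTENT_TYPES = {
    "application/x-www-form-urlencoded",
    "multipart/form-data",
    "text/plain",
}

def needs_preflight(method: str, headers: dict) -> bool:
    """True if a cross-origin request with this shape triggers an OPTIONS preflight."""
    if method.upper() not in SAFE_METHODS:
        return True
    for name, value in headers.items():
        if name.lower() not in SAFE_HEADERS:
            return True
        # Content-Type is only safelisted for the three form-ish media types.
        if (name.lower() == "content-type"
                and value.split(";")[0].strip().lower() not in SAFE_CONTENT_TYPES):
            return True
    return False
```

So a plain POST of form data goes straight through, while a `Content-Type: application/json` POST or a DELETE gets a preflight first; and if the ambient-authority server reacts to the former, the preflight never had a chance to protect it.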
Thank you for the reply! I didn't realize this actually gets at another question I've had in another context, which is: why can't a browser at least assume 192.168.0.0/16 etc. are private networks and block requests originating from nominally-public sites from being sent to those? (Or do they do that already?) This should be possible without needing to detect anything at all, right?
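The well-known-range check itself is indeed trivial; as a sketch, Python's stdlib ipaddress module already knows the RFC 1918 and similar reserved ranges. The catch is that this only works for literal IPs in the URL; most requests target hostnames, and what a hostname resolves to is exactly what DNS rebinding manipulates:

```python
import ipaddress

def is_private_ip(host: str) -> bool:
    """True if host is a literal IP in a private, loopback, or link-local range."""
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        # A hostname, not a literal IP: deciding would require DNS resolution,
        # and the answer can change between checks (DNS rebinding).
        return False
    # is_private covers the RFC 1918 ranges plus loopback and link-local.
    return addr.is_private
```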
Pretty sure some tools like NoScript already do that, but to protect against DNS rebinding (which can subvert the SOP), not as a protection against cross-origin requests coming from a public site to an internal one.
> The NoScript extension for Firefox includes ABE, a firewall-like feature inside the browser which in its default configuration prevents attacks on the local network by preventing external webpages from accessing local IP addresses.
I think just because they're afraid of breaking things? I'm honestly not sure what the rationale was for not implementing this. They do DNS pinning which helps mitigate DNS rebinding attacks, but I don't think they do anything specifically to restrict access to internally routable IPs.
They do block certain ports that are known to be problematic (25, 6667, 5222, etc.).
If you do that, I would recommend looking at writing an extension first to see if there's any way to do it without a fork (maybe using the same technique that things like uBlock Origin use).
Yeah you can almost certainly do it with an extension, but half the point of a fork would be to get the message across that they need to get their act together. (I almost certainly won't get around to it though, so this is just daydreaming.)
Interesting. It seems the "legitimate" uses are corporate? Providing a group policy or config option to disable protection against this seems like the sensible way to go, rather than increasing the attack surface of home users just because some corporate users do weird things. Although honestly, major software vendors are already happy to break so many things in the name of security that this one could just be another on top...