The backstory on this, if you don't follow web security at all, is that building public web pages to trick browsers into talking to private networks (or your loopback interface) is a surprisingly pernicious attack vector. The modern countermeasure for this is to require devices parked on private networks to opt in to the ability to mix public Internet requests with private network accesses, using CORS, which is what this is about.
You should probably be mostly thrilled about this development.
Finally, CSRF attack scenarios against outdated router firmware can be prevented.
CORS was basically ineffective in a lot of ways because it only works - by design - with newer servers that send those headers and with newer browsers that actually respect the headers rather than simply ignoring them. It was also ineffective for older servers from the pre-CORS age. The "never break the web" mentality didn't work out for this scenario.
Looking forward to this! Finally the LOIC and routersploit web implementations are ineffective. Now drop support for ftp in the web browser and we are good to go.
> Firefox can still be abused to DDoS via ftp urls
Firefox deprecated FTP in 2015, required it to be manually re-enabled from April of last year (FF88), and completely removed it this July (FF90).
This is so true. Especially since the pandemic, the sheer number of CVEs is hard to keep up with and to evaluate for whether they're relevant to your own tech stack or not.
A client of mine has an FTP site, and their customers access it. Those customers have an IT policy which does not allow them to install other software, for security reasons.
Thus, keeping an old version of Firefox around is what their customers do.
(Yes, this is insane and bizarre beyond belief. The security policy is working against security, and the fact that the security policy doesn't care about an old browser is insane. Yet there it is.)
2)
I have a client with employees around the world. They are usually very secure. However, these employees seem to be the complete opposite of computer literate. Every step they take, every task assigned, is accompanied by PDF files and wiki walkthroughs of "here is menu item X, click this, then menu item Y", along with screenshots, and enlargements of menu items.
All their training is rote. They don't know how to use software, only how to click this, then that, as per the pictures and docs, then enter the report in the form that pops up.
If anything deviates -- tech support.
I honestly don't know how it is possible to find people capable of doing a job with diligence, competence, and intelligence who nevertheless require this level of hand-holding, yet I see it myself, through this client, constantly.
Like I said ... strange and bizarre.
While I am sure this client will eventually manage to upgrade its staff, they have been researching clients, testing them, re-working all documentation, and even rolling out 'test upgrades' for employees!
And of course this takes time, naturally they are short staffed, and it requires management buy in at every step.
And getting people to modify about:config? That's way, waaay too complex. So they're stuck on an old browser, which they aren't supposed to use for anything but FTP, yet these employees are the sort that call a browser "google", and don't know the difference between firefox and chrome.
So you can be sure they're using an old version.
--
Again, I don't blame Mozilla for this.
This is the sort of stuff which makes me think 'maybe people need a license, like a driver's license, to be on the internet, they're too dangerous otherwise'.
But of course, as I said initially... not easy or realistic to roll out.
Now that I think of it, though, maybe it should be "businesses need a license to be on the internet". The important part here being: if you have constant breaches, and your infra gets used to launch endless attacks, you get fined until you go out of business.
Correct me if I'm wrong, but the browser still sends the request. The browser checks the response's CORS headers to see whether the response can be used, but there's potentially an attack vector even when the response is ignored.
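To make that gap concrete, here is a minimal sketch (hypothetical private address, and assuming classic CORS behavior without the new Private Network Access preflight): a "simple" cross-origin request still goes out on the wire and its side effects happen; the browser only withholds the response from the page. The mandatory preflight this change introduces is meant to block such requests before they are ever sent.

```typescript
// Hypothetical device address. Under classic CORS this request is transmitted
// and any side effect on the device occurs; the browser merely refuses to hand
// the (opaque) response back to the page.
fetch("http://192.168.1.1/reboot", { mode: "no-cors" })
  .then(() => console.log("request was sent; response is opaque"))
  .catch((err) => console.log("network error, or blocked before sending", err));
```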
I'd be a lot more likely to be thrilled by it if the browsers had a persistent per-page or per-site setting to disable these kinds of things.
It gets annoying when a page I write, loaded onto my tablet on my LAN's WiFi, cannot talk to devices I own on my LAN just because I loaded the page from my server that is on the public Internet.
PS:
> The modern countermeasure for this is to require devices parked on private networks to opt in to the ability to mix public Internet requests with private network accesses, using CORS, which is what this is about
I predict that most IoT devices won't have a way to configure this.
If the manufacturer intended it to be controllable from some web app on their site, they will opt in to control from anyone. If the manufacturer only wants it controlled from their mobile app, they will explicitly opt out of web control, if that is even possible.
Just to be clear: the people working on PNA thought about how likely it was for IOT devices to support this, and the default behavior for most devices is expected to be fail-closed.
This is why one of my security mantras has become: if it is not secure enough to connect to the open Internet, it is not secure at all.
That doesn't necessarily mean you connect it to the open Internet, but it means you don't leave everything inside wide open because "oh there's a firewall." It also means buggy vulnerable IoT (Internet of Targets) stuff has to be dealt with.
Firewalls are almost security theater. They're just a basic precautionary thing. Same goes for virtual and zero trust networks. Systems must be secure, period.
I think the concern is about even clicking a link, which to my knowledge typically hasn't fallen under implicit CORS concerns. The browser will automatically assume that unless a private origin server (i.e. one within a private address space) explicitly allows requests from a public source, the navigation should be prevented.
It's always a challenge in browser design, but basically this is just another case of killing valid use cases because some servers don't follow the spec (i.e. GETs with side effects).
> killing valid use cases because some servers don't follow the spec
There are probably 25 IoT devices in my home, and more than half of them have a magic GET request with side effects. For example, my lights turn on just by clicking this link: http://lighting.londons_explorer/cm?POWER%20ON
A malicious web page could redirect me to that URL and force my lights on with no input from me. I bet some of the devices allow firmware updates with the same method.
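For illustration, a sketch of how little the attacking page needs (the URL is the one from the comment above; everything else is an assumption). No CORS machinery is involved at all, because the page never tries to read a response:

```typescript
// Loading the device URL as an image is enough to fire the GET and trigger
// its side effect. The request starts as soon as src is assigned -- no user
// input, no DOM insertion, no response handling required.
const img = new Image();
img.src = "http://lighting.londons_explorer/cm?POWER%20ON";
```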
I'd wager the ratio of people who have at least one item of IoT junk to people who have a legit setup that requires allowing cross-site access to a local resource from a remote one is probably ten million to one. Who are these "everyone else" controlling their LAN devices from a public website who are going to suffer an evening setting up CORS?
Also services using split-horizon DNS: i.e. a service that is publicly available, but when used from inside, it is not routed through the public interface but resolved directly to its internal IP.
I'm pretty sure I lack the experience of administering a huge network to spot the (obvious?) problem, but for smaller setups I've never understood why you would do this anyway and not treat it as "external first": put the service in a sort of DMZ and treat your internal users as "quasi-external" ones by routing them into the DMZ via another means...
Of course this has the downside if you have actual 'internal only' stuff, but that could be separated from the split stuff... Just too much work with years- (decades-) old setups?
At bigger, established corporations, I've seen internal web apps that need external access available set up this way, but when accessed via the public IP, a second auth factor is required. IIRC, MS Exchange's web interface was the most smooth, where the only difference to the end user was an additional field for the MFA code.
If an organization is using the "BeyondCorp" approach, it doesn't seem relevant, but that's tough to bolt onto large, complex existing environments IMO.
Edit: just to clarify, the advantage is similar to what "BeyondCorp" gets you - end users just need to remember the one URL, regardless of where they're connecting from.
In a DMZ it still has an IP from a private range, ergo it's still treated as internal -- it doesn't matter whether it is in the same subnet as your users, only whether it falls into the range that the browser considers private.
Maybe our understanding of DMZ is different; for me it's not necessarily with an internal IP. So to clarify, what I meant was hosting it on-premise, but only accessible via its public IP (as the service is public anyway), which shouldn't be slower if your physical route isn't any longer.
I use split horizon networking to, eg, provide public DNS entries to a public canary server. Then on the private network a different DNS server provides correct (LAN-only) addresses. This way I can see if/when private hostnames are leaked to some entity.
I have about two dozen devices on my private LAN so I wouldn't consider myself to be "big" or "enterprise".
The setup is fairly unusual though because most users (and unfortunately many developers) lack the technical know-how for it.
I don't think it's because of this one person, but rather the masses that have crap hardware and software installed everywhere. It's a global problem; not everyone is capable of securing their own network.
Although I agree with this browser change, I highly doubt it will fix all crap hardware. There's a limit to how much you can apply band-aid solutions to something that seems to be trying its hardest to be insecure, and non-idempotent GET requests are a bit beyond a minor oversight.
Note that GET requests should be side-effect free. Not just idempotent.
That means a GET request that turns your lights on, or opens your garage door (which is idempotent) is still wrong.
> Note that GET requests should be side-effect free.
Not quite. If a GET request is side-effect free then it won't be logged (since that is a side effect).
GET requests aren't supposed to modify state. Logging is a side-effect but it isn't usually considered to be stateful. Changing the state (on/off) of a lightbulb is definitely against the standard requirements for a GET request.
Logging is stateful, though the state usually isn't observable via HTTP requests. Of course the same is true of the light bulb, unless another API is provided to read its status, and a similar API could be provided for the log files. Or a script could toggle a light bulb in response to the logging.
Perhaps a better way to express this is that user agents are permitted to turn one GET request into N GET requests (N >= 1), and can also issue GET requests without user interaction (e.g. for preloading). When this happens the system should still meet its (customer / end-user) requirements. The requirements related to logging are that every request is logged, so it makes sense to record each GET request separately. The requirements for the light bulb are that one user interaction (not one GET request) equals one state change, so updating the state in response to each GET request doesn't meet the requirements. Even if the API were explicitly "turn on" or "turn off" rather than "toggle" you still wouldn't want the state to be affected by preloading, and you could get odd results if opposing requests were repeated (interleaved).
> DELETE /idX/delete HTTP/1.1 is idempotent, even if the returned status code may change between requests:
So requesting to open the garage door multiple times which results in an open garage door in the end is an idempotent request, even though after the second request the response is "I am already open!"
Now, a request to toggle the state of the garage door would not be idempotent. The state of the system is different if you call it an odd or even amount of times.
In general, an idempotent method is a method such that calling it one time is the same as calling it n times in a row for any n > 1. However, idempotency doesn't require that calling the method one time is the same as calling it zero times, although such a method isn't excluded from the definition either. So a stateless method is necessarily idempotent, but an idempotent method isn't necessarily stateless.
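To make the distinction in this sub-thread concrete, a small sketch with invented names: `open` is idempotent but still has a side effect, `toggle` is neither idempotent nor safe, and only `status` is the kind of side-effect-free operation a GET should map to.

```typescript
type DoorState = "open" | "closed";
let door: DoorState = "closed";

// Idempotent: calling it once or N times in a row ends in the same state,
// but it still has a side effect, so it doesn't belong behind a GET.
function open(): DoorState {
  door = "open";
  return door;
}

// Not idempotent: the final state depends on how many times it runs, which
// is exactly what preloading or repeated GETs would break.
function toggle(): DoorState {
  door = door === "open" ? "closed" : "open";
  return door;
}

// Safe: no side effects at all -- the only thing a GET should do.
function status(): DoorState {
  return door;
}
```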
Nothing will fix all crap hardware, so we have to live with it and solutions like this prevent crap hardware from being accessed and abused even if it remains crap forever (since it will remain crap).
Just like not everyone is capable of handling explosives or building road-safe cars or most other stuff by themselves. We have rules for this, laws, required checks and so on. Start stopping the import of crap, like we do with illegal firecrackers, or as was done in the past with electronic devices that interfere too much.
Oh, I know! How about a browser that does not allow direct navigation from external to internal addresses? Ah, wait.
It still does not quite work for services that have a public IP address. So where do you go from there, a new protocol that has capability handling and external access is disabled by default?
Forcing companies to issue recalls for buggy hardware and firmware could probably do the trick. Note the expense, and the fact that discovered issues are insufficiently publicized and slow to get fixed.
Are you suggesting regulating IOT devices for better security? Who the heck could you trust to do that? The FCC? The Chinese government? Consumer rights in the USA is barely a thing, and there is no competent bureaucracy for this sort of auditing. Creating some standards body and coming up with evolving criteria and hoping manufacturers will buy in seems like a multi year, if not multiple decade, process...
In the meantime Google can single handedly monkey patch this situation within a few months and force manufacturers to catch up with the next product cycle. While a less ideal route, this seems far likelier to produce actual results within our lifetimes.
I find it really interesting how the US government constantly solves very tricky real-world problems that impact our lives, yet people have close to zero trust in it. Which isn't to say you're wrong here.
I think it's because when stuff works it looks simple. There isn't an obvious difference between a mound of dirt piled up in days and one that's carefully compacted as it's constructed. At least until you want to build something on top that actually lasts for 50 years. Building stuff to last takes time, and nobody is around to see you succeed.
Back to your point actually fixing the underlying issues with IOT security is worth it long term even if it hypothetically takes 20 years to get it correct. At the same time moving quickly and patching one of 100,000 problems can still be useful.
> how the US government constantly solves very tricky real world problems that impact our lives
What are some recent examples? Especially dealing with technology?
I can think of a bunch of counter-examples where the government did not do a good job of regulating:
* Rural broadband failed
* Net neutrality failed
* Healthcare.gov was a fiasco
* Wireless spectrum auction never got us municipal/rural long-range wireless
* NASA's duties have largely been outsourced to private actors
* State DMVs are a shitshow
* Election integrity failed
* They can't figure out what to do about online disinformation
* Warrantless wiretapping: both unconstitutional yet ineffective, as in 9/11 and the lack of data sharing between agencies
* Foreign military misadventures, our traditional forte: Afghanistan, Iraq, both abysmal failures
* Healthcare: a joke compared to every other developed country
* Education: pathetic and getting worse
* Social programs: Welfare, what welfare? inequality and homelessness getting worse every year
* Infrastructure: crumbling
* Clean water: only for rich white people
* Immigration: heh
* Covid: lol
* Renewables: haha
* Nuclear: let's pretend it's not there
On every major policy front, the US government has been a disaster for decades. I'm no libertarian by any stretch, but our government is a complete shitshow compared to any other developed democracy. We have neither the leadership competence (decisionmakers and legislators) nor the engineering talent (career civil servants) to tackle something as diverse and nuanced as IoT security, or arguably, digital security in general. Give them 20 years and they might be able to catch up to 1990s netsec, and by then the manufacturers will be two decades ahead and foreign intelligence services even further beyond that.
Our government is doomed, and taking us down with it.
Almost everything the government does is extremely complicated. How about the modern coast guard search and rescue process, weather forecasting, air traffic control, making counterfeit resistant money, etc etc.
Of course it's all stuff the government does directly, like GPS, that generally works even if there were issues with version 1. Go to Healthcare.gov today and it works fine, but wow, 8 years ago there were issues. People still get mileage talking about that launch presumably because it's that unusual.
Bringing up the FCC, there's keeping the wireless spectrum clean. You can blame the government for not solving all sorts of issues, but people complain while at the same time they largely don't want state or federal government internet. Healthcare is the same issue: we apparently don't want even a public option, yet somehow the government is still on the hook.
People are always going to talk up government boondoggles because that's what's memorable. Clean water in all 155,693 public water systems in the United States isn't easy; it's a monumentally difficult task that works 99.9% of the time across a huge range of public and private organizations, managed by a huge range of different localities from tiny towns to vast cities. Of course, if people actually trusted their water then bottled water would be less popular…
Almost anything any government does is extremely complicated, and yet most of the developed ones succeed in ways ours fails at. Why are we unique? Countries poorer than us, less dense than us, smaller than us still manage to provide many basic services, rights, protections, and guarantees that ours does not. This is American exceptionalism gone awry.
Those things you mentioned aren't recent developments. Yes, there was a time when our government was capable of producing good output. What happened? Why are we still judging today's government by its successes of decades past...? Most of what you mentioned is literally last-century tech. The world has moved on; our government has not.
> You can blame the Government for not solving all shorts of issues, but people complain while at the same time they largely don’t want state or federal government internet.
Maybe there's one class of issues that government can't deliver on because the public mandate isn't quite there yet, like single-payer healthcare. But there's another class of issues that the public DOES want, the government already wrote the laws and allocated the budget for, and then did absolutely nothing about (like rural broadband grants basically going to corrupt telcos, with zero real enforcement). That has nothing to do with the lack of public will, just sheer incompetence and corruption.
Then there's the outright unconstitutional things, like warrantless wiretapping or drone assassinations of US citizens... to say nothing of recent developments, like Roe v Wade.
It's not that our boondoggles are more visible, it's that we fail at providing basic services for a huge portion of the population -- things that most other developed democracies can provide without much issue or controversy. By that measure, we fall far short.
It's generally possible to find individual governments that do just about any specific thing better than the US gov, but that's not to say each of those countries is better in every way. The best healthcare isn't in the country with the lowest crime and least corruption, down a list where the 2nd country is then second on every criterion. Further, only looking at US failures ignores how other countries fail in different ways. The UK's land situation is seriously messed up, and yet we rarely consider that when comparing governments. I am not going to say the US is #1, but it's a long way from abject failure.
Also, when you exclude, say, Nigeria and in fact most countries, then every remaining country is going to seem worse simply because you just arbitrarily raised the standards. It's not US exceptionalism to simply say few countries or groups of countries have landed anything on Mars, which is freaking difficult. Sure, providing great healthcare is more important, but it's also something very few countries have done well.
Detracting from success by looking at unrelated failures misses my argument; at best it speaks to the likelihood of success, not the possibility.
(Sorry for the late reply, didn't see your response until now.)
So what do you think is a fairer way to measure governments? Ratio of important successes to important failures? A matrix of weighted policies and implementation scores?
You'd probably end up with something similar to the UN's human development index (http://hdr.undp.org/en/composite/HDI), in which the US ranks #17, behind Norway, Ireland, Switzerland, Hong Kong, Iceland, Germany, Sweden, Australia, Netherlands, Denmark, Finland, Singapore, the UK, Belgium, New Zealand, and Canada. All of those are perfectly livable countries. My only criterion was "developed democracies", and Hong Kong isn't even much of a democracy anymore. I don't think that's an unreasonably high bar.
Our government is just on the low end of mediocre compared to other developed democracies, at least by the metrics I can think of.
If you can think of a better metric, I'm all ears.
Been following this for a while; I'm really surprised it's taken so long for browsers to begin to address this problem. It's kinda crazy that random websites can just send any HTTP request they want to your private network or local PC with no restrictions; that's a pretty huge attack surface to expose to the public internet.
Even this measure by Chrome is extremely limited, as it sounds like they're only blocking insecure (HTTP) sites from making requests to your private network. HTTPS sites are unaffected (for now).
This is infuriating. Google is once again severely hampering openness and accessibility in the name of security.
The web is more than just the big CDNs. It is not all about your business model. If I can not use your browser to open a socket to a routable host and render the data, then it is not a web browser anymore. But I don't think Chrome has been one for awhile.
Google's logical conclusion is to only allow you to connect to a whitelist of sites that they have control over. For that matter, domain names themselves are obsolete, we should just have AOL-style keywords like 'amazon' and 'facebook'. Only a poweruser weirdo would want to have their own website.
Users could also be tricked into adding their own self signed certificates, or having their own private DNS. It would not at all surprise me if that is banned soon as well.
I can't count the number of things that this would break, especially VPN-accessible intranets.
Having users register with some outside service for getting a private network SSL certificate is a no go. That is like saying that a platform is open because submitting to their app store is free. If I have to rely on an outside authority then it is no longer federated, and it is no longer the web. Which is fundamentally what they want.
> It's kinda crazy that random websites can just send any HTTP request they want to your private network or local PC with no restrictions; that's a pretty huge attack surface to expose to the public internet.
I don't think there will be any restrictions on what advertising and tracking CDNs a webpage can make requests to, so long as they have their certificates in order and are not on a private network. I'm sure that when I go to log into my bank it will still happily connect to 12+ different analytics services.
I do think it would be nice to expose more control to the user for namespaces of what is allowed to be fetched from what context, but that might hamper advertising and content consumption.
So would this be like creating a public webpage called "help troubleshoot my router", and having that webpage serve up malicious JavaScript which attempts to attack my router's local web server?
Will this race to secure the internet, mostly by applying ever more complex band-aids, end up completely discouraging small entities from running their own public resources?
It's already next to impossible to run a public home server. Is it likely to become completely impossible?
It will keep getting harder to do things independently online. The creative, individual empowerment phase of the internet is winding down. It is being gobbled up and calcified as an apparatus for rent extraction, spying and manipulation.
From a historical perspective, the complete opposite is true. It has never been easier to do things independently on the web than it is today and by huge orders of magnitude compared to even 20 years ago. The panoply of mature open source software and tooling at the disposal of a would be webmaster is so enormous that it now actually pisses people off when programmers open source their own library or framework code.
As far as the rent extraction apparatus, it is enumerated almost entirely by a population that simply did not exist online back in the imagined halcyon days of "the independent internet". The masses didn't come online to tap through homespun webrings, they don't care about that stuff, these shiny hyper-optimized manipulation machines are what keep the masses online in the first place.
>I have friends who set up their own web pages in the late 90s armed with Notepad, free hosting supplied by their ISP, and some online help.
>No chance at all of most people being able to do that today.
I used to save my notepad files on a floppy drive. No chance at all of most people being able to do that today. Just because things are different doesn't mean worse. The exact skills and methods used 20 years ago are not a good metric for something as nebulous as "do things independently online". The "things" you can do are going to change and evolve.
20 years ago you saved notepad files on removable media. Today you still do the same. The underlying technology may have changed, but the interface remains the same.
> I have friends who set up their own web pages in the late 90s armed with Notepad, free hosting supplied by their ISP, and some online help.
> No chance at all of most people being able to do that today.
What are you talking about?
You can still do all that. There's still free hosting available, just not through your ISP.
You can still hand-edit HTML with Notepad and publish it to a free web host.
> It's great that you know what a build system and github and dependencies are, but most people don't.
All the added complexity of web deployments these days is not required for a simple personal web page. You don't need JavaScript, a build system, dependency management, etc. The plain HTML written in 1998 still works today. Even the old HTML frames still work, according to a quick Google, even though they haven't been used by any site in well over 15 years.
> You can still hand-edit HTML with Notepad and publish it to a free web host.
Exactly. The last website I made, for a festival early last year, I wrote by hand with Notepad++. It ended up being 14 HTML files (7 pages in 2 languages) and a couple of CSS files, plus a lot of reading about current CSS standards. Initially I started with WordPress but couldn't find a decent theme to do the layout we wanted, so I scrapped it after a couple days of trying to bend several themes to my will.
Not much different than how I did it in the 90s... except back then I couldn't just DuckDuckGo to find thousands of pages with HTML/CSS help.
You can still do all that with any competent shared hosting provider. (I guess "free, supplied by ISP" is rare nowadays, but you'll find other "free at some subdomain" offerings)
Just because all these other options exist doesn't mean you need to use them. Plenty people I know still handwrite their HTML.
Somehow people got to the point of thinking that the only way to host a website is renting a VPS and setting up everything themselves, and that's just not true. (and even if you do that, there's a range of how complex you need to make it)
But the maintenance bar has been raised. It is not the case anymore that you can set up a web server as a hobby and leave it running untouched for years. Now you won't connect at all in 2 months if you don't set up letsencrypt correctly. And their API and root certificate also change regularly, so you must keep software up to date. If you use some cross-domain stuff to interact with outside services from the browser, that now breaks every year or two. Things like that add up.
> And their API and root certificate also changes regularly so you must keep software up to date.
Their API deprecated one method with a security risk once and their root certificate is none of your concern if you run a webserver (and it also only changed once and not "regularly"). Their certificate chain is an issue that may concern you, but if your software is working correctly then it should just serve the chain that you get with a new cert.
That's a lot of ifs and buts, just to keep up with the last decade's implementations. For a simple blog that maybe ten people a month read. Good luck keeping up on developments of the next one...
Whether it's lets encrypt or Google or Apple or Facebook, the internet has largely moved away from a culture of small time hackers operating on barebones standards to super complex implementations gatekept by a few huge companies with infinite resources and conflicting values. They want to curate the web and monetize it, not lower the barrier to entry. You are free to use their ecosystems to produce things they can share revenue from, but everything else will only keep getting harder... what even is the web anymore but marketing landing pages redirecting to walled gardens.
That dichotomy is false, lots of levels between "self-host everything" (and deal with the pain of maintenance) and "walled gardens". For "just a blog", good old shared hosting works just as well as it did in the 90s/00s.
It used to be that a web server was something you could almost auto-deploy. Then it became a series of increasingly complex steps as various 'security' measures were employed. You can do these things yourself, and they aren't that hard, but they were never made easy in a way that didn't imply a lot of specific technical know-how. I kept up with it for a while, but eventually everyone has to deal with the real world and its time constraints, and the 'security' of today provides undeniable barriers compared to the yesteryears of the web.
I'm not convinced this browser change is a good thing - I think the issue is the aforementioned crap on personal networks, not the ability for a browser to go there. If your security is endangered by your shitty dishwasher, either don't connect it, or, since you are doing the connecting, put it on an isolated private network. This move is encouraging bad security practices while at the same time just throwing another roadblock in the way of legitimate uses of 'home' software.
You do realize that the managed website hosting of the late 90s/early 00s still exists today, right?
You don't have to stand up your own servers in your favorite cloud provider and become a Cloud DevOps expert. You don't have to manage deployments, dependencies, etc. You can still pay $3/month to get shared hosting on DreamHost, upload your HTML file, and it gets served. No fiddling with nginx, no operating system patching, etc.
Even if you don't want to pay $3/month, I'm sure there are still hosts that will give you a few megabytes of storage and a couple gigabytes of traffic for free.
Hmm I'm not sure most people mean html files when they say 'web server' - I used to run a mail server, a couple websites, a blog, a couple wikis, with auth integration, and a couple custom web apps with live two-way messaging capabilities and associated backends...
You don't need much fancy for a plain page, no, but that's also not really what I'm talking about. I still sometimes use local services on my lan, with web interfaces, which are NOT routers, dishwashers, etc. - think file or media management.
> Then it became a series of increasingly complex steps
honestly, what series of increasingly complex steps? The main thing today is an expectation of HTTPS, and that is added complexity, but also something you can auto-deploy today and lots of tutorials available. E.g. I'm fairly sure I've spent more time of my life on .htaccess and nginx redirect syntax than on HTTPS, despite starting early with Let's Encrypt and not choosing the most-automated solutions - and in other setups "add HTTPS to a domain" is literally a line of config file, with the webserver doing the rest. But that's beside the point I made:
This is assuming that you actually are deploying something to a server, instead of making use of the myriad of ways of having that be someone else's problem. How are those "essentially" not true options?
"We can trust users and random developers to do the right thing" is understandably not the security position browsers take, so this needs some solution eventually. What the right tradeoff is is a good question. (i.e. IMHO there should be clear ways for devices to opt-in to being accessed)
(FWIW, their servers use their own certificates, so in fact I had to spend some time today updating the root certificates on a web server so it could fix certificate renewal.)
What does the warning say? "You might not be downloading the file you think you are"? That just seems like useful, accurate information that you probably want to be aware of.
You're lucky. Or skilled. I have half a dozen websites that broke, with maintenance, in the same period. Often because of SSL issues configured by someone else. Plus my own screw ups. It's not impossible to do right but it's definitely not trivial. Even if you configured everything right, something up chain will probably break, in time...
How can something that literally costs $0 be a ponzi pyramid?
And why should a web server need maintenance? I mean, just search Google for your favorite web server software and "CVE" and you'll find plenty of reasons.
I use a CDN (namely the one on Amazon AWS) to provide HTTPS for my website. That knocks out two things at once: fast distribution across the globe, and security. Do you wish to abstain from using a CDN?
Let's Encrypt is easily automated with certbot, I've been running my home webserver for over 10 years with Debian and NixOS, without touching it apart from stable OS version upgrades.
Let's Encrypt needs internet access, something I prefer not to have for various (rather dated) systems on my network. Worse several things that ran on file:// in the past have been blocked by most browsers so even having to set up a server that then needs a valid cert is a painful complication over just clicking index.html and letting the browser handle every request locally.
For local-only access, you could run your own CA. I found gnoMint to be quite easy to use to generate and manage certificates. It does everything in an SQLite database. I do this for OpenVPN, but you could do it for web services just the same.
> As far as the rent extraction apparatus, it is enumerated almost entirely by a population that simply did not exist online back in the imagined halcyon days of "the independent internet". The masses didn't come online to tap through homespun webrings, they don't care about that stuff, these shiny hyper-optimized manipulation machines are what keep the masses online in the first place.
I agree, but I think people arguing over that would have expected to maintain the same ratio as the internet population grew. Frankly utopian IMO but one should dream, no?
I agree with you, with a couple exceptions. I think it's an error (and a frequently made one) to treat "the masses" as being distinct from the curious, virtuous hackers trying to build with the tools they were given. The difference in many cases is just how much they were encouraged to create, either by the example of others or by the tech itself. If we always treat consumers as strictly consumers instead of fellow humans in a process of mutual improvement... that's what we'll tend to get. Why do we on the one hand claim to have a need for more STEM workers or more competency in this or that, then on the other hand sell devices that coddle users and waste their time? Productivity tools are often behind big paywalls. I won't enumerate all the ways mainstream computing sucks, but our manipulation machines actively discourage people from engaging more positively with tech, not to mention each other. It's not very encouraging to me that Open Source is growing in absolute size when it is struggling so much in terms of mindshare.
The other thing to address is about being "independent online". Many of the things that make it so easy to create a website, for example, are made easy at a cost, i.e. vendor lock-in and rent for continued service. Or github will host your code but also use it for their own purposes, training your AI replacement. Those are ultimately good things to have around but do follow the trend of being cages-with-benefits --they increase dependence on central infrastructure.
TLS certificates used to cost a lot of money, now they're free. Pretty much all relevant web frameworks and technology stacks are published under FOSS licenses.
Nothing stops you from running your own web server with either whatever is the current state of the art web tech or whatever you prefer to build yourself.
Well, the web wasn't as encrypted before. And then when it was, the cost of the certificate wasn't a big deal in comparison to the maintenance effort of constantly rotating certificates, as you'd just buy a certificate that lasted for years and years, which isn't even allowed anymore. SSL with its massive set of "authorities" that are all granted way too broad and yet interchangeable powers with keys that both have to be rotated with pre-determined expiration dates and yet also must be shared with the client is just barely a solution to the original problem :(.
But of course, the truth is that the web was never easy: it was just naive. Most (NOT ALL: some categorically only are protecting the interests of advertisers or rights holders) of these security bandaids and limitations are fixing actual problems with the web that were always there... developers just didn't know about them or didn't realize the ramifications. It would be better to have solved some of these things with solutions that are more elegant, and the lack of a definitive guide for "what all you should and should not do" sucks, but mostly the web is just banning stuff that never should have existed in the first place :(.
It's not always possible to get certificates from Let's Encrypt for local network only services. In a big corporate environment jumping through the hoops necessary to deploy keys or get things into a DMZ can be near impossible. Even if you don't have those issues it is still one more thing to learn and setup. All these things pile up. Try setting up a basic e-mail server on the modern Internet and compare that to 20 years ago.
> It's not always possible to get certificates from Let's Encrypt for local network only services. In a big corporate environment jumping through the hoops necessary to deploy keys or get things into a DMZ can be near impossible.
You're not wrong, but if you can go through the paperwork to add a CNAME to external DNS, your team can use DNS validation to verify host record ownership for LE/ACME:
If your company has network admins smart enough to deploy segmentation rules, they are also probably smart enough to set up an internal CA and deploy the certs to everyone's root store. If not, that stinks.
Smallstep makes a basic CA for free that is ACME compliant, meaning you just need to change the URL for Let’s Encrypt on your server and restart. Microsoft also has a CA included with Windows Server if you’re using that which works fine (although it uses a different API to get certs).
Only if you allow it to become so; there's no rent-seeking on "my" part of the net. Just avoid any and all devices, programs and "services" which exhibit such behaviour, self-host where you can or use decentralised services if you can not, don't use any devices through "cloud" interfaces - and block those devices from the open internet altogether if they'll contact the mothership - and enjoy the 'net the way you want it.
Doesn't seem related though, since if you ran a public server from home, that server would appear through your public facing "Internet" IP rather than its intranet address.
It might make some homelab setups slightly more annoying, though.
FWIW, I do the same as the article for my home network: I hijack all DNS requests from intranet devices and respond with corresponding intranet IPs. Externally on the internet trying to resolve those same (sub-)domains would lead to the public facing firewall.
This makes it so I can manage stuff like NASes and IoT stuff fairly easily regardless of where I'm connecting from.
Luckily none of my stuff really depends on making cross-boundary requests between intranet and internet services (it's always completely internal or external) so I should still be OK.
How so? I run a public HTTP server and a VPN server from a Raspberry Pi in my living room. It was pretty easy to set it up. Regarding the HTTP server, the only thing that was different from the last time I did this (around 2004) were SSL certificates.
What is the easy way to get SSL certs for my private network, which are recognized by all my browsers? My private host has no public IP or hostname, thus can't verify automatically via letsencrypt.
You can do two other things, besides using a wildcard domain as mentioned in a sibling comment.
1) Use public DNS to validate instead of HTTP. I do this for internal-only webservers. TXT records are updated during renewal using Hurricane Electric's DNS service at dns.he.net.
2) Run your own CA. This used to be a huge pain, until I found gnoMint. I use this to generate certificates for OpenVPN. If necessary, installing a root certificate is not difficult on most systems. You can set it to expire in, say, 10 years, so you won't need to update it so often.
Not sure about "easy", but you want to be an intermediate CA signed off by another CA already recognised by common browsers. LE apparently doesn't provide that, but it does seem to be available for a price…
I get a *.domain.tld cert for my public server and then copy that to all my internal hosts, which are only reachable internally, but use the same domain.
You can get a domain name for free from many non-profits (eg. eu.org). And chances are you have a public IP address, it's just dynamic not static, in which case dynamic DNS setup is fairly easy.
The only case you're screwed is in 4G/5G setup where you actually don't have a public IP at all, but only half/quarter IP (just a dedicated port range on a shared IP).
If your host has no public IP or hostname, letsencrypt has no business issuing that host a certificate.
If you wanted to, having a public facing IP that uses challenge files, and just reverse proxying that specific URL-range to the private host might work.
But really, if you want SSL for a private network, self-signed certs or your own trusted CA cert is the way to go. That does mean changing your browser to accept those certs.
Alternatively, drop the SSL requirement, since everything is apparently on private networks.
I’m curious. Are there any new major difficulties when setting up home servers?
Last time I tried it was a little annoying with dynamic IPs getting in the way, but possible with just a port forward or UPnP. Are ISPs making changes to prevent this?
I've never run into trouble running stuff from home. SMTP from home is always going to be problematic, though.
The lack of IPv4 address space has started a trend of CGNAT, which makes hosting from home nearly impossible. Luckily, IPv6 continues to be rolled out, but in many situations that would lead to your services only being reachable through IPv6. There are still a great many outdated networks out there that can't reach IPv6, so you might run into trouble there.
If you can get a public IPv4 IP address, I see no reason why you'd run into issues. That's more of an ISP problem than a hosting problem, in my opinion.
I -- along with many other mail server operators -- block dynamic IP space. Which is almost the same as blocking home users, but not quite -- I have a static /29 on my residential package and ran my mail server from the slow side of an ADSL connection for several years.
Most, if not all, residential IP blocks are in the various blacklists most mail servers query. Merely having a reverse PTR won't get your email delivered. Even SPF and DKIM with DMARC probably won't be enough to get over the blacklist rating.
Some business IP blocks aren't blocked, though, so in rare cases you might get away with running a mail server from a business internet subscription.
> Even SPF and DKIM with DMARC probably won't be enough to get over the blacklist rating.
I can confirm this. I recently tried to set up being able to send emails from an smtp server in my homelab to my gmail address. Even with all the good stuff - a domain, tls, spf, dkim, dmarc, gmail just straight up refuses to receive mail from residential IPs. I ended up proxying it through my VPS, which works better but still requires me setting up gmail rules to NEVER send messages from my special domain to spam. Which it would otherwise do for no apparent reason sometimes.
If you're going the Cloudflare route, then just get a cheap VPS somewhere and run off that instead. Gets you both IPv4 and IPv6 connectivity, and you can always proxy stuff yourself if you need to access local hardware.
It's a solution, but it's hardly the home server project you could (and should be able to) run from your home internet.
If you're going to rely on cloudflare, who control access to a huge number of sites and explicitly choose to cut off some of them for political reasons, why even bother hosting something yourself? At that point you might as well be posting on facebook.
That's an ISP problem, though. For that price it's better to just rent a VPS for hosting/proxying traffic. A wireguard tunnel to your actual home server and an nginx proxy at a $4 VPS provide more options than just a guaranteed static IP. Hell, if you can nab some capacity, you can use free VPS solutions like the ones AWS and Google provide for a year, or the one Oracle provides forever if you can live with the 50mbps uplink (you get 2 servers so maybe you could tunnel that into 100mbps?) and the annoying Web UI for setting them up.
Luckily, you don't need a static IP address in most use cases if you set up dyndns, as long as the IP is exclusively used by you and doesn't change too often (e.g. every week or month or so).
It would be more complex, but you'd have a lot more flexibility. Your ISP might not be able to deflect DDoS attacks as efficiently as a remote proxy, you can set up your own caching with a high-speed connection, and you can secure your home network a bit better by only allowing the remote proxy access to your home server.
You'd also save costs moving some of the hosting to the cloud while you're at it, because you don't need to pay a separate electricity bill for a cloud VPS. Plus, VPS storage is usually more reliable than a custom RAID config, as is the power grid around data centers and the internet connection itself.
If you're going for efficiency or simplicity then you're totally right, but if you're trying to get value for money I think a cheap VPS would be better.
If you're going to pay $10 a month, you might as well pay for a VPS instead and connect up to it with wireguard, nebula or tailscale. Especially if paying means dynamic IPv4 that isn't behind CGNAT rather than an actual static IPv4 address.
Paying gives you a static IPv4 that remains the same for as long as you're subscribed.
It's not a bad option if you're already paying for gigabit, sadly nearly impossible to get symmetrical gigabit here, but still for an extra $5 or $10 a month it's ok.
That's about it. In addition, some ISPs make sure you cannot change the resolver advertised on DHCP, and that your own IP is routed via the Internet and back (so local requests are slow as hell), both of which can be hacked around by running your own router behind the ISP's modem/router.
Of course, the real solution to the problem is to find a decent ISP, like a non-profit from FFDN.org federation. Then you have "real" internet and no worries for selfhosting.
Which audience exactly is cool with registering a domain name, figuring out your IP, setting up an A record to point there (maybe dynamically!), enabling port forwarding, installing a web server, etc., but gets stuck on letsencrypt?
Letsencrypt is almost certainly the easiest part of the entire process of self-hosting a website.
Yes, because if your web server is publicly reachable, letsencrypt can be automated easily. And there is plenty of letsencrypt software that fully automates this.
A public web server is the easy part if you want to do letsencrypt.
Letsencrypt is easy, if you are running generic Linux distribution and can easily install whatever you want.
But you might run a device that already comes with software, and letsencrypt support is either limited (example: Synology; their implementation allows only http-01 challenge so if you need dns-01, tough luck. Even wildcards are a new feature) or non-existent (example: Ubiquiti, and their cloud keys (administration UI, guest portal) or routers (Radius/WPA Enterprise needs TLS cert too)).
Yes. Even the most complex technical setup can be accomplished by a non-technical person that can follow directions assuming that someone took the time to write clear and concise directions and included common caveats and troubles that one may run into and where to check for them. I have proven this many times over by having managers and directors that were non-technical follow my instructions. In the rare moments this breaks down, forums and chat rooms can be a very handy gap-filler and provides an indirect feedback loop to further improve documentation.
In case it wasn’t clear: This won’t stop you from going to 192.168.1.1 or otherwise accessing private network resources from Chrome. This is about closing specific public/private boundary security vulnerabilities that you don’t want 99% of the time. The author of this article happens to have a 1% corner case, but the average user generally doesn’t want this.
There often is such a proxy in these environments but it isn't really related. The scenario here is that you have servers with both internal and external IP addresses, and for whatever reason if someone is connecting from inside the network you want them using the internal IPs and not the external ones. (Simpler routing, different features available to internal clients, etc.) So you set up the DNS servers for your network to serve internal IP addresses for your domains, but anyone outside the network sees the public IPs for those same domains. (That's the "split horizon" part.)
Now someone in the network could follow a link from a page served from a public IP to a domain with a private IP address—which this change would disallow unless the first page was served from a "secure context" (with TLS) and the internal server responds to a "preflight" OPTIONS request with the required CORS headers to allow following links from public networks.
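As a rough sketch of what that opt-in could look like on an internal server (Node is assumed, and the Private-Network headers follow the draft spec as I read it, so they may still change):

```typescript
import * as http from "http";

http.createServer((req, res) => {
  // Assumed allowed public origin; "*" would be far too permissive here.
  res.setHeader("Access-Control-Allow-Origin", "https://www.example.edu");

  if (req.method === "OPTIONS") {
    // The PNA preflight carries Access-Control-Request-Private-Network: true;
    // acknowledging it is the explicit opt-in to cross-boundary requests.
    if (req.headers["access-control-request-private-network"] === "true") {
      res.setHeader("Access-Control-Allow-Private-Network", "true");
    }
    res.setHeader("Access-Control-Allow-Methods", "GET, OPTIONS");
    res.writeHead(204);
    res.end();
    return;
  }

  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end("internal resource\n");
}).listen(8080);
```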
This is extremely common at universities, where you share 2-4 public IPs and can usually ask for ports to be forwarded to an internal IP, and often there are resources (like, say, servers with GPUs) available on internally reachable IPs using an easy-to-use hostname served by the internal DNS server.
Of course this change won't be that big of an issue; things would just need to change a little. Using split DNS was already a pain when students wanted to, say, use DNS-over-HTTPS and didn't want the university DNS servers to know every site they visited.
+1
Especially for the devices this is targeting, like IoT stuff, printers, fridges, cameras etc. In fact, all the IoT stuff I have doesn't support IPv6 well (or at all), and much of it doesn't even support 5 GHz 802.11ac WiFi.
> Use IPv6 and stop with the NAT. It makes all the split-horizon DNS pain go away.
\s
Sure. Just give me a week or two (or several months) to shut down the whole network and reconfigure all the servers, devices, and services.
Also all our business partners and vendors who integrate with our services, will be glad to switch to our new setup, exactly when we need them to.
\s
If you are building a new site/network, IPv6 is the way to go. Migrating existing ones is next to impossible due to all of the dependencies out of your control.
It's about shifting a security boundary from one place to another, because the original location has been ignored for generations and is now about as good at keeping invaders out as the Great Wall of China.
> While the specification has some suggestions, I'm not sure that it would allow our specific situation even if we could get all of the people with web servers that are possibly affected by this to make changes to them.
Wouldn't this be as simple as an `Access-Control-Allow-Origin: *` (plus all the other junk mentioned in the opt-in section[1] of the draft spec).
I'm having a hard time thinking of what wouldn't be workable in the author's situation (assuming he gets all the people with webservers to add headers to them).
And yes... understanding CORS should absolutely be a requirement for writing local webservers that people can poke from the public internet; being able to stumble your way through writing a CORS policy is basic web security at this point.
The author is in a large University context where getting everybody who runs all the web servers (various departments, schools, etc) to add headers probably is really difficult (especially when you consider potentially closed-source applications or embedded devices that aren't easily updated).
It's not that hard though given we're working with HTTP... Since they control the network (DNS and all) couldn't they just wrap non-compliant devices behind a forwarding proxy that adds those headers?
Like rather than resolve to the device, resolve to a proxy that adds those headers.
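A minimal sketch of that header-injecting proxy (Node assumed; the device address and allowed origin are placeholders). Internal DNS would point the device's hostname at the proxy, which relays the request and tacks on the headers the device itself never sends:

```typescript
import * as http from "http";

const DEVICE = { host: "192.168.10.20", port: 80 }; // placeholder device address

http.createServer((clientReq, clientRes) => {
  const upstream = http.request(
    { ...DEVICE, path: clientReq.url, method: clientReq.method, headers: clientReq.headers },
    (deviceRes) => {
      // Pass the device's response through, adding the CORS / PNA opt-in headers.
      clientRes.writeHead(deviceRes.statusCode ?? 502, {
        ...deviceRes.headers,
        "access-control-allow-origin": "https://apps.example.edu", // placeholder origin
        "access-control-allow-private-network": "true",            // draft PNA header
      });
      deviceRes.pipe(clientRes);
    }
  );
  upstream.on("error", () => { clientRes.writeHead(502); clientRes.end(); });
  clientReq.pipe(upstream);
}).listen(8080);
```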
No, it's not particularly hard, but it is complicating the plumbing. We have to make the Internet safe for manufacturers of devices who can't be bothered to incorporate a modicum of security in their devices.
I do agree with this. It's too much mollycoddling from browsers and in fact dis-incentivizes manufacturers to fix the real CSRF vulnerabilities...
Heaven forbid if someone joins your LAN with a device running an old/weird browser that doesn't do this preflighting and your intranet just gets caught with its pants down...
Exactly what I think as well. I have not looked into this attack for a while, but have always assumed it still is possible to proxy traffic via a user’s browser
Overall this seems like a very positive change. However, I wonder how it will affect local development of servers that participate in API flows with public-facing systems.
As an example: imagine I am developing my application locally, at http://localhost:8080/ , and this application supports an OpenID Connect identity flow with an identity provider, https://idp.corp.example . Today I can test the login flow by telling the IDP that localhost:8080 is a valid URL to redirect back to, so that I can click a "login" button in my application, log into the real IDP, and get a token posted back to something like http://localhost:8080/idp-callback . This makes it easy to develop a system locally that also communicates with various backend microservices which require authentication with a common IDP.
I can't imagine that this is a rare scenario: it seems pretty normal to me. But if I understand the proposal, it sounds like the long-term goal is to prevent this kind of environment for local development, and instead force you to either run your dev stack remotely (with a public IP!) or else run your entire IDP stack locally (so that it has a local IP). Neither of those seem like good ideas to me.
Of course, the other option would be to modify local dev servers to accept CORS preflight requests and respond correctly, but I'm always slightly uncomfortable adding code into a local dev stack that would be unsafe if enabled in production. At very least it makes it harder to debug when something inevitably goes wrong with this API flow.
There are probably ways to solve this by introducing more proxies into a local dev stack, but I worry that these kinds of little papercuts will just make development that much harder for microservices-architecture applications, which are already hard enough to develop and debug as it is.
If you have a domain whose DNS you control, you could temporarily stand up an internet-facing server for localhost.yourdomain.com, get a certificate, then change the DNS for localhost.yourdomain.com to point at 127.0.0.1 (or put it in your HOSTS file), and address it as localhost.yourdomain.com rather than just localhost.
But yeah, I think really, browsers should be able to allow self-signed certs for localhost.
Let's Encrypt also allows you to do DNS based validation (DNS-01) where you just set a TXT record on the domain and update it from time to time.
The acme.sh script actually supports quite a few providers, so you can cron it up and never even have to run an outward-facing HTTP server. Useful if, say, your ISP blocks port 80 or you're behind a NAT you cannot control.
It looks like this is still allowed. https://idp.corp.example is a secure context, and you can add proper CORS headers to your local endpoint, and then you meet the criteria for the request.
This seems mostly benign, basically changing the default cross-boundary behavior from "allow" to "deny" but also providing a way for intranet devices to exclude themselves.
On the other hand I'm also not a huge fan of these measures because they address the issue from the wrong end. Servers should protect themselves from CSRF, it shouldn't be the job of browsers to do that. A server (think router or IoT device) shouldn't be like: Hey I'm getting an internal IP anyways so I'm going to be lax.
With this there's now no motivation to fix anything from the server/device side so you have failure modes where someone's using a weird/old browser that doesn't preflight and they're vulnerable but everyone still has a false sense of security...
Additionally, I'm worried that rather than adopt the whole preflight/headers thing, a lazy org might decide to just repurpose some reserved public IP space for their internal private addressing so things don't "cross bounds" anymore... It basically becomes unusable security that gets circumvented, like writing your password on a sticky note because it's forced to be really complex.
Never underestimate the ability of someone to blow through any and all security or safety measures if they prevent them from doing something that they could do before...
As far as I'm concerned, this change is long overdue: Browsers should never have granted arbitrary websites the ability to port scan your local machine and network.
Scanning is not allowed anyways just from standard cross-origin protections. If intranet services do not send CORS allow headers, a remote site (which will certainly be from a different origin) cannot get any information as to whether certain ports/addresses have services running just like it should be. It's the same reason you cannot embed an Ajax request in your blog to query someone's bank account's API. In order for information to be made available to the external site, the service has to cooperate with the remote site already.
In fact, even CSRF is mostly due to the server misusing HTTP. Look at the example in the W3C doc, where the remote site has a hidden frame/resource that hits local servers. Well guess what, GET requests should not change state, so changing some DNS configuration via a GET request is already doing it wrong.
There's no inherent reason an intranet site should be treated any differently from an Internet site. At best the current argument is that "intranet sites are more likely to be hacky and poorly implemented", but that should reflect more on those devices/services on the intranet than anything else.
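To illustrate the point with a made-up, deliberately broken admin endpoint (Python; not taken from the spec or any real firmware): as long as a device accepts state-changing GETs like this, a hidden img on any public page can flip its settings, regardless of what the browser does about private networks.

```python
# Deliberately broken example: a router-style admin endpoint that changes
# state on a GET request. This server-side bug is what makes the hidden
# frame/img CSRF in the W3C example work at all.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

dns_server = "192.168.1.1"  # pretend device setting

class BrokenAdmin(BaseHTTPRequestHandler):
    def do_GET(self):
        global dns_server
        url = urlparse(self.path)
        if url.path == "/set_dns":
            # Wrong: a GET with side effects. Any third-party page can trigger
            # it via <img src="http://192.168.1.1:8080/set_dns?server=evil">.
            # The fix belongs on the server: require POST plus a CSRF token
            # (and send proper CORS headers), rather than relying on browsers.
            dns_server = parse_qs(url.query).get("server", [dns_server])[0]
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(f"dns_server={dns_server}\n".encode())

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), BrokenAdmin).serve_forever()
```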
If intranet services do not send CORS allow headers, a remote site (which will certainly be from a different origin) cannot get any information as to whether certain ports/addresses have services running just like it should be.
Again, not an issue with local network vs Internet. You can do the same fingerprinting to check for connection ability against Internet sites too...
If anything this is a bug with WebSockets and its cross-origin implementation. The existing cross-origin request specs already specify that there should not be information being leaked about the target resource unless cross-origin was allowed.
Iframes no longer let you time how long the content inside takes to load, img stopped making that available too, the same goes for things like AJAX, and so it should be for WebSockets as well.
If the only information the scan is gleaning is indeed based on timing, then it's no different from the other JS timing attacks (some of the early ones could expose arbitrary slices of RAM!): it's a bug and should be fixed as such.
When you say private/public IP filtering is there to fix this kind of issue, what's really being said is "I still want WebSocket fingerprinting and port scanning to _work_, just not with any internal network IPs".
> Servers should protect themselves from CSRF, it shouldn't be the job of browsers to do that.
I personally strongly disagree with this point. I believe a web browser tab should only communicate with one network location, and 3rd party content/requests should be disabled overall. This way we would have a much saner and user-respecting web.
So sure i should be able to click a link to turn off my lights using a local address (although why the fuck would i have IoT in the first place? humanity would be better off if IoT just died), but a remote website should not be able to control what i do with other locations without my explicit consent.
> Servers should protect themselves from CSRF, it shouldn't be the job of browsers to do that.
You mean that servers are the ones supposed to protect themselves from 3rd parties commandeering the browser to act like the browser acts when you are using it?
I fail to see how that's the correct way to solve the problem. Doing it on the server just ensures that you'll get an ever growing pile of hacks and never really solve the problem.
Because the protections already exist. The existing same-origin policies already offer sufficient protections for "not having the browser commandeered". The only reason this exists is because bad _local network_ implementations are _common_, not that they cannot be made secure just like those on the Internet.
The example in the W3C doc (https://wicg.github.io/private-network-access/) used to justify this is an instance of a broken server implementation. It's using a GET request to set some value even though GET should not have side effects. It's a common vulnerability, but it's totally the server's fault here.
If you have your pants down at home and someone happens to see you naked through your window, it's on you. The solution should be to close your curtains, not "make everyone wear glasses that automatically blur any open windows".
I feel like this is going to break many IoT devices (definitely my ones) and I wouldn't be surprised if it's a move to push more developers towards integrating with GoogleHomeSpyware™.
Since there's still no mdns (resolution of .local domains) in android nor in chromium despite long standing feature requests, the workaround I (a hobbyist) and likely other IoT devs chose was to host a small website that would brute-force check local IP ranges until the IoT device was found. At least for now I have another reason to transfer hypertext securely, but I honestly have no idea for a future workaround (other than a large banner saying "install firefox").
> Since there's still no mdns (resolution of .local domains) in android nor in chromium despite long standing feature requests…
If you run your own DNS resolver for your local network, you can use a Discovery Proxy (RFC 8766) to allow unicast DNS resolution of multicast DNS records. I'm using mdns-discovery-proxy[0] (slightly modified to support a newer version of the zeroconf Python library) with a forward-only zone rule in bind9 so that xyz.local is mirrored in unicast DNS as xyz.home.arpa. The latter address will work for any program on the network regardless of mDNS support.
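As a quick sanity check (the hostname is an assumption; substitute whatever your mirrored zone uses), any mDNS-unaware program can then resolve the mirrored name through ordinary unicast DNS:

```python
# Resolve the unicast mirror of an mDNS name via plain DNS; no mDNS support
# is needed in the client. "xyz.home.arpa" is a placeholder for your own zone.
import socket

for family, _, _, _, sockaddr in socket.getaddrinfo(
        "xyz.home.arpa", 80, proto=socket.IPPROTO_TCP):
    print(family.name, sockaddr[0])
```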
IoT onboarding has always been less than ideal. I've personally never been a fan of the "spray (local) and pray" approach, as it always felt _super_ sketchy...
(IIRC, Magicjack does something like that to register your device from its public site? At least that's what I think it does since it's somehow magically grabbing a serial number during sign up despite the URL entry point not having any identifying parameters...)
Let's just hope this pushes support for better options like mdns further.
> I feel like this is going to break many IoT devices (definitely my ones)
On one hand i'm sorry for you. On the other hand, the less IoT there is as a whole, the better humanity will fare. IoT is a disaster for security, for the environment, and repairability... I have yet to see a single useful application of IoT which does not bring major downsides.
Why is Google trying to be the smart-ass here? Since when are network access control and topology the business of a web browser? What's next? The X server? The OS kernel?
My house has some windows. According to Google, I should buy a house without windows so the neighbours can't look inside, instead of using curtains or window blinds.
That’s a weirdly good example, actually, because if you have a window you want hidden, the average user understands that they can install curtains or blinds. But if stronger countermeasures are required for security (e.g. the window needs a physical alarm), then you might get a professional in to wire that alarm up. Neither curtains nor blinds are going to stop someone from breaking in, but the average person knows this.
With home networked devices the average user doesn’t know how to hide and protect those devices so here again they need professionals to do that for them too.
Windows weren’t intended to be entry points for burglaries. It just so happens that some windows, particularly in older houses, aren’t all that secure and so need additional countermeasures. Likewise for some home networking hardware.
How would an app that probes someone's house for open windows and reports "Your windows are not secure!" go down? I imagine these exist but aren't widely known. Virus protection sufficiently got into the psyche of computer users... does Windows do this kind of scan on a local network?
Lots of questions as I genuinely don't know, I just turn lights on and off through switch devices mounted on walls.
Easy for you to say but harder to explain to the average layman. How do they know what is and isn’t a secure IoT bulb or router? In fact in the case of an ISP router, how do they even know how to replace a router even if they did know their router was garbage?
Your parent poster is ranting in more places in this thread, with complete disregard for the fact that most people aren't professionals in the IT space.
The answer is obvious, they don't know, just like I'm not very good at particle physics or repairing broken body parts.
Fairly easy: anything that's labeled "smart" or "remote control from your phone" is wrong and bad for us. We IT people should be explaining to "average laypeople" that they should never trust the industry, instead of climbing aboard the IoT bandwagon.
I had many discussions with half a dozen neighbors about Alexa after i learned that some people actually have this bundled with their ISP here in France. After i explained how it works, not a single one of them kept it.
1. It’s not just smart devices that are a risk. Eg some ISP routers are insecure.
2. Smart devices aren’t always so easy to spot. An IoT light bulb is pretty obvious. But what about a TV? These days it’s almost impossible to buy dumb TVs and most computer monitors don’t have speakers built in. So in some instances it takes extra effort to avoid smart devices.
3. Some people actually like the convenience of smart devices. I have some IoT bulbs at the bottom of my garden and they were actually the best solution to the problem I had (which I won’t bore you with here but the other options like solar lights weren’t suitable). What if someone wants to watch Netflix or Disney+, should they be denied that because they are told to avoid anything “smart”?
Saying “we shouldn’t have nice things” isn’t a good enough answer to the “how do we secure bad things” argument. It’s throwing the baby out with the bath water. And even if I agreed with you, “smart” is already in our lives; there is no way to put that genie back in the bottle even if we wanted to.
"Crap" to me would be low-tech non-smart light bulbs.
That's what I buy. IMO, "smart" bulbs are dumb and unnecessary. The only time I ever need to turn a light on or off is when I'm entering or leaving a room, in which case I'm passing the light switch anyways. I have no need to be able to turn my lights on and off from my phone, and don't understand the use case.
Please don't. Just like with the rest of "smart" and "IoT", "just don't" is the correct answer in terms of privacy, security, and other basic human rights.
Just use a jack cable for audio "pairing" and an actual button for turning on/off lights, and a key for opening your door. It's as simple as this: really ecological, secure and user-friendly.
Bluetooth in cars is useful, especially since it allows the car’s built-in ergonomic media controls to affect your phone. Being able to take a call or pause your music without looking anywhere or moving your hands away from the wheel is a safety improvement.
I understand this UX argument: there's however arguments against it as well. For example, "taking a call" (at all) while driving could be illegal in your jurisdiction, and is in any case a safety risk.
The computerization of the car has other negative consequences:
- cars are increasingly more expensive due to electronics (which represent up to 50% of the price of a car nowadays), and are victims of chip shortage
- cars are much harder to repair and require cracked firmware downloaded from sketchy websites or an official and super-expensive maintenance kit, which of course only works for one brand/manufacturer
- cars are much less reliable and electronics are responsible for a great number of recalled products
I miss my old car radio with a jack input. Overall, I miss when my car was not a computer: we computer people can barely display a few pixels on a screen without writing a dozen bugs, so why the hell would we be responsible for making software for cars?! See also the Boeing scandals.
So while bluetooth is a safety improvement over the worst-case of using a mechanical car and a phone at the same time, it's part of a trend that overwhelmingly makes cars less safe and reliable.
Bluetooth is entirely unrelated to other computers in cars. In fact, it's just the car's "radio" component that speaks bluetooth. You can easily install such a component in 70 year old cars. You'd only lack the ergonomic steering wheel controls, which are generally connected to the "radio" directly.
Other car functions use computers for very different reasons. The most successful is injection timing, which does significantly improve performance. It's why we have extremely efficient small diesel engines and reasonably powerful 1L petrol engines.
...oh wait, Google Drive updated itself without me asking a few weeks ago, even though I haven't used it in ages, and now, randomly, every few days, I get a popup telling me I need to sign back into Drive, even though I closed the system tray application. (Is being annoying to the user just an inherent property of this kind of service? Are they trying to compete with OneDrive for reluctant-end-user obnoxiousness?) There's probably half a dozen ways to get rid of it, but it's a nice periodic reminder that I really don't care for Google anymore.
We run all of our traffic over ipsec to our VPS. It's https anyway, but I don't really care for Google to know how many clients I have behind my network. Everything is in 10.x.x.x space.
No, we're not going to do let-everyone-access-everything. We believe in layered security. Locks on the front door, locks to get around the house.
Seems like as long as everything is in the 10.x.x.x/8 range and a user gets there by typing the IP into their address bar, nothing is going to change for you. This is only about mixing public and private contexts. If you have a link to private IP space on a public website, then you're going to get warnings.
I presume DNS names that resolve to 10.x.x.x are also fine to write into your address bar directly?
It seems to me the issue with a DNS split-horizon is that google results return DNS names that are in your local intranet, and thus resolve to a local IP, but you clicked on that link from the outside world, and hence chrome would block that.
I think any sort of split-horizon on devices you don't fully control is going to start giving you a lot of problems going forward. For example, any sort of DNS-over-https is going to make split-horizon non existent unless you control the client device. And presumably if you control the client to change the DNS behaviour, you could turn off this new behaviour too.
Fwiw, I think this is generally good. Network operators shouldn't be able to arbitrarily intercept traffic on their networks.
Linked in the Private Network Access draft is a discussion which outlines some general points about HTTPS on local devices and participation of local devices on the web in general. It's interesting to read, even though the results seem rather bleak.
Years ago I noticed any webpage could just use an <img> tag to include an image hosted on my LAN. For example, adding such a tag to any website, pointed at the usual gateway IP addresses (192.168.0.1 and so on), will trigger an HTTP authentication dialog for many users.
It's the kind of thing that I thought would be impossible because it seems like such a boundary crossing. Glad they are eventually fixing this...
That's what we get for enabling 3rd-party requests on the web in the first place. Hyperlinks are great, but enabling content from/to 3rd parties was definitely a design mistake of the web. Gemini definitely got this right: when you run a query to a server, there are only two parties involved, no external tracker, and no served script trying to undermine your security by scanning your local network.
Seriously, if you'd told me in the mid-90s we'd have connected locks and lightbulbs and surfing to a random website could alter physical properties of my surroundings and scan my local network for vulns without my consent, i would probably have given up on IT entirely as humanity's doom.
I'll work around this, when I need to, by picking some backwater public IPv4 space and using that for the subnet where I put my on-prem-hosted servers. (Maybe North Korea's address space...) I get the "public IP" behavior of the browser and, as a bonus, that "real" IP space on the Internet is inaccessible to my end users and servers!
I did something like this, though it wasn't public space exactly.
I had to live with a work VPN that routed all traffic for all RFC1918 addresses into the corporate network. Because we had acquired companies over the years with their own RFC1918 choices, and not all was integrated and renumbered.
Since I wanted to be able to reach my own home gear at times, I used 192.0.2.0/24 for my network. Not technically RFC1918, but not used in public, either.
I still remember the incredulity from one of my friends saying "but, but, you just took someone else's addresses?" and I said "sure, but whose? does it matter?" and then he looked it up on rwhois and started reading the company name... he got as far as 'Internet Corporation for Assigned...' and then he stopped and threw something at me. Ha!
There are a number of non-public but not commonly used address spaces out there; I suspect most of them would suffice for getting around Chrome's RFC1918 blocking. Though it's possible they have another heuristic for determining what constitutes a 'private' address that includes space other than the officially recognized RFC1918 subnets.
One thing I forgot to mention. Some consumer devices make assumptions about RFC1918 space, and will not operate if they think they're running on a public IP. This is what prompted me to switch back into RFC1918 for my home network (and I had long since left the company with the voracious VPN anyway). YMMV, of course, but it's something to keep in the back of your head if you plan to switch out of RFC1918 for your home network.
We ran into some of these different behaviors when we hit a Customer who was using the 192.9.0.0/24 network internally. (Then we hit a few more. Several police departments in Western Ohio are using that subnet internally for some reason. Some vendor in common must have come thru and done it.)
175.45.176.0/22 if anyone is curious. This is a hilarious and probably terrible idea. Also I like how my home LAN has a bigger CIDR (/20) than all of NK's allocation.
Would be nice if they'd fix WiFi terms-of-service pages first before layering on yet another security screen. Every time I go to the gym or the mail room it's a five-minute battle with Chrome to let me see my apartment complex's terms-of-use screen so I can access the internet. Our building doesn't get good cellular data service inside.
Yes, I'm sure I want to see the terms-of-use page, Chrome. No, I don't need to be an advanced user to know that. Actually, I do, since you only let me get redirected to it if I know to type an http site you haven't seen before instead of an https one...
Whenever I'm fighting with a WiFi network that doesn't properly send me to a captive portal, I open a browser window to http://www.neverssl.com/ . It was specifically created for this purpose.
Hmm, this may break a project I'm working on, depending on how far it goes. It's a web based client to a service that is commonly self hosted. Users enter the URL of their server instance at startup. That server is frequently on the users home network. I have a public instance of the client hosted for a zero install user experience.
I think this immediate change would be fine since that public instance is a secure context, but it sounds like the ultimate direction is to prevent web->local access altogether.
I don't control/develop the server application. It lets users configure it and defaults CORS to pretty open access to its API endpoints. That part is fine. It just seems from this thread that the direction is to shut down that access entirely.
Best answer so far. There is absolutely zero legitimate reason (that i can think of, anyway) for a website to make requests from your browser to another website without explicit user consent/interaction.
Hyperlinks are great, but random JavaScript crap making requests on our behalf is just harming us users.
It doesn't have to be Javascript. I made an application many years ago that used almost zero Javascript. The only piece of JS is an onLoad handler to position a cursor on a particular field when a page loads. The application uses its own web server, so the browser talks to localhost. For licensing, the application sends the user to my domain, whose pages can send the user back to the app on localhost, again with no JS. The NoScript suite detects and blocks that. The simple fact of a remote website serving up a local URL via dynamic HTML is enough.
with "The application uses its own web server, so the browser talks to localhost." you are not actually saying that your application itself contains a webserver, running inside a browser, serving a website running on localhost, are you?
In this case, the application is a native executable containing a web server. (It runs in the background, controlled by an icon in the desktop notification area, which can also be double-clicked to open a browser to the application's URL.)
Just integrate a basic uMatrix into the browser, as it should have, and none of this matters.
Why is it impossible to view the origins of resources in one simple interactive panel? That's simpler than the major browsers' dev tools' features, in fact. Except then, the user would have a dashboard of control, which they can not be trusted with.
Chrome is a bit of a PITA for local dev now that it also prevents me from using https://localhost with self signed certificates. There are some flags you can toggle in chrome/chromium but I can't figure out which versions still support them.
The modern guide for this is https://web.dev/how-to-use-local-https/ and the flag you probably want is #allow-insecure-localhost but the guide steers you towards local trusted certs as an improvement over self-signed.
If you have a domain already, you can get a certificate for free from Let's Encrypt and just repurpose the certificate. You don't even have to run an outward-facing HTTP server, as they have a DNS-only challenge.
I have a similar setup where I have something akin to "my-home.network" and manage a wildcard cert from Let's Encrypt through a local cron job. My router intercepts all the DNS requests for the domain and returns local IPs on the local network. Machines that need HTTPS can get a copy of the wildcard key, use something like "hostname.my-home.network", and serve HTTPS.
I don't understand. You are getting an LE certificate for example.com (and putting it where?), then when your browser requests example.localhost it gets the certificate for example.com... but how (and how does the browser accept a certificate for a mismatched domain name)?
Assuming you are working on the dev machine, the process is as follows:
1. Buy a domain name. Certificates can only be issued if you have a real domain name. You can't get a certificate for "localhost" or "blah.localhost". You don't actually need to point this domain at your dev machine, you just need to own it. Let's call this domain "my-domain.com"
2. Follow the instructions for setting up the DNS-01 challenge. As a part of this, you'll need to provide credentials that allow the ACME client to change your DNS records so it can renew the certificate automatically. Most registrars you can buy domains from will provide free DNS service, and many will also provide API access to change DNS records. If this is the case, there's probably already support for your provider in acme.sh, so you can just follow the instructions [here](https://github.com/acmesh-official/acme.sh/wiki/dnsapi) to provide the needed credentials.
3. Once the setup is complete, you should have a certificate (public certificate chain + private key) issued by LE, and it should also renew automatically. Edit your dev server's configuration to use these issued files for HTTPS (see the sketch just after this list).
4. Add something in /etc/hosts (or equivalent in Windows) like:
127.0.0.1 my-domain.com
5. Now load your dev environment by visiting `https://my-domain.com` on your local machine. It should work now with no certificate errors (green padlock). Congrats! You now have HTTPS for local dev at no cost beyond owning a domain + you can still use this domain to serve a real website (say hosted with some cloud host). The domain _does not_ need to point to your dev environment. In fact your dev machine doesn't even need to be connected to the Internet (as long as you can handle the renewals).
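For step 3, a minimal sketch of wiring the issued files into a throwaway dev server (Python stdlib; the file paths are assumptions, point them at wherever your ACME client writes the chain and key):

```python
# Serve the local dev environment over HTTPS using the Let's Encrypt files.
# Paths are assumptions; adjust to your ACME client's output directory.
import http.server
import ssl

CERT_CHAIN = "/etc/acme/my-domain.com/fullchain.pem"
PRIVATE_KEY = "/etc/acme/my-domain.com/privkey.pem"

httpd = http.server.HTTPServer(("127.0.0.1", 8443),
                               http.server.SimpleHTTPRequestHandler)
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain(certfile=CERT_CHAIN, keyfile=PRIVATE_KEY)
httpd.socket = ctx.wrap_socket(httpd.socket, server_side=True)

# With "127.0.0.1 my-domain.com" in /etc/hosts (step 4), visiting
# https://my-domain.com:8443 now shows a valid certificate.
httpd.serve_forever()
```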
Advanced:
Now, if you want `my-domain.com` to also serve production traffic, say your blog hosted on EC2 over HTTPS, you can just extend your renewal setup. Add a post-renewal script that copies (e.g. via scp) the renewed keys over to your EC2 instance. Now both your dev machine (local) and your "production" server (remote) have the same key.
When you visit the domain locally, the /etc/hosts record circumvents DNS so you end up just hitting your local dev server which has the key info it needs to serve you HTTPS. When you visit the domain from a different machine, it would do a DNS query and end up hitting the remote instance. But since that instance also has the key info, it can also serve HTTPS.
Now if you want your dev machine to also be able to hit production, you can (1) when you setup LE in step 2, set it up so that it issues "*.my-domain.com" instead. (2) In step 4, use `dev.my-domain.com` instead of `my-domain.com`. This way on your dev machine, hitting `dev.my-domain.com` gets you the local dev server with a green padlock, hitting `my-domain.com` gets you the remote server, also with a green padlock. Both servers are sharing a wildcard certificate. The interesting part about doing it like this is that you don't need "dev.my-domain.com" to actually be a real DNS record. From the internet, nobody would even know that that subdomain exists.
Ain't got no time for that. In Firefox I just click "accept" for my *.localhost and I'm done. It used to be that straightforward in Chrome/Chromium, but now I'd have to meddle with certificate stores, etc.
Pretty sure you will be able to opt out via group policy. If you control your network so hard that you can accept CSRF, it even makes sense. But would you swear that none of your smart devices are terribly insecure and incapable of even updating at all? You’ll have my utmost admiration if you can.
Restricting such requests can be a good idea, but it should be user-configurable (e.g. in case you want to redirect some requests to your own implementations), and whether or not it is allowed should have nothing to do with whether or not TLS is used (it means HTTP vs HTTPS, but that is really irrelevant to this issue, which is a separate one). Furthermore, if you do not explicitly type a localhost URL or IP address, the browser should also block access to stuff on localhost or local networks; again, this should be user-configurable in case you want to enable specific cases.
Ok, so I have a dynamic page hosted externally. It's set as my homepage on all my browsers. It's full of links to things I use.
Some of those links are on my internal network, and are NOT https (seriously, for small things/apps you don't need it). If I understand this correctly, Chrome will now block me clicking on these links?
I'm all for improving browser security, but as an end user, I should have the option to turn settings off that cause me a problem.
> Some of those links are on my internal network (...) Chrome will now block me clicking on these links?
Unless i missed something, you will still be able to click links. Chrome will block the original page from running queries against your local networks without your consent, e.g. a bit of JavaScript trying to find a vulnerable router or crapware on your local network.
> I'm all for improving browser security, but as an end user, I should have the option to turn settings off that cause me a problem.
I agree. I will want to configure a lot of things. However, allowing finer configuration can also be helpful; you might not want to turn off that feature entirely. (For example, maybe you could assign that dynamic page to the local zone (if such a thing is possible; the article says something else, though) even if it isn't local, and then such links would still work.)
I talked myself out of writing 'GET /quitquitquit' handlers for debugging because of CSRF, but I guess enough people haven't that we have to make browsers just deny access. I think that's probably fine.
Is anyone working on making CORS stricter? I have always been annoyed that you can do cross-origin GETs and form submissions without a preflight. Whenever I Google it I just find people talking about how the existing CORS restrictions ruin their lives. Personally, never had a problem, so I'm not sure what all the fuss is about.
I was also curious about IPv6 non-local scopes. The spec linked in the article says
> User Agents MAY allow certain IP address blocks' address space to be overridden through administrator or user configuration. This could prove useful to protect e.g. IPv6 intranets where most IP addresses are considered public per the algorithm above, by instead configuring user agents to treat the intranet as private.
So aside from loopback and link-local, the only effect this will have on IPv6 is whatever the browser decides to do. Whether that's a manual add/remove or a look into the routing table seems unspecified.
Ah, I missed the spec link and landed on the design documentation...
So I will have to wait to see what form this configuration takes.
Hang on a minute, this is suddenly sounding rather familiar--I'm suddenly reminded of the Internet control panel in Windows, which lets you assign web sites to different zones (internet, LAN, trusted, restricted)...
Yeah but then I have to use firefox, and the way Mozilla's marketing (and user) numbers have been going it feels like a losing battle.
I mean, not too long ago they put out a paper on why a distributed internet is bad and ways to stop it; the people running Firefox are completely out of touch.
"""
But perhaps the risk of CSRF attacks against insecure devices on private networks or localhost is severe enough to justify such a change. I see only one side of this whole issue, not both of them.
"""
That would completely screw up my Chrome usage, as I’ve got a pretty esoteric network setup: a dual-homed route table, a private 13-server root with DNSSEC, and no, just no, Chrome.
oh wait, I haven’t used Chrome in 2 years. whew. Never mind.
Maybe at a different scale, but some of these restrictions already happened in M94. I used to use a remotely hosted web GUI to control my local aria2c RPC, but since then I've had to move it to my local network.
Think of every developer tutorial that shows you how to get started running HTTP on localhost port 8080 (or whatever). Authentication is left as an exercise for later.
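For reference, the archetypal tutorial server (Python here, but every language has a one-liner equivalent): unauthenticated, serving the current directory on localhost, which is exactly the kind of endpoint this change stops public pages from poking at without an opt-in.

```python
# The classic "hello, local dev" tutorial server: no auth, no CORS, serving
# the current directory on localhost:8080. Under Private Network Access a
# public website can no longer make the browser poke it without an opt-in.
import http.server

http.server.HTTPServer(
    ("127.0.0.1", 8080), http.server.SimpleHTTPRequestHandler
).serve_forever()
```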
Of course one obvious workaround is to misconfigure your router to allocate from a non-private but unused/reserved address space. That way your internal network is also on the "outside".