And each time this happens, you get better at writing readable comments up front describing edge cases and difficulties, so that your future self can avoid steps 1 - 6 and start with a head start on refactor ideas / feasibility.
> Perhaps a site that's been around for over a decade and already has mass brand recognition might not get as many people googling it bare, since most people can just... go there directly.
It seems to me that googling for a popular site like reddit is actually pretty common. A couple of reasons:
1. Google is often much better at returning relevant results for site:<site>.com <query> than the site's own search engine is for <query>. I'd expect this to be even more true for sites like reddit, simply because of their sheer post and word counts. In fact, brand recognition seems like more of a reason for people to google a site. E.g. if you want to read a wikipedia article on some topic, you normally wouldn't visit wikipedia.org first to use its search; you'd google "wikipedia <topic>" or "site:wikipedia.org <topic>".
2. To avoid hitting enter after a typo and ending up on some malware site <domain with typo>.com.
Do people even use the site:reddit.com google query? If I want to search for something on reddit, I just search “something reddit” and the results are nearly always sufficient.
Yes. I use it for comment searching, you'd be surprised how useful it is for miscellaneous subject research. Then again I always try to use operators when applicable.
My guess is that most companies decide to cut the cost of maintaining responsive web apps since they are already paying developers to maintain native mobile apps and the desktop web. After all, in their minds, "who wouldn't just use the app?"
To get around it, there are a few lesser-known mobile browsers that allow you to modify the "User-Agent" header, so you can bypass the mobile version by masquerading as a desktop browser. Sleipnir is one.
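You can try the same trick from the command line; a minimal sketch with curl (the UA string is just an example desktop Firefox string, and m.example.com is a placeholder for whatever mobile site you're fighting with):

$ # Fetch a page while claiming to be desktop Firefox
$ curl -sL -A "Mozilla/5.0 (X11; Linux x86_64; rv:57.0) Gecko/20100101 Firefox/57.0" \
    https://m.example.com/ -o page.html

Sites that branch purely on the User-Agent will hand back the desktop markup; ones that also sniff screen size client-side won't.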
Probably because it's the same group of people that defined standards like media queries and wrote the "best practices" ...
Technically, the browser would need to report a different pixel density or resolution (check what https://www.whatismyscreenresolution.com gives you) to get the "desktop-looking" site.
Though in reality it's one and the same site, so I'm not sure how much you would gain with that...
The worst is when some consultant designer does some journey mapping exercise, dropping half the functionality of the site and disappearing.
My bank recently did this: they decided that “complex” workflows like seeing how much interest you paid or transferring money weren't in the top 10 transactions that got journey mapped. The consultant probably took the fancy post-it notes with him.
What exactly do you mean by "user"? Can they query DNS traffic by IP address / subnet? Exactly what are all of the restrictions there?
EDIT: Is there a whitelist of things they can query by or do you simply trust them to be good citizens, have a binding legal agreement, all of the above?
No. We have a legally binding agreement. And, more importantly, we don’t store or give them access to IPs or anything else that may be associated with any individual. Look at a DNS query, look at what could be identifying, and let us know where the concerns are. My hunch is we’ve thought of it. If not, we will fix it. We don’t want personally identifiable info. It creates a legal risk for us. We purge it as quickly as we can.
> We don’t want personally identifiable info. It creates a legal risk for us. We purge it as quickly as we can.
Ding ding ding, we have a winner. If more people realized this, we would have fewer data breaches. To get there, a data breach must become more costly for the companies.
>APNIC gets to see the noise as well as the DNS traffic
>Huston emphasised that APNIC intends to protect users' privacy. "DNS is remarkably informative about what users do, if you inspect it closely, and none of us are interested in doing that," he said.
Maybe it is reasonable to take them at their word, since they seem trustworthy, but we should at least acknowledge that some of this DNS traffic is indeed being analyzed.
Users of the DNS service get the privacy guarantee.
Non-users do not. If you floodping 1.1.1.1 you are not a user of the DNS service and the privacy terms don't apply to you. Rather you're a member of the Misconfiguration Club, and the site you're pinging has the usual right to analyse your pings.
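To make the distinction concrete, here's a quick sketch (example.com is a placeholder): only the first command below is a DNS query and thus covered by the resolver's privacy guarantee; the second is just ordinary traffic aimed at the same IP.

$ # A DNS query: you are a user of the DNS service
$ dig @1.1.1.1 example.com +short
$ # An ICMP ping to the same address: not a DNS query at all
$ ping -c 3 1.1.1.1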
What if somebody has a bad DNS resolver, and what they qualify as a valid DNS request, the researchers do not?
I get the general idea, but having "user-privacy oriented" and "we collect everything and make it available to many researchers" services under the same IP may lead to some issues.
Oh, in that case you can apply those issues to all of Cloudflare. They serve many thousands of websites from each node. God only knows how many different privacy policies may apply depending on which bytes you send to TCP port 80.
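As a toy illustration of that multi-tenancy (the IP and hostnames here are placeholders, not real Cloudflare assignments), the very same address can answer for entirely different sites, each with its own privacy policy, depending on the Host header you send:

$ # Same IP, different Host header => potentially a different site and policy
$ curl -sI -H "Host: site-a.example" http://203.0.113.10/
$ curl -sI -H "Host: site-b.example" http://203.0.113.10/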
I don't think you need to know exactly what the parent comment is talking about to see that they weren't suggesting we do away with all visualizations just because there are some cases where they might not be the best tool for teaching.
I'm not sure what posting this accomplishes, but it certainly was not intended to shame the developers, though I can see how it might be perceived that way. I am more than satisfied and thankful for everything else that Mozilla provides me in Firefox (especially after the 57 update).
It was, more than anything, intended to give the bug a noticeable bump in hopes that I don't have to keep wasting 5 seconds finding and closing the window every time I open Firefox (as I have had to do since I started using this feature a handful of years ago). I've probably spent (roughly) 2 hours of my life closing these duplicate prompts over time.
It's an annoying user experience that seems fixable, which is why I'm surprised it still sits at "normal" priority. I could spend the time to read every comment, read through and understand the architecture of this legacy code, fix the bug, and submit a patch, only to have it code reviewed and be asked to resubmit, or be forced to abandon the attempt for political reasons or some other factor. But that would probably take me more than 2 hours, so I refrain and hope that it gets fixed in the next 7 years.
Not intending to be snide, just a moderately inconvenienced user trying to stir action.
Why don't you grab the Firefox source code and fix the bug? You're in the perfect position to do so: the bug affects you daily, so you won't have to do anything special to reproduce the problem. You'll also be the one who is most satisfied to see it fixed; you'll be scratching your own itch. You've probably also overestimated the difficulty of getting the patch committed.
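For anyone tempted, the usual workflow is roughly this (from memory, so treat it as a sketch and check Mozilla's build documentation; the bootstrap step and prerequisites change over time):

$ hg clone https://hg.mozilla.org/mozilla-central/
$ cd mozilla-central
$ ./mach bootstrap   # installs build prerequisites
$ ./mach build       # full build; takes a while
$ ./mach run         # launches the freshly built Firefox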
What many seem to have missed from this is the bit at the end where Fowler concedes:
> I don't feel I have enough anecdotes yet to get a firm handle on how to decide whether to use a monolith-first strategy.
after linking to, and mentioning points from, a guest post [1] (with which I strongly agree) that argues against starting with a monolith. A key part from that post:
> Microservices’ main benefit, in my view, is enabling parallel development by establishing a hard-to-cross boundary between different parts of your system. By doing this, you make it hard – or at least harder – to do the wrong thing: Namely, connecting parts that shouldn’t be connected, and coupling those that need to be connected too tightly. In theory, you don’t need microservices for this if you simply have the discipline to follow clear rules and establish clear boundaries within your monolithic application; in practice, I’ve found this to be the case only very rarely.
>Rule of thumb: On a desktop, if you have an i5 you do not have Hyperthreading. All i3s and i7s do have Hyperthreading, as do new Kaby Lake Pentiums (G4560, 4600, 4620).
Hmm... either this statement is wrong or this desktop's /proc/cpuinfo is wrong:
$ grep -E 'model|stepping|cpu cores' /proc/cpuinfo | sort -u
cpu cores : 4
model : 94
model name : Intel(R) Core(TM) i5-6600 CPU @ 3.30GHz
stepping : 3
$ grep -q '^flags.*[[:space:]]ht[[:space:]]' /proc/cpuinfo && echo "Hyper-threading is supported"
Hyper-threading is supported
Intel's product spec page[1] lists this CPU as not supporting Hyper-Threading so I'm a bit puzzled as to why the ht flag is present.
To quote the Intel Developer Instructions[1] on the HTT flag:
>A value of 0 for HTT indicates there is only a single logical processor in the package and software should assume only a single APIC ID is reserved. A value of 1 for HTT indicates the value in CPUID.1.EBX[23:16] (the Maximum number of addressable IDs for logical processors in this package) is valid for the package.
UPDATE: It appears these flags refer to each initial APIC ID, so the HTT flag value should be 0 in all cases where the overall processor:thread ratio is 1. That suggests either that the CPUID instruction carries incorrect information for some Intel CPUs, or that the kernel is not correctly evaluating CPUID.1.EBX[23:16].
Hopefully, someone more versed in CPUs can correct me here.
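For what it's worth, a more reliable check on Linux is to compare "siblings" (logical processors per physical package) against "cpu cores"; if the two are equal, there is no Hyper-Threading in effect regardless of the ht flag. On an i5-6600 this should print something like:

$ grep -E 'siblings|cpu cores' /proc/cpuinfo | sort -u
cpu cores : 4
siblings : 4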
Never rely on your ISP to provide great wifi equipment. This is not something specific to Comcast. Generally, it seems residential ISPs are only on the hook for providing quoted speeds via a wired connection to their gateway.
This is why I always either disable the wifi on my ISP's modem/router combo and hang my own wifi router off the combo's LAN, or request a modem-only device from the ISP and use my own wifi router's LAN. The downside in the former case is that your wifi devices are now double NATed (unless you use a wireless bridge), which can be annoying if you want to forward ports (you now have to do it twice). The modem/router combo also might not support disabling its LAN to act as a bridge very well.
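A quick way to check whether you're double NATed (a sketch; addresses and hop counts will vary): trace the first few hops toward any public IP. If more than one of the early hops is a private RFC 1918 address (192.168.x.x, 10.x.x.x, 172.16-31.x.x), there are two NAT layers in front of you.

$ # Two private-address hops before anything public => likely double NAT
$ traceroute -n 8.8.8.8 | head -n 3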