I'd love to hear from some of the team who built this about differences between Keywhiz and Keyczar, which to my mind was the best-practice open-source cross-platform solution to date (i.e. if you're not relying on things like AWS Cloudformation config or Heroku config vars to "manage" secrets).
Obvious pieces to me appear to be (1) roles and auditability, (2) an end-user front-end, and (3) a filesystem interface and the associated ease of access for various services. But I'm not an expert!
Keyczar is meant to solve a different problem: it's a simple programmatic API for crypto operations that stays high-level and excludes unsafe options. NaCl (http://nacl.cr.yp.to/) has similar goals to Keyczar.
Keywhiz isn't an interface for software to do crypto. Rather, it's a system for managing the secrets/keys used for crypto and making them available to the services that need them. It doesn't explicitly look at the content of secrets unless a plugin is used.
Understood! I'd looked at Keyczar in the past as a component of a system to manage secrets/keys, but I see it's actually providing about 0% of what Keywhiz does.
Filesystem interface just by itself is a big difference.
Keywhiz lets you manage things like MySQL or other configs that might contain usernames/passwords, passwords to unlock certificates, API keys, etc. If you don't have the resources or the option to modify applications to use a specific API, the filesystem might be your only viable solution.
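For the curious, here's a minimal sketch of what the filesystem approach looks like from an application's point of view. The mount point and secret names (/secrets/db_password, /secrets/api_key) are hypothetical examples I picked for illustration, not Keywhiz defaults:

    // Secrets show up as ordinary read-only files on a mount managed by
    // Keywhiz's filesystem client, so no Keywhiz-specific library is needed;
    // any language that can read a file works.
    import { readFileSync } from "fs";

    function readSecret(name: string): string {
      // Trim the trailing newline that secret files often carry.
      return readFileSync(`/secrets/${name}`, "utf8").trim();
    }

    const dbPassword = readSecret("db_password");
    const apiKey = readSecret("api_key");

Existing tools can be pointed at the same files without any code changes at all, which is exactly the "can't modify the application" case described above.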
The advice on min-width vs min-device-width strikes me as controversial:
"Therefore, you should avoid using *-device-width, since the page won't respond when the desktop browser window is resized."
Gmail, Facebook, the New York Times, and lots of other smart folks don't show their mobile-web versions if you make the browser viewport very small. Their sites have a min-width of ~980px when viewed on desktop.
I imagine this is because (a) their mobile sites use different interactions that require different DOM and JS, not just different CSS styles, and/or (b) those different interactions might not be appropriate on desktop even if you shrink the viewport and/or (c) it's confusing for the site to work differently when you resize your window and/or (d) they don't want to send unused JS/DOM to devices that aren't going to use it.
Good point. In lots of cases these sites are actually detecting the device class on the server and never returning a fully responsive site in the first place. There is certainly a balance to be struck between responsive and adaptive; in this case, the guidance is meant to give developers and users the chance to adapt more appropriately to the screen constraints at any time. Min-device-width is pretty much permanent, so it is harder to adapt when the user is on a big screen but only using half of it.
For most content-based sites there is not a huge amount of reason to have drastically different HTML, CSS, or JS. For apps, we are yet to cover that fully.
Personally I wish they did. On my 13" MacBook, I want to be able to make those windows smaller and still see the content without having to pan back and forth or zoom.
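To make the resize point concrete, here's a minimal TypeScript sketch (mine, not from the guide) of why viewport-based queries behave the way the guidance wants. The 980px figure is just the example breakpoint mentioned above, and the "wide-layout" class name is hypothetical:

    // A min-width query is evaluated against the viewport, so it re-fires
    // as the user resizes the browser window.
    const viewportQuery = window.matchMedia("(min-width: 980px)");

    function applyLayout(isWide: boolean): void {
      document.body.classList.toggle("wide-layout", isWide);
    }

    applyLayout(viewportQuery.matches);
    viewportQuery.addEventListener("change", (e) => applyLayout(e.matches));

    // A *-device-width query is evaluated against the screen, not the
    // viewport, so resizing a desktop window never triggers a change --
    // which is why the guide steers people away from it.
    const deviceQuery = window.matchMedia("(min-device-width: 980px)");
    console.log(deviceQuery.matches); // effectively fixed on a desktop

The trade-off the thread describes still applies: a query that responds to every resize is only useful if the page actually ships the DOM/JS for both layouts.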
Agreed. To quote Colin Percival's excellent article which addresses both topics:
"Assessing the security of software via the question "can we find any security flaws in it?" is like assessing the structure of a bridge by asking the question "has it collapsed yet?" -- it is the most important question, to be certain, but it also profoundly misses the point. Engineers design bridges with built-in safety margins in order to guard against unforeseen circumstances (unexpectedly high winds, corrosion causing joints to weaken, a traffic accident severing support cables, et cetera); secure software should likewise be designed to tolerate failures within individual components. Using a MAC to make sure that an attacker cannot exploit a bug (or a side channel) in encryption code is an example of this approach: If everything works as designed, this adds nothing to the security of the system; but in the real world where components fail, it can mean the difference between being compromised or not. The concept of "security in depth" is not new to network administrators; but it's time for software engineers to start applying the same engineering principles within individual applications as well."
Given it's possibly (likely) ostracizing and there's nothing categorically wrong with using "they" as a singular neuter pronoun, I think you're on the wrong side of this.
People should certainly stop fighting; the answer is to just roll with "they".
Thanks for all those links — I wasn't aware of those studies (especially not about the deliberate effort to force the singular "they" out of use) and I think they have definitely influenced my conscious personal writing style (I do tend to avoid gendered pronouns anyway).
I will agree that I was on the wrong side of this in holding that "he" is at least as preferable as "they", but I made another point too: that this entire topic is a giant bikeshed.
You say "people should certainly stop fighting", but do you then imply that fighting for "they" is justified? Personally, I don't think it is.
I use gender-neutral pronouns unless context dictates otherwise, but I want you to consider something.
If a policy (or lack thereof) is correlated with women feeling excluded or intimidated, it doesn't necessarily mean that the policy (or lack thereof) is a bad thing.
If women tend to feel intimidated by eye contact with men with whom they are in competition, it doesn't mean such eye contact should be banned. If women tend to feel excluded when the founding members of an organization or club are all men, it doesn't mean men should be forced to find a woman before founding an organization or club.
We're used to looking at a pronoun's antecedent to determine its subject. In the sentence, "Jacob finally published a post he'd been mulling for a long time," we don't object that it's unclear whether "he" refers to Jacob. I don't understand why "The user receives the data they requested" is so fundamentally different. You still look to the antecedent; it's still clear.
I grew up taking great pride in "correct" English usage, but I now think insisting on gendered pronouns causes harm without any tangible benefit.
In the most meaningful way, it could be Android for consoles. It's "freely licensable" for hardware manufacturers, so companies will be able to build new consoles without building their own OS -- the same dynamic that let HTC and Samsung become two of the world's leading phone manufacturers.
Imagine being a single mother with a young child. You're a great programmer. But you spend 90% of your time outside work raising your kid, so you can't make the time to make significant contributions to OSS.
...
You can't get a job that requires a Github OSS history. You're severely disadvantaged in job hunting when OSS contributions are a major metric.
If people tested your suitability for a job by pair programming with you, or looking at code samples, you'd look just as good as someone who contributes to OSS. But because OSS is an important metric, you're disadvantaged.
It's pretty clear that requiring a Github history is discriminatory. I wonder, though:
(1) Are there any companies that literally remove a person's resume from the running, even when their fit seems good in other ways, just because they do not have a public-facing GitHub repo? Or is this just something that people are talking about but not actually implementing?
(2) In what sense is any job requirement not discriminatory? Requiring 5 years of Ruby experience discriminates against those with only 6 months of Ruby experience. Requiring a Master's degree discriminates against those with a Bachelor's or no degree. Making the candidate do a programming test discriminates against people who don't perform well on tests. The question is whether the programmers a company hires based on a GitHub criterion are measurably "better" than those hired using different criteria. As far as I know, no such comparison has been done.
I could absolutely see myself using this as a screening mechanism. When I was hiring developers, I would get literally hundreds of resumes sent to me. Filtering through resumes is a waste of time. You cannot tell from that piece of paper (PDF, Word Doc, etc) how technically capable a person is, or if they would be a good fit for the team.
I do not have time to interview hundreds of people, so we have to apply some filtering. Are we going to potentially filter out a good candidate because they didn't take the time to make themselves stand out? Maybe, but I'm okay with that.
Good point. If you have to go through hundreds, then you're more or less looking for a reason to say no. Better to miss the odd good candidate than to waste weeks of productive time trying to make the process 100% perfect.
What makes her a great programmer? Some natural ability? Surely in the time she spent developing all of these skills she has produced something worth showing off? If she is such a great programmer, how is she maintaining her skills if all she is doing is working on what her employer puts in front of her?
If that single mother literally has no time outside of her 40 hour work week to be learning new things and staying current with technology, she will have a hard time finding new jobs regardless of the screening methods used.
This is also one contrived example. Not an entire "social class" of the programming population.
- Build a workflow around pull requests / review branches, à la GitHub/Bitbucket/Gerrit, so you can have them submit changes but an engineer reviews/verifies before anything is merged.
In general, empowering them to do this is an investment in the future and well worth it.