Hacker News | kerneis's comments

Thank you! This is a very useful one-pager; it answers many questions I had that I couldn't find answered in their documentation (being on mobile, I couldn't test with SSH).

https://newsletter.pnote.eu/p/life-in-weeks and it's interactive (you can make your own timeline)


What do you mean exactly by "push as a comment" and "pulls text comments"? Is it some sort of custom logic specific to your workplace?


Oops, poorly worded and using internal terminology. I'll try again ;)

We use some internal tooling to make things work, but the concepts are generic git and $forge.

Push a comment: I'll make a hand-wavy suggested edit, then a vim mapping basically performs a ":`<,`>w !curl". If you only used GitHub, then piping to something like "gh pr comment"¹ could perform a similar-ish role (albeit a little weaker).

Pull comments: We automate merges so that PR text (including replies) is available in the repo. The general discussion is attached as notes², and code comments (including --fixup commits) are processed to assign the correct attribution via trailers³ to the original commit at merge time. Most of the attribution functionality could be re-implemented with "git rebase --autosquash" and by providing a "git interpret-trailers" wrapper as $EDITOR.

¹ https://cli.github.com/

² https://git-scm.com/docs/git-notes

³ https://git-scm.com/docs/git-interpret-trailers
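
For the GitHub-only approximation, a rough sketch of both halves might look like the following. This is not our internal tooling, just an illustration assuming the gh CLI and a checked-out PR branch; the PR number, reviewer and trailer are made up:

  import json
  import subprocess

  pr = "123"  # hypothetical PR number

  # "Push a comment": send a snippet as a PR comment; the vim mapping above
  # would pipe the visual selection into this instead of a hard-coded string.
  subprocess.run(["gh", "pr", "comment", pr, "--body-file", "-"],
                 input="suggested edit: rename foo() to frobnicate()",
                 text=True, check=True)

  # "Pull comments": fetch the PR discussion and attach it to HEAD as a git note.
  comments = json.loads(subprocess.check_output(
      ["gh", "api", f"repos/{{owner}}/{{repo}}/issues/{pr}/comments"]))
  note = "\n\n".join(f"{c['user']['login']}: {c['body']}" for c in comments)
  subprocess.run(["git", "notes", "append", "-m", note, "HEAD"], check=True)

  # Attribution: add a trailer to the last commit via git interpret-trailers.
  msg = subprocess.check_output(["git", "log", "-1", "--format=%B", "HEAD"], text=True)
  msg = subprocess.run(
      ["git", "interpret-trailers", "--trailer", "Reviewed-by: Alice <alice@example.com>"],
      input=msg, capture_output=True, text=True, check=True).stdout
  subprocess.run(["git", "commit", "--amend", "-F", "-"], input=msg, text=True, check=True)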


Not sure if it was DARPA, but the web server used Tame, a custom event-driven framework, at a time when the threads vs. events debate was all the rage in the academic community. (I did a PhD on the topic and that's how I learned about OkCupid!)

You can read the paper they published: "events can make sense" https://www.usenix.org/legacy/events/usenix07/tech/krohn.htm...

I met some ex-OkCupid engineers at a later company who said the framework was smart but a pain to maintain in the long run.


Both exist.

Trivia time: Laeticia (née Laetitia) was married to Johnny, the famous French rock singer; Estelle was married to David, son of Johnny. Note that Johnny is dead now, Laeticia has remarried twice but kept Johnny's last name, Estelle is divorced, and David is not the son of Laeticia, as Johnny got married several times (David's mother is the French actress Sylvie Vartan).


To summarize: Laeticia is basically the step-mother of Estelle. But she’s 9 years younger.


Well, that's just France for you.



They're still used in France even if mostly by old people.


Slight tangent: it looks like Crisp is also available on Linux, according to their website.


There are several points in the article where the examples didn't make sense to me at all. Overall an interesting article, but I'm either a bit dense this morning, or it's sloppy in the details and explanations.

For instance, in table 3, it looks like they excluded backend tasks {0,1} (for frontend tasks {0, 1}) then {2,3} (for frontend tasks {2,3}) in the N=10 case, but backend tasks {1,2} then {3,4} in the N=11. Why the discrepancy? I get that it helps them make the point about task 3 changing subset, but it's inconsistent with the round-robin exclusion of left-overs presented in the previous paragraph.

Another sentence that I couldn't make sense of is: "If these [tasks 2 and 4] carry over to the subset of the next frontend task, you might get the shuffled backend tasks [7, 2, 0, 8, 9, 1, 4, 5, 3, 6], but you can't assign backend task 2 to the same frontend task." The "same frontend task" as what? Obviously not the one task 2 was already assigned to (the most intuitive reading to me), since task 2 was precisely not assigned and is a left-over. But then again, what does this mean?


> in table 3

Figure 3 is an (arbitrary) example of round-robin subsetting with randomized subset ordering. The point is to demonstrate how bad backend churn is with this algorithm, by inspecting a typical example of the decisions it makes.

> The "same frontend task" as what? [...] what does this mean?

It's not phrased great, but it's also tricky to communicate. My read is this: given the backend shuffles [9, 1, 3, 0, 8, 6, 5, 7, 2, 4] and [7, 2, 0, 8, 9, 1, 4, 5, 3, 6], if you combine these shuffles to choose backend assignments, you end up with the subsets {9, 1, 3, 0}, {8, 6, 5, 7}, {2, 4, 7, 2}, {0, 8, 9, 1}, {4, 5, 3, 6}. That third subset means the third frontend ends up with only three distinct backends, even though you want it to have four.
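
A quick way to see it, just slicing the concatenated shuffles into windows of four (a sketch of my reading, not the article's code):

  shuffle_a = [9, 1, 3, 0, 8, 6, 5, 7, 2, 4]
  shuffle_b = [7, 2, 0, 8, 9, 1, 4, 5, 3, 6]
  combined = shuffle_a + shuffle_b

  # each frontend takes the next window of 4 backends
  subsets = [combined[i:i + 4] for i in range(0, len(combined), 4)]
  print(subsets)
  # [[9, 1, 3, 0], [8, 6, 5, 7], [2, 4, 7, 2], [0, 8, 9, 1], [4, 5, 3, 6]]
  # the third window repeats backend 2, so that frontend sees only 3 distinct backends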

The rest is reductio ad absurdum: reasoning through the ways you might fix this, and explaining why they in turn don't work. (I believe there's also an implicit requirement that the final algorithm needs no dynamic/runtime coordination, only static before-the-fact coordination amongst the frontends, i.e. agreement on a hash seed for a given subset and, say, which hashing strategy the frontends would use.)


(article author here)

> That third subset means the third frontend only has a subset of three backends, even though you want it to have four.

This explanation is correct, thanks. Alas, word limits demand brevity.

> implicit assumption about the requirement that the final algorithm require no dynamic/runtime coordination

An earlier iteration of this article included coordination as one of the properties, but this unfortunately had to be cut. AFAICS, the only other two kinds of coordination are “frontend tasks talk to each other” or “frontend tasks ask a subsetting service for their subset”. Within Google, both of these options are unacceptable: we either introduce the risk that a rogue frontend task brings down all the frontends, or introduce new unappealing failure modes (what do you do if the subsetting service is unavailable?). There is potential for other subsetting algorithms in this space, and while I’d be excited to see them, I’m mildly sceptical about their practicality at scale.


Thanks for confirming!

Yeah, the brevity thing is always tricky: the classic problem with any academic paper is that you need to assume your reader has some level of background that enables them to follow your reasoning; otherwise you end up derailing your paper by explaining too much of the background material.

FYI your claim re coordination isn't true across the board. The second option is used for some services (Slicer has an opt-in integration on the L2s).

(xoogler here ;)


> For instance, in table 3, it looks like they excluded backend tasks {0,1} (for frontend tasks {0, 1}) then {2,3} (for frontend tasks {2,3}) in the N=10 case, but backend tasks {1,2} then {3,4} in the N=11. Why the discrepancy?

With N = 10, there will be N mod k = 10 mod 4 = 2 leftover tasks, and so the round-robin fashion excludes {0, 1} then {2, 3}. However for N = 11, there will be N mod k = 11 mod 4 = 3 leftover tasks, so the round-robin fashion excludes {0, 1, 2} then {3, 4, 5}.
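
In code, that leftover arithmetic looks like this (a sketch of the exclusion pattern described above, with k = 4 and two rounds; not the article's implementation):

  k = 4  # subset size per frontend task
  for N in (10, 11):
      leftovers = N % k  # backends that don't fit evenly into subsets of k
      # consecutive blocks of leftover backends are excluded, round-robin
      rounds = [list(range(r * leftovers, (r + 1) * leftovers)) for r in range(2)]
      print(N, rounds)
  # 10 [[0, 1], [2, 3]]
  # 11 [[0, 1, 2], [3, 4, 5]]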

But as joatmon-snoo correctly said, the more important point is demonstrating how bad backend churn is with this algorithm.


OK, that makes a lot of sense. Thanks for taking the time to clarify!

> But as joatmon-snoo correctly said, the more important point is demonstrating how bad backend churn is with this algorithm.

Yes, again the overall point came across clearly, but faced with specific examples I like to dive into the details to check my understanding of how things work. Otherwise, it's easy to overlook key but subtle details.


Other typos: Bbackend; missing math formula in "(for a frontend task m, this is )" — should be ⎣m/L⎦ I guess. Proofreading at ACM seems to be lacking.


I found the blog post slightly confusing because it never explicitly spells out that endorsing a new node is a manual operation that the administrator has to perform from one of the trusted nodes. Of course this is what you'd want; anything automatic would defeat the purpose of tailnet lock. But still, not seeing it mentioned in either the text or the pictures made me wonder what I had missed, until I watched the video, which features that very step as part of the demo.


I had the same issue. I think the idea is that you build something yourself on a trusted node that decides whether or not to endorse a new node.

Off the top of my head, I'd do something dead simple like verifying that the user account matches our domain, and also querying an inventory system to confirm it is indeed a device we manage through MDM (though I'm not sure how this would work for mobile devices; we don't MDM those).

When a new device attempts to join, you should have some data on it via the API (user, OS, Tailscale version, source IP, machine name). You could use that data to decide whether to endorse it or not.
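
Something along these lines, entirely hypothetical: the device fields, domain and inventory check are placeholders, and the actual endorsement would be a `tailscale lock sign` run from a trusted node:

  import subprocess

  def mdm_inventory() -> set[str]:
      # query your MDM / inventory system here (placeholder)
      return {"alice-laptop", "bob-laptop"}

  def should_endorse(device: dict) -> bool:
      user_ok = device["user"].endswith("@example.com")    # matches our domain?
      managed = device["machine_name"] in mdm_inventory()  # known to MDM?
      return user_ok and managed

  new_device = {  # data you'd pull from the API when the node shows up
      "user": "alice@example.com",
      "os": "macOS",
      "machine_name": "alice-laptop",
      "node_key": "nodekey:0123abcd",
  }

  if should_endorse(new_device):
      subprocess.run(["tailscale", "lock", "sign", new_device["node_key"]],
                     check=True)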


(Tailscalar and a tailnet lock author here)

If you're okay with trusting Tailscale's control plane, we have a feature for exactly this use case! It's called Device Authorization: https://tailscale.com/kb/1099/device-authorization/

You could also use tailnet lock in this fashion, by issuing a `tailscale lock sign` command for the new node once you've verified the provenance of the new device. Because it involves signatures with keys on your device, it could never be as simple as a REST API, but maybe we could offer a command that's easier to automate, or better client library support (suggestions welcome!)


(Tailscalar and a tailnet lock author here)

Thanks for the feedback!! Writing the documentation for how this worked was a challenge, and it's good to hear which pieces we need to call out more strongly in the future.

If you're interested in gory details around tailnet lock internals, we have the beginnings of a whitepaper here: https://tailscale.com/kb/1230/tailnet-lock-whitepaper/

