Grids! I knew I forgot something in my list. Updated.


Fair enough. If it makes you feel better, I have a very different project where I have deliberately avoided any AI-generated code: https://github.com/jchester/spc-kit


I think the confusion arises because of the difference between optimization and control, which are superficially similar.

Having control lets you see if things changed. Optimization is changing things.

This team seems to be focused on control. I assume optimization is left to the service teams.


I think by control you mean observability?

I get that they're different, but the whole point is optimization here. They're not gathering performance metrics just to hang them up on a wall and marvel at the number of decimal points, right? They presumably invested all the effort into this infrastructure because they think this much precision has significant ROI on the optimization side.


I'm using "control" in the statistical process control sense, where it means "we can tell if variation is ordinary or extraordinary".

To me it seemed clear that the paper is about detecting regressions, which is control under my definition above. I still think of that as distinct from optimization.
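To make "control" concrete, here is a minimal sketch (mine, not anything from the paper) of a Shewhart-style individuals chart: estimate a centre line and 3-sigma limits from a baseline, then flag points outside those limits as extraordinary variation.

```python
# Sketch of SPC-style "control": flag extraordinary variation in a
# series of latency measurements using 3-sigma Shewhart limits.
# Illustrative only; real SPC uses moving-range sigma estimates and run rules.
from statistics import mean, stdev

def control_limits(baseline):
    """Centre line and 3-sigma limits estimated from baseline samples."""
    centre = mean(baseline)
    sigma = stdev(baseline)
    return centre - 3 * sigma, centre + 3 * sigma

def out_of_control(samples, baseline):
    """Indices of samples outside the control limits."""
    lo, hi = control_limits(baseline)
    return [i for i, x in enumerate(samples) if not lo <= x <= hi]

baseline = [10.2, 9.8, 10.1, 10.0, 9.9, 10.3, 10.0, 9.7]  # e.g. ms latencies
new = [10.1, 9.9, 14.5, 10.2]  # 14.5 ms looks like a regression
print(out_of_control(new, baseline))  # → [2]
```

Note that nothing here tries to make anything faster; it only answers "did the process change?", which is the control question as opposed to the optimization question.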


Right, but what is the point of optimizing?

It often isn't to make things go faster for an individual user (oftentimes the driving factor in latency is not computation but inter-system RPC latency, and so on). The value is in bin-packing more request processing into the same bucket of CPU.

That can have latency wins, but it may not in a lot of contexts.


I don't agree. This is basically an elaborate form of statistical process control, which has been proving itself useful and effective for nearly a century. We can quibble about the thresholds and false positive rates, but I think the idea of automating regression detection is perfectly sound.


Statistical process control is a sound idea in theory. But when you move from assembly lines to real-world complex dynamic systems, like operating systems and processes handling random load, it's less clear that it rests on a solid mathematical foundation. What's happening in that paper isn't starting from some principled idea; it's filtering out patterns deemed "noise" and adjusting the levels of those filters until they generate "results". If you read between the lines, it's clear the engineers being supported by this tool aren't really relying on it, which tells you something.


SPC gets used on complex dynamic systems all the time. It takes more work and more nuance, but it's doable. I don't see a categorical error here, it's about fine-tuning the details.


Maintaining this capability isn't free; it is of dubious benefit, and there are much better alternatives.

On a cost-benefit analysis this is a slam dunk.


What are these "much better alternatives"?


https://www.sigstore.dev/

The emerging standard for verifying artifacts, e.g. in container image signing, npm, Maven, etc.

https://blog.sigstore.dev/npm-public-beta/

https://www.sonatype.com/blog/maven-central-and-sigstore


Emerging standard = not yet the standard


Nobody said it was. The point is that it's better.


And my point is that “it’s better” and “new standard” are not compelling emotional directives to me after hearing them Integer.MAX times.


9 months sounds like plenty, but don't be hasty.

I've been unemployed for almost a year. I would not take a buyout in this engineering market. It is brutal.


Sure, but WP Engine will need to hire people to work on ForkPress. Or people can start a ForkPress foundation that actually includes stakeholders, and get grants to work there. Someone will need to maintain the fork, which is definitely coming.

After all, let's say that WP Engine gives in and pays Matt 10% of their revenue. Then… who's next? Can Automattic ever be trusted as a steward of the software? I suppose if Matt sells the company it might be possible. No: the fork is inevitable, and some company (or companies) will need to sponsor it.


So have I, and I would still be very tempted to take the 9 months' severance.

Don't fool around with tyrants:

With something like this going down, I'd wonder whether my employer will even be around in 9 months. I'd also worry that my employer could "go for broke" and simply close its doors on me, leaving me with nothing.

So, yeah, I'd have to really love my job to stay.


Yeah, this job market sucks, but as far as outside views of Automattic go right now, I have a feeling there is more trouble to come. Considering this, I would take the payout as a cushion while finding something new, potentially in a new industry entirely.


I performed a similar analysis on RubyGems and found that, of the top 10k most-downloaded gems, fewer than one percent had valid signatures. That, plus the general hassle of managing key material, means this was a dead end for large-scale adoption.
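The tally itself boils down to a simple fraction; a hypothetical sketch (the gem names and verification results below are invented placeholders, not real audit data or the RubyGems API):

```python
# Hypothetical sketch of the audit tally: given per-gem signature-check
# results, compute what fraction of sampled gems carry a valid signature.
# All input data here is invented for illustration.

def signed_fraction(results):
    """results: dict of gem name -> bool (signature present and valid)."""
    if not results:
        return 0.0
    return sum(results.values()) / len(results)

results = {
    "gem-a": False,  # placeholder values, not real audit output
    "gem-b": False,
    "gem-c": True,
    "gem-d": False,
}
print(f"{signed_fraction(results):.0%} of sampled gems verify")
```

The hard part of the real analysis is the signature verification behind each boolean, not the arithmetic.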

I'm still hopeful that sigstore will see wide adoption and bring authorial attestation (code signing) to the masses.


I agree. Where is the Let's Encrypt for signing? Something you could download and get running in literally a minute.



I don't think Sigstore is a good example. I just spent half an hour trying to understand it, and I am still left with basic questions like "Does it require me to authenticate with GitHub & friends, or can I use my own OIDC backend?" It seems like you can, but there are cases where you need to use a blessed OIDC provider, though you can override that while self-hosting, and there are config options for the end user to specify any OIDC provider? And the entire trust model also relies on the OIDC backend being trustworthy?

The quickstart guide looks easy enough to follow, but it seems nobody has bothered to document what exactly is happening in the background, and why. There are literally a dozen moving pieces and obscure protocols involved. As an end user, Sigstore looks like a Rube Goldberg trust machine to me. It might as well be a black box.

PGP is easy to understand. Let's Encrypt is easy to understand. I'm not an expert on either, but I am reasonably certain I could explain them properly to the average high schooler. But Sigstore? Not a chance, and in my opinion that alone makes it unsuitable for its intended use.


The important difference is that Sigstore enables a "single-click" signing procedure with no faffing around with key material. How it works is much less important than the user experience, which is vastly better.


> How it works is much less important than the user experience, which is vastly better.

I disagree. If it requires a Magic Trust Box which can be Trusted because it is made by Google and Google is Trustworthy, it has exactly zero value to the wider community. It doesn't matter how convenient the user experience is when it isn't clear why it provides trust.

Let's say I created an artifact upload platform, where the uploader can mark a "This file is trustworthy" checkbox, which results in the file being given a nice green happy face icon in the index. It is incredibly convenient and provides a trivial user experience! And it's of course completely legit and trustworthy because *vague hand waving gestures*. Would you trust my platform?


Specifically, the CA signing the code-signing certificates (which are valid for 10 minutes) is https://github.com/sigstore/fulcio.


1. SPC kit [0]. It once made it to the front page! [1]

It's an SQL library for doing statistical process control (SPC) calculations.

This has been a labour of love for about 2 years now. I work on it sporadically. Recently I got more disciplined about what I am working on and I am slowly closing the gap on a first 0.1 release.

2. Finding work. As much fun as it is to tinker, I am nursing the standard crippling addiction to food and shelter. I am also nursing an increasing loathing for LinkedIn and wish to be free of having to check it.

[0] https://github.com/jchester/spc-kit

[1] https://news.ycombinator.com/item?id=39612775


I remember seeing a VMware-internal presentation on the DDlog work which led to Feldera and being absolutely blown away. They took a stream processing problem that had grown to an hours-deep backlog and reduced it to sub second processing times. Lalith & co are the real deal.


Thank you jacques_chester! Piping all that credit to my co-founders Mihai and Leonid, the key inventors.


Also Gomponents, a similar project for Go: https://www.gomponents.com

