Hacker News | jacques_chester's comments

At Shopify I was the person who first proposed that we needed to stump up $$$ for RubyGems (and only by implication Ruby Central).

This is not what I had in mind and now I'm embarrassed that I helped make it possible.


Sounds like Shopify has some leverage then to open a line of comms with Ruby Central. "Explain yourselves or we will pull funding"


The problem is that Shopify is leveraged by DHH (who is on their board) to be the financial support referenced elsewhere in today's discourse. Shopify is a bad actor here.


If that's the case, it sounds contradictory to their status as a 501(c)(3), and could get their tax-exempt status pulled if it's true.


I should add, to clarify: I don't work at Shopify anymore and I'm not speaking for them. Purely a personal view.


Oh, this old chestnut. "Just do what the distros do".

OK, sure, let's pencil this out.

Debian has ~1k volunteers overseeing ~20k packages. Say the ratio is 20:1.

npm alone -- not counting other ecosystems, just npm -- has 3 million packages.

So you'd need 150k volunteers. One hundred and fifty thousand unpaid individuals, not counting original authors.

For one repo.

"Nonsense", you riposte. "Only maybe 100k of these packages are worth it!"

Cool, cool. Then you'd need "only" 5 thousand volunteers. Debian maxed out at 1k and it is probably the source of the most-used software in history. But sure, we'll find 5 thousand qualified people willing to do it for free.

Oh, but how do you identify those 100k packages? OK, let's use download count. Or maybe reference count. Network centrality perhaps? Great, great. But some of them will be evicted from this paradise of rigorous repackaging. What replaces them? Oh, shoot, we need humans to go over up to 3 million packages to find the ones we want to keep.

What I need distro boosters to understand is that the universe of what is basically a package manager for large C libraries is at least two orders of magnitude smaller than everything else, bordering on three if you roll all the biggest repos together. The dynamics at language ecosystem scale are simply different. Yelling at the cloud that it should actually be a breeze isn't going to change things.


There are probably 5k libraries and frameworks worth paying attention to, coming from OSS communities with organizational structures similar to the Eclipse Foundation or Apache. The rest is either junk, low-risk solo-maintained projects, or corporate stuff maintained by someone on salary.


> Oh, this old chestnut. "Just do what the distros do"... The dynamics at language ecosystem scale are simply different.

The reason for the unwieldy scale might be the lack of proper package inspection and maintenance, which the dreaded old chestnuts do provide.

With proper package management, the number of packages will go down while their quality will go up, it's a win-win.

Can that be done for all packages at once? No, just give a mark of quality to the packages whose authors or maintainers cared to move to the new process. The rest produce a warning - "package not inspected for quality". Done!


Glad to hear it's all so simple. So you'll have no problem setting it up and finding thousands of volunteers to help, right?


Yes, I'm perfectly fine with setting up and recruiting volunteers for important software initiatives, and no, I'm not going to do that for npm before they fix the mess they themselves created. There are more productive ways to get the job done without using npm. It's good that we have choices.

What I advised doesn't require "thousands of volunteers"; you can start with one. But that's not going to be me, because you might be right - what Linux distros are doing might be impossible in the npm community given the widespread 'do-first-think-later' attitude. As I said, it's good we have other choices.


I disagree with the theses in this piece.

1. "2FA doesn't work". Incorrect. MFA relying on SMS or TOTP is vulnerable to phishing. Token- or device-based MFA is not. And indeed GitHub sponsored a program to give such tokens away to critical developers.

In 2021.

2. "There's no signing". Sigstore support shipped in like 2023.

The underlying view is that "Microsoft isn't doing anything". They have been. For years. Since at least 2022, based on my literal direct interactions with the individuals directly tasked to do the things that you say aren't or haven't been done.

I have no association with npm, GitHub or Microsoft. My connection was through Shopify and RubyGems. But it really steams me to see npm getting punched up with easily checked "facts".


Most of the biggest repositories already cooperate through the OpenSSF[0]. Last time I was involved in it, there were representatives from npm, PyPI, Maven Central, Crates and RubyGems. There's also been funding through OpenSSF's Alpha-Omega program for a bunch of work across multiple ecosystems[1], including repos.

[0] https://github.com/ossf/wg-securing-software-repos

[1] https://alpha-omega.dev/grants/grantrecipients/


Trunk-based development, by itself, is a fool's errand.

But combine it with TDD & pairing and it becomes a license to deliver robust features at warp speed.


I don’t follow. Regardless of where you merge, are you not developing features on a shared branch with others? Or do you just have a single long development branch and then merge once “you’re done” and hope that there’s no merge conflicts? But regardless, I’m missing how reviews are being done.


It's not for everyone. Some people have excellent reasons why it isn't workable for them. Others have had terrible experiences. It takes a great deal of practice to be a good pair and, if you don't start by working with an experienced pair, your memories of pairing are unlikely to be fond.

However.

I paired full-time, all day, at Pivotal, for 5 years. It was incredible. Truly amazing. The only time in my career when I really thrived. I miss it badly.


Interesting watching this part of the landscape heating up. For repos you've got stalwarts like Artifactory and Nexus, with upstart Cloudsmith. For libraries you've got the OG ActiveState, Chainguard Libraries and, until someone is distracted by a shiny next week, Google Assured Open Source.

Sounds like Pyx is trying to do a bit of both.

Disclosure: I have interacted a bunch with folks from all of these things. Never worked for or been paid by, though.


It's a great feature, but GitHub's parser chokes on it.

Compare:

https://github.com/jchester/spc-kit/blob/eb2de71d815b0057e20...

To:

https://github.com/jchester/spc-kit/blob/main/sql/02-spc-int...

Basically the original rendering makes me look incompetent to a casual skimmer. Plus tools like JetBrains IDEs can suss out what comments belong to what DDL anyway.


"The web interface to the version control system doesn't parse the here-string correctly" isn't really a criticism of the PostgreSQL extension. It's a bug in the syntax highlighting.

The COMMENT feature isn't even a good choice for a VIEW, PROCEDURE, or FUNCTION, each of which already supports comments inline in the object definition on the server. No, the main benefits are adding comments to objects that DON'T retain them, like a TABLE, COLUMN, CONSTRAINT, ROLE, etc.
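To make the distinction concrete, here's a minimal sketch (the table, column, and function names are made up for illustration):

```sql
-- A function can carry its documentation inline, in the body itself:
CREATE FUNCTION add_one(n integer) RETURNS integer AS $$
  -- This comment survives because it lives inside the function definition.
  SELECT n + 1;
$$ LANGUAGE sql;

-- A table or column has nowhere to put an inline comment that the server
-- retains, so COMMENT ON attaches one to the catalog entry instead:
CREATE TABLE measurements (
  id      bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  reading numeric NOT NULL
);

COMMENT ON TABLE measurements IS 'Raw samples, one row per observation.';
COMMENT ON COLUMN measurements.reading IS 'Sensor value in native units.';
```

Running \d+ measurements in psql then shows those comments alongside the table definition.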


Since the project consists of multiple SQL files already, a workaround might be to split out all the “comment on” statements from each file into new files following them.

So in file 02-… you have your “create schema”, “create view” and so on. And then in file 03-… you have only the “comment on” statements that go with the things from file 02. And then file 04-… contains “create schema” and “create view” and so on, and file 05-… has the “comment on” statements for file 04-….

And in addition you could then add dash-dash (--) comments in 02 and 04 referring to files 03 and 05. And at the top of files 03 and 05, mention that these are valid SQL for PostgreSQL and that GitHub has trouble rendering them properly.

It’s a bit messy of course, but that’s why I say it’s a possible workaround rather than a great solution. Could be worth considering and trying, anyway.
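Sketched out with made-up file and object names, the layout would look something like this:

```sql
-- 02-definitions.sql (hypothetical file name): definitions only.
CREATE SCHEMA spc;
CREATE VIEW spc.control_limits AS
  SELECT 1 AS placeholder;  -- made-up view body
-- See 03-definition-comments.sql for the COMMENT ON statements.

-- 03-definition-comments.sql: comments only.
-- NOTE: valid PostgreSQL; GitHub's highlighter just renders it poorly.
COMMENT ON SCHEMA spc IS 'Statistical process control objects.';
COMMENT ON VIEW spc.control_limits IS 'Control limits per sample window.';
```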


> Brian Kernighan and Rob Pike

Most of us aren't Brian Kernighan or Rob Pike.

I am very happy for people who are, but I am firmly at a grug level.


This! Also my guess would be Kernighan or Pike aren't (weren't?) dropped into some random codebase every now and then, while most grugs are. When you build something from scratch you can get by without debuggers, sure, but in a foreign codebase a stupid grug like me can do much better with tools.


This triggered some associations for me.

Strongest was Cells[0], a library for Common Lisp CLOS. The earliest reference I can find is 2002[1], making it over 20 years old.

Second is incremental view maintenance systems like Feldera[2] or Materialize[3]. These use sophisticated theories (z-sets and differential dataflow) to apply efficient updates over sets of data, which generalizes the case of single variables.

The third thing I'm reminded of is Modelica[4], a language where variables are connected by relations (in the mathematical sense). So while A = B + C can update A when B or C change, you can also update just A and B, then find out what C must have become.

[0] https://cells.common-lisp.dev

[1] https://web.archive.org/web/20021216222614/http://www.tilton...

[2] https://www.feldera.com

[3] https://materialize.com

[4] https://modelica.org


> Strongest was Cells[0], a library for Common Lisp CLOS. The earliest reference I can find is 2002[1], making it over 20 years old.

How about Microsoft DirectAnimation[1] from 1998, literally designed under the direction of Conal Elliott? Serious question, for what it’s worth, I’ve always wondered if all discussions of this thing are lost to time or if nobody cared for it to begin with.

[1] http://sistemas.afgcoahuila.gob.mx/software/Visual%20Basic%2...


... or Visicalc, TK/Solver, etc.

I've always been baffled that people think spreadsheets are like dataframes, when the really interesting thing has always been that you can write formulas that refer to each other and the engine figures out the updating. Most of the times I've written a spreadsheet I haven't used the grid as a grid, but just as a place to put some labels, some input fields and formulas.


Well, it's both: an easy way to compute in a dataframe context and a reactive programming paradigm. Combined, they give a powerful paradigm for throwing together data-driven UI, albeit one that isn't scalable (in terms of maintenance, etc.).

