I'm still trying to wrap my head around the anti-forking sentiment among the community and how tight the governance structure of the project is. Open source is hard, and people will always have unfair expectations of open source, but if I'm to take the article at face value on the forking drama, what's so bad about letting people tinker with it and release their own forks of Elm? If you're burned out, what's the harm in letting others step in more?
Because it means having to give up control and accept that other people are going to contribute - even when they don't follow your own long-term vision.
The Elm Vision is that every library should be infinitely powerful, trivial for novices to learn, and written 100% in Elm. Any deviation from this is seen as a failure of the entire Elm project and a personal embarrassment. The problem is that these standards are way too high and essentially impossible to achieve for any nontrivial library.
The biggest risk of letting other people step in is that they may succeed. They're going to write code which is incredibly useful to a lot of people, and it's going to become popular. But those developers aren't strictly following the Elm Vision. They are pragmatic, and they are willing to cut some corners in order to actually ship something. This of course leads to difficult questions like "It works great in library XYZ, why don't we just adopt that in Core Elm?". But you can't, because it breaks your Elm Vision, and you don't have infinite time and money to reinvent the wheel for dozens of core web features until it is 100% perfect according to your Vision. You're rapidly losing your grip, and people start talking about a fork.
Elm is becoming a success, but your Elm Vision is rapidly degrading and at risk of failing completely. You can't let them fork. You must maintain control. Your Vision must survive, and it's the only way. You've spent so much time and effort on this. You can't let it fail. It's your life's work. You don't want all that effort to go to waste. You don't want to be a failure. We just have to make sure everyone understands the Vision, and everything will work out okay. We just have to properly educate the dissenters and all will be fine.
---
I don't live in Evan's head, so I'm not going to claim this is his train of thought. But if you're looking for a reason not to let other people step in, here's one possible explanation for you.
I've written an at-scale production backend in Node.js and can very much stand by the decision to use Node over Elixir or Go (which I was considering at the time). I think fundamentally, the power of a JS-based backend is its pragmatism: it's not the best at most things, but it comes very close in so many categories that it's a safe option for a lot of use cases.
> It seems like people jumped to node based on some performance promises that didn't really pay off (IMO). And since then, we have newer options like Rust, Go, and Elixir as performant back-end options, and even older choices like Ruby and Python have continued to improve.
I'd agree that Node.js performance is generally not the best reason to be writing a backend in it, since a static language will often yield better performance, but for the amount of dynamic power you get, it's extremely performant by default. The next most performant dynamic language for I/O is, like you said, probably Erlang/Elixir, but V8 is generally understood to have better CPU-bound performance than BEAM.
> Seems like the standard arguments would be that developers already know JS, and that you can share code with the browser. I don't find these highly compelling.
I've found that developers already knowing JS is a very practical reason, if not ideological. I'm in a team with a lot of generalists who like to work full-stack, and being able to use the same mental models and syntax is a lot of cognitive load lifted off our shoulders. It also doubles the hiring pool of people who can hit the ground running on the backend, because now anyone who has experience with JS on the frontend can jump over to the backend with relatively little training.
The other key reason for a backend in JS is that the community is extremely large, which means that a lot of the troubleshooting I'd have to do in languages with smaller communities is done for me by someone who was kind enough to post a workaround online. This saves me a lot of time and energy, as does the plethora of packages.
And the performance argument isn't even just about CPU time, right? The fact that JS is heavily event-friendly, and all of its IO APIs are non-blocking by default, gives it an automatic advantage over busy-waiting languages like Python, and also languages where concurrency means writing threads manually. If your web server spends most of its time on IO (network, DB, file system), as many do, JS acts as a lightweight task-delegator to a highly parallel and performant native runtime.
I haven't worked on a large-scale JS back-end myself, but this is the case I've heard others make.
Sure, if you use Python's async feature. But my understanding is that it's relatively uncommon; blocking IO is still the norm, right? I for one have worked in or around a couple of nontrivial Python servers, and I've never once seen an await statement. My understanding (correct me if I'm wrong) is that this comes down to it being newer, and having worse ecosystem support (more "synchronous-colored" APIs, fewer battle-tested frameworks, etc). It's not a first-class citizen like it is in the JS ecosystem. [1]
[1] Technically JavaScript's async/await syntax came later, but it's just sugar over Promises, which have been around for much longer, and those are built atop the event loop, which has been core to the language since day 1.
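To illustrate the "sugar over Promises" point, here's a small sketch. `fetchValue` is just a stand-in for any async source (a DB query, an HTTP call); the two functions below behave identically.

```javascript
// A stand-in async data source; in real code this might be a DB or HTTP call.
const fetchValue = () => Promise.resolve(41);

// Promise style: chain with .then
function withPromises() {
  return fetchValue().then(v => v + 1);
}

// async/await style: same behavior, written as if it were synchronous
async function withAwait() {
  const v = await fetchValue();
  return v + 1;
}

withPromises().then(console.log); // 42
withAwait().then(console.log);    // 42
```

Both compile down to the same event-loop mechanics; await is just nicer syntax for the .then chain.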
In non-async Python, generally the thing that blocks is a thread -- something JavaScript doesn't even have! A different thread will happily run in the meanwhile.
Right, but some other JavaScript on the same thread will run while a different piece of JavaScript is awaiting. That's why JavaScript can get away with not having threads. Also, any number of background threads will be running at any given time to read data off of disk, load and process network requests, load data from a DB, delegate commands to system processes, etc, in true parallel with the JS code. When one finishes, it'll put an event on the event loop and JS will pick it up when it gets the chance.
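A quick sketch of that interleaving: two "concurrent" workers on a single thread, where every await hands control back to the event loop so the other worker can run. (`sleep` here stands in for any pending IO.)

```javascript
// Two concurrent tasks on one thread: each await yields to the event loop,
// so the other task runs in the meanwhile -- no OS threads involved.
const sleep = ms => new Promise(resolve => setTimeout(resolve, ms));
const log = [];

async function worker(name) {
  for (let i = 0; i < 2; i++) {
    log.push(`${name}:${i}`);
    await sleep(1); // hand control back while the "IO" is pending
  }
}

Promise.all([worker("A"), worker("B")]).then(() => {
  console.log(log.join(" ")); // the two workers' steps interleave
});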
Designing a solid contract can be useful even when you're one person, for the same reason that decoupling classes or concerns is useful. I don't need to consult another person or read the code to understand what it's expecting (especially true in contexts where documentation may not have been written yet).
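As a small sketch of what "contract" can mean in practice (the `createUser` function and its fields are hypothetical, just for illustration): an explicit check at the function boundary documents what the function expects, so future-you doesn't have to re-read the implementation.

```javascript
// Hedged sketch: a tiny explicit "contract" at a function boundary.
// The checks double as documentation of what the function expects.
function createUser({ name, age }) {
  if (typeof name !== "string" || name.length === 0)
    throw new TypeError("name must be a non-empty string");
  if (!Number.isInteger(age) || age < 0)
    throw new TypeError("age must be a non-negative integer");
  return { name, age, createdAt: Date.now() };
}

console.log(createUser({ name: "Ada", age: 36 }).name); // "Ada"
```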
The main use case I found was that my production database had a bunch of changes we did manually (early stage startup and all), so I used Migra to figure out what changes we needed to make to keep the migrations in sync with what was actually in production.
The more common use case is to apply this idea in development: experiment with different schemas manually, and then use a tool like Migra to figure out what migration to write, without keeping in your head what changes you've made.
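As a rough sketch of that workflow (the database names here are hypothetical; check Migra's docs for the exact flags your version supports), you point migra at two Postgres databases and it prints the SQL needed to turn the first schema into the second:

```shell
# Hypothetical databases: "app_prod" reflects what migrations produce,
# "app_dev" is the schema you've been hand-editing while experimenting.
# migra diffs the two and prints the SQL to bring app_prod in line with app_dev.
migra postgresql:///app_prod postgresql:///app_dev > pending_migration.sql
```

You then review the generated SQL and turn it into a proper migration, rather than reconstructing your manual changes from memory.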
Throw lots and lots of funding at a bikeshare system so that the system doesn't need to care about profit. Hire competent, driven people for the system so they feel motivated to make the system better even without any financial incentives.
"A room with four feet of water was built to last for several years, with the intention of generalizing to the broader population; but with each new room built to the original design, the old room becomes less suitable for real-life use. So why do we keep asking the owners of the older rooms to build a new room with four feet of water?
"It doesn't provide a room with many plots that have no walls? Those aren't as useful."
Is it that hard to design a house with multiple plots over it?