I have worked with some of this team in a professional setting in the past. I've seen them deliver brilliant, enterprise-grade engineering to the Fortune 500. Now they're shaking up the world again, and for some reason they're just giving it all away. Take advantage of their generosity. This is pro.
If your goal is profit then giving away your work for free isn't a good strategy.
What you gain with open source is free distribution and advertising (a public Git repo, package.json mentions, readme credits, articles demonstrating your code, and so forth). OSS maintainers are also gifted free labor (I didn't read anything here about profit sharing with contributors/debuggers, how that might work, or even whether it could work). It may not feel nice, but if you're going to question the economics of a market you need to be honest about cost and value.
In fact OSS maintainers are massively subsidized by the community, by large companies (like GitHub), and so forth. This is an automatic cost subsidy that operates outside of value. Most OSS has little (or negative) value, yet is allowed to survive; in an open market (where maintaining market presence costs something) most OSS projects would be eliminated. That would reduce the massive cost to the teams and companies doing the (free) work, and assuming the large risk, of weeding out bad products, work a real market would have done for free as consumers voted with their power to install (or not), i.e. to buy. If I install one NPM module I am essentially "blackmailed" into installing several others, which may introduce hidden costs that only I will be responsible for (and not the original product creator). It cuts both ways.
It is also disingenuous to elide the real financial benefit of maintaining a popular OSS product. A fair analysis demands a clear accounting of year-over-year income growth for OSS maintainers able to convert the (perceived) demand for their products into higher wages or other payments for their time: design, engineering, community management, and so forth.
I feel the "good cause" argument is being misappropriated. If you believe in software and making the world a better place through development and sharing then there should be a dot at the end of that sentence. I write books (don't do it!) and give away a lot of software (do it!) because I think freedom is at the end of that work. I'm happy with that in this margin-focused, "if you're not making money you're a nobody" world. The relevant progenitor here is Stallman, not Smith. Idealism is low margin, typically.
Facebook and Google release a lot of open source software. Why aren't their Patreon numbers through the roof? Should they be? Have you used Linux? Git? Why haven't you paid Linus something (a lot!)? Still getting nagged on your code editor about paying for a license? Why haven't you paid it? Why aren't businesspeople who also give away their software for free (for a time) being paid by every single consumer using their software (see: https://www.sublimetext.com/buy?v=3.0)? Why is nagging even necessary?
There is more. Will a hypothetical "deserving" maintainer add features based on the requirements of her sponsor(s)? Will bug-fixing priority be determined by sponsorship amounts? This already happens. And it should! But that's a very different game, and please don't claim that this special class of human beings is beyond corruption. Beware the person who claims their purpose in making and giving away free stuff is "love", and then later asks for a donation. Cults work that way. Drug dealers work that way.
That being said, please do make money if you can. Money you earn. By delivering rare value that you can convince someone, at your own expense, to buy. Like everyone else. The consumer isn't obligated to pay for what they can get for free. If you are upset about the lack of income, stop doing free work. Those who intend to pay nothing for the OSS they use are in no way "less ethical". And claiming to deserve what amounts to charity isn't convincing to me, given that I expect every single one of the people who benefit from, say, the top 100 repos earns a six-figure salary and may even get a free lunch, every day.
These "JavaScript is stupid, why don't you all see that!! Why! Why! Why?!??!!" rants are really entertaining nowadays. The author has also invented a (perverse?) extension of Atwood's Law (which law predicted React Native): anything that can be blamed on JavaScript will be blamed on JavaScript. If JavaScript ran for president it would be Hillary Clinton, and every other language would occupy one thread on Trump's Toupee.
Go is a great language, and I'm looking forward to seeing what TJ does with it.
He's bored, and moving on. That's it. He's trying to justify what must be a painful decision by making a bunch of abstract claims about how Node isn't production-ready. The elephant in the room: how is it that TJ was able to build so many things, over so many years, that are used -- in production -- in so many places, on a platform that is "difficult to debug, refactor and develop"? Is he a masochist?
I think he's hit a personal wall. It seems he has a beef with the way the core is being developed. And Go is indeed fun, and exciting. This isn't surprising. People get burned out.
Not at all: my claims of node not being production ready are 100% legit, and based on real-world applications. The thing that is enjoyable about Go is that it IS production ready; if I could say the same about node I would still be using it.
Like I mentioned in my post, I decided to rewrite in Go an application that I had been working on for the last month. I decided that if I could do it in a week, and if it went well, I would ditch node. It went even better than I had hoped, and is much less frail. Node has many design flaws down to its very roots.
I'm not saying those are due to terrible engineering or anything, but they were choices that were made, and Node is stuck with them for the foreseeable future until breaking changes can be made. That is, if the core people can admit that it's not great to work with :)
I think the main problem is that people working in the depths of some system often forget to really use it. It's easy to get caught up in the details, but if you don't step back and use your own product, you're screwed!
Your experiences moving a Node app to Go would make for a great study. You're extremely influential, and it may be better to give concrete examples of how Node can be improved.
I wonder about this "enjoyable" criteria. I'm assuming you're not claiming to be the only person who has written large Node systems, or that nobody has ever written larger, more highly trafficked, more complex Node applications than you have. So the point seems to be: for you, Go is more fun than Node at scale. As a well known contributor, that's a valuable perspective. I'd like to know more about how your personal experience could be translated into general truths all developers could understand.
It seems to me that Node is used rather extensively by the core contributors. I don't get the impression that they have never actually used the product, based on their repos etc.
Thanks for all your work. I'm sure the Go community will similarly benefit from your efforts.
I think the issue is we've been slowly accepting that the node is the right way to do things. Five years of working with someone will do that to you. If you follow me on twitter you probably see me complain about it in various ways all the time, it's not that I'm bored with it, node's problems are real and some people just seem to ignore them, or maybe just deal with them because they know backwards compat can't be broken immediately. It has always bothered me that people would advocate such broken systems (streams etc) when real-world alternatives were better even before now, even C has a much better example of what a stream should be.
I'm definitely not the only one writing large systems, but I think the tolerance level varies per developer. Also plenty of people make a living/name from Node, of course they're more likely to praise it than to be honest about its faults, I'm definitely a minority in that respect. Having broken concepts is one thing, but resisting change when they're obviously broken is not a good thing. Many of these same people make money from consultation, where it's advantageous to have a broken system. I'm not trying to screw them over but someone has to be honest.
It took years to get .addHeader() in because no one in core believed in a progressive API; they didn't use node in real-world applications to see the need. This still happens all the time. Take npm-www for example: Isaac is a rad guy, but he, as well as most other "core" community members, advocated building tiny little things and stitching them all together, and only recently realized that in practice this doesn't scale, thus ended up going with Hapi. This lack of insight is all over Node as a community.
It's hard to describe well, but I hope people will try Go (or similar alternative), you'll really see how much more robust it is. If node fixed its conceptually flawed event system, rewrote http so it wasn't awful to work with, and fixed streams then we'd have a pretty good system to work with. It's very easy to pass off problems with node as problems you'd have with any platform, but that's unfortunately just not reality.
I have to push back against some of your statements.
“I think the issue is we've been slowly accepting that the node is the right way to do things. Five years of working with someone will do that to you.”
Who is “we” and what is the evidence for the conclusion? How is it true that 1) in five years all coworkers come to think exactly the same way; and 2) some purported truth about human behavior tells us anything about the general usage of a programming language within an opinionated and free community? This seems like a random statement without any (provable) basis in reality.
“It has always bothered me that people would advocate such broken systems (streams etc) when real-world alternatives were better even before now”
You’ve started companies, joined companies, and encouraged Node technology at companies. You’ve advocated for others to use the systems that you’ve built. These companies have investors and others who trust your judgement. Why did you advocate a broken system for so many years? Or did you come to this conclusion just a few weeks ago? What led you to it?
“I'm definitely not the only one writing large systems, but I think the tolerance level varies per developer.”
I’m not sure that anyone I know who builds massive enterprise systems ignores the “bad parts” of a system. None of them are tolerant of bad ideas. You really can’t be if you’re a professional with responsibilities to your team, your company, and shareholders. To suggest that Node is popular because Node developers are more tolerant of bad systems is, well, a weak theory that I think doesn’t stand up to reason.
“Also plenty of people make a living/name from Node, of course they're more likely to praise it than to be honest about its faults, I'm definitely a minority in that respect…Many of these same people make money from consultation, where it's advantageous to have a broken system. I'm not trying to screw them over but someone has to be honest.”
It’s not controversial to point out that you have advocated for (praised) a system that you believed was full of faults, for years, for great profit, in many senses. So you’re indeed a minority, but not in the way that you think you are. All of this sounds, to me, a little screwy.
“This lack of insight is all over Node as a community.”
I’ve re-read the paragraph preceding the above conclusion, but have failed to find a cogent argument. Something about #addHeader, and Isaac being rad, and something else about how “building tiny little things and stitching them all together” is a fool’s errand — this seems to be an “insight”. From Eric Raymond’s Unix rules: “Rule of Composition: Developers should write programs that can communicate easily with other programs. This rule aims to allow developers to break down projects into small, simple programs rather than overly complex monolithic programs.” Unix has scaled pretty nicely.
Now, you may be right. Also, Raymond might be right. Also, the entirety of the thousands of developers who build serious, professional software by following this credo might be right. Or some may be wrong. Or all may be wrong. But the point is… I still don’t see you producing an irrefutable argument, especially since you want to conclude that Node is fundamentally broken, isn’t “ready for prime time”, and so forth. I mean, you’re the most prolific contributor to the Node project, and you have advocated on several occasions for exactly this sort of methodology. It’s confusing.
“It's very easy to pass off problems with node as problems you'd have with any platform, but that's unfortunately just not reality.”
Not sure what you’re saying here. I could link a few hundred articles covering every language, including Go, that say the same thing you’re saying about Node — but I don’t know what they’re getting at either. No programming language solves all problems, is easier than all others, has no mistakes in it, etc. Do you dispute that? If not… why exactly is the fact that Node has some problems, some imperfections, and some mistakes a reason to dismiss it outright? From your earlier post: “my claims of node not being production ready are 100% legit, and based on real-world applications.” 100% legit? That’s pretty legit! But howzit legit?
I’ve been a little hard here, but I feel I had to be. Again, you’ve been a great contributor to the project, and I hope you do some more great things.
It's great to remind everyone how Node follows various rules of the Unix Philosophy, and how it is designed to make process spawning/streaming as natural as on the OS.
I would prefer it, though, if the implication weren't that a failure in Node's design is responsible for the failure of this in-memory technique for joining massive data sets. From the article:
"However, as more and more districts began relying on Clever, it quickly became apparent that in-memory joins were a huge bottleneck."
Indeed...
"Plus, Node.js processes tend to conk out when they reach their 1.7 GB memory limit, a threshold we were starting to get uncomfortably close to."
Maybe simply "processes" rather than "Node processes"? -- I don't think this is a Node-only problem.
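For what it's worth, the ~1.7 GB figure quoted above matches V8's default old-generation heap cap on 64-bit builds of that era, and it can be raised with a flag. A minimal sketch of watching heap usage from inside a process (illustrative, not the article's code):

```javascript
// Sketch: watching V8 heap usage from inside a Node process.
// The default old-space ceiling can be raised, e.g.:
//   node --max-old-space-size=4096 app.js   (value in megabytes)

function heapReport() {
  const { heapUsed, heapTotal } = process.memoryUsage();
  const mb = (n) => (n / 1024 / 1024).toFixed(1);
  return `heap ${mb(heapUsed)} MB used of ${mb(heapTotal)} MB`;
}

console.log(heapReport());
```

Logging something like this periodically is a cheap way to see how close a long-running process is getting to the ceiling.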
"Once some of the country’s largest districts started using Clever, we realized that loading all of a district’s data into memory at once simply wouldn’t scale."
I think this was predictable. Earlier in the article I noticed this line:
"We implemented the join logic we needed using a simple in-memory hash join, avoiding premature optimization."
The "premature optimization" line is becoming something of a trope. It is not bad engineering to think at least as far as your business model. It sounds like reaching 1/6 of your market led to a system failure. This could (should?) have been anticipated.
To some extent we knew that in-memory joins would eventually cause problems, but we were certainly surprised at how quickly Node memory usage became the bottleneck. Here's a little gist I used to test it a while ago https://gist.github.com/rgarcia/6170213.
As for your point about premature optimization, in my opinion a startup's first priority is getting something in front of users in order to start improving and iterating. The first version of the data pipeline discussed in the blog post was built when Clever was in 0 schools, so designing it to scale to some of the largest school districts in the country would have been fairly presumptuous.
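For readers unfamiliar with the technique, the "simple in-memory hash join" the article describes can be sketched roughly like this (illustrative field names, not Clever's actual code): build a hash table over one input keyed by the join key, then probe it with the other input.

```javascript
// Sketch of an in-memory hash join (illustrative, not Clever's code).
// Build phase: index one relation by its join key.
// Probe phase: scan the other relation and look up matches.
// Memory grows with the build side, which is why this approach
// eventually runs into the process's heap ceiling on large inputs.

function hashJoin(left, right, leftKey, rightKey) {
  const index = new Map();
  for (const row of left) {
    const k = row[leftKey];
    if (!index.has(k)) index.set(k, []);
    index.get(k).push(row);
  }
  const out = [];
  for (const row of right) {
    for (const match of index.get(row[rightKey]) || []) {
      out.push({ ...match, ...row });
    }
  }
  return out;
}

// Hypothetical usage with tiny relations:
const students = [{ sid: 1, name: 'Ada' }, { sid: 2, name: 'Alan' }];
const enrollments = [{ student_id: 1, course: 'CS101' }];
console.log(hashJoin(students, enrollments, 'sid', 'student_id'));
// one joined row: { sid: 1, name: 'Ada', student_id: 1, course: 'CS101' }
```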
Seems rather fat. Fatter than what I've seen for Knockout, or Ember, or even Backbone.
As I suspected, the idea that you can just drop in any package and it works depends on that package having been made available by the Meteor team, with the requisite wrappers/bindings built around the original package. So using Meteor means you must use their packages, which, even with the best intentions, means their team will have to keep up with every package you'd like to use: every new version, the bugs, etc.
What if my data comes from multiple sources? ("The cloud" and "big data" and "3.0" mean your apps don't all share the same persistence layer, and likely use several.)
What if I push 1,000 records to the client, in the HTML for efficiency and network reasons, to power my fancy D3 choropleth, and the client interface allows manipulation of that (local) data? How do I take advantage of Meteor's data-binding system? This is something almost every serious application does: updating a local data structure and propagating that change to several views.
What's the overhead of keeping all these sockets open? If I want to change 20 items and then update my view, do I now have to write the boilerplate to store up 20 changes, then package them, then send them in one shot down the Meteor pipe? Or do I make 20 calls via the client DB bindings? Probably the former. Which means I end up doing a lot of work that I wouldn't have with a local model with CRUD methods (as in Ember, Backbone, etc.).
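The batching boilerplate described above might look something like this (`send` is a hypothetical transport function, not Meteor's API):

```javascript
// Sketch of the batching boilerplate described above: queue changes
// locally and flush them down the wire as one message.
// `send` is a hypothetical transport function, not Meteor's API.

function makeBatcher(send, flushAfter = 20) {
  const pending = [];
  return {
    change(op) {
      pending.push(op);
      if (pending.length >= flushAfter) this.flush();
    },
    flush() {
      if (pending.length === 0) return;
      send(pending.splice(0, pending.length)); // one message, N ops
    },
  };
}

// Hypothetical usage: 20 item updates become a single payload.
const sent = [];
const batcher = makeBatcher((ops) => sent.push(ops));
for (let i = 0; i < 20; i++) batcher.change({ id: i, done: true });
console.log(sent.length); // 1 batch of 20 ops
```

This is exactly the kind of plumbing a local model with CRUD methods would otherwise absorb for you.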
Latency compensation. Sounds great. So I say "pay my bill" and my interface updates with an optimistic confirmation. Great! Now I can drive my kid to the soccer game. Hope I notice that the number got switched back (server fail). How will I notice? Hope the developer implemented a noticeable notification system...
Security? Anyone with a browser can change my DB? How do you secure that? Log in? Now logged-in people can change my DB from their browser? A whitepaper is necessary so this aspect can be tested.
Great team. Good ideas. But it is quite early yet.