Hacker News | dblotsky's comments

Hard agree with the idea of greppability, but hard disagree about keeping names the same across boundaries.

I think the benefit of having one symbol exist in only one domain (e.g. “user_request” only showing up in the database-handling code, where it’s used 3 times, and not in the UI code, where it might’ve been used 30 times) reduces more cognitive load than searching for 2 symbols instead of 1 common one adds.


I’ve also found that I sometimes really like when I grep for a symbol and hit some mapping code. Just knowing that some value goes through a specific mapping layer and then is never mentioned again until the spot where it’s read often answers the question I had by itself, while without the mapping code there’d just be no occurrences of the symbol in the current code base and I’d have no clue which external source it’s coming from.


Probably depends on how your system is structured. If you know you only want to look in the DB code, hopefully it's either all in one place, or there's a folder-naming pattern you can use to limit where you search.

The upside of doing it this way is that it makes your grepping more flexible: you can either search just the one part of the codebase (say, the DB code), or search everywhere and see all the DB and UI code using the concept.


I have mixed thoughts on this too. Fortunately grep (rg in my case) easily handles it:

rg -i 'foo.?bar' finds all of foo_bar, fooBar, and FooBar.
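A quick sanity check of that pattern, using grep -E as a stand-in in case rg isn't installed:

```shell
# Three naming conventions for the same concept. The pattern
# 'foo.?bar' matches all of them: '.' covers the '_' separator,
# '?' makes that separator optional, and -i ignores case.
# -c counts the matching lines.
printf '%s\n' foo_bar fooBar FooBar | grep -icE 'foo.?bar'
# → 3
```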


Not to mention the readability hit from identifiers like foo.user_request in JavaScript, which triggers both linters and my own sense of language convention.


Both of those are easy to fix. You'll adapt quickly if you pick a different convention.

Additionally, I find that in practice such "unusual" code is actually beneficial - it often makes it easy to see at a glance that the code is somehow in sync with some external spec. Especially when it comes to implicit usages such as in (de)serialization, noticing that quickly is quite valuable.

I'd much rather trash every language's coding conventions than use subtly different names for objects serialized and shared across languages. It's just a pain.


Aren't these effectively lab tests of robots that can navigate a home?

[dancing] https://www.youtube.com/watch?v=fn3KWM1kuAw [cleaning clutter] https://www.youtube.com/watch?v=C8-w9eF24gU


The `./make` shell script kills me. Why not just use Make?


Make is designed to track build dependencies, not to execute programs, so I think the problem is the choice of name.

"just" is a utility designed to execute programs: https://github.com/casey/just#just


Is there any design decision of Make that makes it not suitable for executing programs?


Just knowing the commands and their arguments (environment vars) with make is difficult, and env vars can easily have typos. With just you can actually list out all the commands (just --list).


(not strong arguments but some possible explanations)

1) ./make can be more portable, e.g. you might use some GNU Make syntax that doesn't run on BSD and then have to install gmake just to run it

2) Make is not designed to be a command runner, and you have to manually add .PHONY to everything

3) case looks minimal and flexible enough (although I'm sure sh's confusing syntax, like esac, can cause a lot of pain for non-sh experts)
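For concreteness, a hypothetical ./make in that case-dispatch style might look like this (task names and commands invented for illustration):

```shell
#!/bin/sh
# A tiny task runner: dispatch on the first argument.
# Each pattern arm ends with ';;' and the whole block closes
# with 'esac' ('case' spelled backwards).
set -e
case "${1:-help}" in
  build) echo "bundling app..." ;;   # placeholder for the real build command
  test)  echo "running tests..." ;;  # placeholder for the real test command
  *)     echo "usage: ./make [build|test]" ;;
esac
```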


Those all sound plausible! But...

1) I suspect GNU Make will be installed on any system where node code will run.

2) You don’t have to .PHONY anything that will never be a real file.

3) My bet is the Makefile will be shorter and clearer, but I am of course biased since I’m used to the syntax.
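For reference, a command-runner Makefile in that style might be sketched like this (targets and npm commands are invented examples, assuming package.json scripts exist; recipe lines must be indented with tabs):

```make
# Sketch of a Makefile used purely as a command runner.
# .PHONY tells make these targets never name real files, so they
# always run; a target that actually produces a file of the same
# name wouldn't need it.
.PHONY: build test

build:
	npm run build

test:
	npm test
```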


I'm working on the Makefile, but I'm stuck with the `time ./make-production.js` task.

    $ which time
    time: shell reserved word


You could use something like `sh -c 'time ...'` to work around that. (You might need bash instead of sh, it might depend on what sh is on your system.)
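A minimal demonstration of that workaround, using `true` as a stand-in for the actual build command:

```shell
# Run the timed command via an explicit shell, so 'time' is
# interpreted by that shell's keyword rather than by whatever
# the calling shell happens to be. bash's 'time' keyword prints
# real/user/sys timings to stderr.
bash -c 'time true'
```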


Thank you, that worked well. I added it to the post.


And as an added bonus, Make gives you tab autocompletion


How big does the project have to be in order for this system to have benefits over Make?


The problem with Make is that it doesn't know what it's building, so it can't do anything smart. For incremental builds to work at all, you have to supply the smarts. Having worked on a number of make-based projects (I am most scarred by buildroot), I can tell you that people make mistakes with build rules. The project then devolves to doing a clean build for every change, turning what could be a few milliseconds of CPU time into a 30 minute rebuild.

The idea of these build systems like Bazel is that the rules are correct so that you don't have to worry about writing correct rules, and you have a high probability of an incremental build producing a binary that's bit-for-bit identical to one from a full build. The result is that you don't do full builds anymore, and save the latency of waiting for things to build. (That latency shows up in the edit-build-test cycle, how long it takes to deploy software to production to fix an emergency bug, etc. So it's important!)


I am personally not convinced that any build system can be correct and general, but perhaps that’s my lack of experience speaking.

On that 30 minute note though: so, how big does the project need to be in order for Make not to be enough? And at that size, why wouldn’t the project invest the extra week it takes to get the Makefile correct?


It's easier to adopt a correct build system when your codebase is still simple. It can save you time, because it can prevent lots of unintentional errors over time (e.g. you might not notice when you get an incorrect result after an incremental build).

In my opinion, this is a bit similar to languages with static vs dynamic typing.

If your current setup works well for you, it makes sense to keep it, though.


I completely agree with you. To give the build system the understanding it needs, you have to give up the generality.

For me, I've found that the results are best when you use a build system and you use it exactly the way the author intends. For example, Go's built-in build system is so good that you don't even notice it's there. It automatically updates its configuration as you write code. It does a perfect incremental build every time, including tests. The developer experience is basically perfect, because it has a complete understanding of the mapping between source files, dependencies, and outputs. But, it is not extendable, so you're screwed when your Go project also needs to webpack some frontend code, or you need to generate protocol buffers (which involves a C++ binary that runs a Go binary), etc. So, people bolt those features on, and the build system becomes more general, but not quite as good. (Then there's make, which is as good or as bad as you want it to be.)

I think super small projects often do get their Makefiles right. But you can manually build small projects with something like "gcc foo.c bar.c -o foobar -lbaz", and so you don't really benefit from any improvement over the bare essentials. (Nothing wrong with keeping your projects tiny in scope, of course!)

But, sometimes you don't have the luxury of a super small project, and the Makefiles become quickly unfixable. Like I said, I am most scarred by a buildroot project I worked on (that's embedded Linux, basically). It never built what I expected it to build, and to test anything reliably I either had to yolo my own incremental build or wait a while for a full build. My productivity was minimal. I could switch between client-side and server-side tasks on that project, and so I really only touched the client if it was absolutely necessary. I would never be productive enough to undertake a major project that truly added value with that kind of build system, so I let others that didn't have the server-side experience write the client-side stuff. In that case, the poor build system silently cost our team productivity in terms of artificially splitting the team between people who could tolerate a shitty developer experience and those who couldn't.

I don't think anyone has fixed the buildroot problem, either. If you want to build a Linux image today, you are stuck with these problems. Nothing else is general enough to build Linux and the associated random C binaries that you're going to want to run.


Yeah, that tradeoff between generality and correctness really seems tyrannical.

It kind of feels like the best trajectory is to start small with a general build system and upgrade as needed? And if you're confident the project will grow, starting with the specific build system is fine too.


Reminds me of Orwell’s Politics and the English Language: https://www.orwellfoundation.com/the-orwell-foundation/orwel....


Correct: Orwell was also very, very wrong.

This is not just personal distaste. Linguists who know the facts find it intolerable.


Can you elaborate, for the non-linguists? His prescription seems sensible:

    i. Never use a metaphor, simile or other figure of speech which you are used to seeing in print.

    ii. Never use a long word where a short one will do.

    iii. If it is possible to cut a word out, always cut it out.

    iv. Never use the passive where you can use the active.

    v. Never use a foreign phrase, a scientific word or a jargon word if you can think of an everyday English equivalent.

    vi. Break any of these rules sooner than say anything outright barbarous.
... but I'm well aware of how sensible an asinine prescription can seem to a layperson.


For example, "the passive" is rarely defined in a coherent way, nor does it match the actual definition of "passive voice". Worse, though, it's very bad advice: what are called passive senses are tools for placing emphasis correctly, which, done well, aids clarity.

Often a long word captures a nuance the short version can't. Its presence, by itself, calls the careful reader's attention to the distinction between it and the shorter word it displaced, without belaboring it.

Metaphors, similes, and figures of speech are the furniture of language. Most words, standing alone, embody one. Orwell certainly did not obey this stricture, or he would have been mute.

A word that could have been cut, but wasn't, calls attention to the choice made not to cut it, inviting curiosity why it wasn't, which you may then answer.

Foreign, technical, and jargon words tell the reader about your context. Substituting a word unfamiliar in that context generates confusion, and questions about what distinction you are trying to make by avoiding the usual word. Sometimes you are, in fact, making such a distinction.

Careful readers learn to recognize when writers are making their choices judiciously, and draw extra meaning from them.

So, better advice would tell you to put each such choice to work on the hard job of communicating.


It sounds like the last point, about disobeying the rules when appropriate (my reading of it), was meant exactly to cover these corner cases.


Can this all be explained by the cost of labor being uniform (since humans are humans in all industries)?

So if one industry demands more labor, all others will feel the rising price of labor?


It can be explained if you cost things in hours rather than money.

As we improve technology fewer and fewer hours are required to create an item. But a violin concerto takes the same amount of human time.

Which then leads to the conclusion that we really exchange time when trading. Each of us only has a finite amount of time after all.


The cost of labor is far from uniform.

A FAANG employee's labor is priced far higher than an Uber driver's, driven by both supply (most people can drive an Uber) and demand (FAANG companies make a lot of money, something like >$1mm per engineer).

But getting back to the article: it only looks at the productivity of industries (which speaks to demand for certain types of labor), while the cost of training, say, a string-quartet musician hasn't markedly decreased, since the cost of education hasn't fallen much either.


If we talk about a highly skilled programmer requiring a lot of labor to produce, then we're talking about something like "amortized labor" having uniform cost (the article mentions that it takes a lot of labor to produce an orchestra player, for example). Even if some people are more skilled than others, if it takes a lot of labor to figure out who's really talented, you can price things this way.


That depends how you measure costs.

As a human, working for a for-profit tech company for 8 hours costs approximately the same amount of energy/effort as volunteering for a non-profit tech company, but it costs the organization vastly different amounts of money.


Kind of, but that's a roundabout way of saying "opportunity cost".


This looks like the Post Correspondence Problem at first glance: https://en.m.wikipedia.org/wiki/Post_correspondence_problem.


It’s similar, but in this case the decision problem is removed, since it's given that a solution exists.


They inserted spaces, though, between characters and words. In this problem, all spaces are removed.


You’re missing the huge “works incorrectly” space of outputs.


Just use Make.

Takes just as long to learn, and is transferable everywhere.


These are not task runners. Make alone will give you about 2% of what you'll need to bundle a modern web app.


Make can run all the tools like babel, uglify, etc.
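As a sketch of what that might look like (tool invocations are illustrative and the paths are invented; exact flags depend on your setup, and recipe lines must be indented with tabs):

```make
# Illustrative only: rebuild dist/bundle.js whenever a source
# file under src/ changes, using babel and uglifyjs via npx.
SRCS := $(wildcard src/*.js)

dist/bundle.js: $(SRCS)
	npx babel src --out-dir build
	npx uglifyjs build/*.js -o dist/bundle.js
```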


Are there any good examples of projects using Make for a web app?


Facetious answer: Are there any good examples of projects using Parcel for a web app? ;)

Real answer: probably, but a ton of production code is closed-source.

But why restrict to web apps? Most influential software projects in the world use Make.

