I think there's probably a 5th one that's new-ish. Code isn't where the value is now that agentic tools can whip up a solution to just about anything in no time, so the commentary provides semantic grounding that lets you navigate generated code easily.
It overlaps with some of the existing reasons, but there's a real difference.
For those who build in Ruby on Rails, does htmx have an advantage over Turbo/Stimulus? For me, the sense that it doesn't is why I've been avoiding it. I prefer to stick with the vanilla stack unless there's a very compelling reason not to.
I like to say that you can either learn to be fast at doing low quality work, or learn to be fast at doing high quality work. It’s your choice really. But the only way to learn the latter is to start by prioritizing quality over speed.
Funny how this exactly applies to instrument playing. Unearned speed only begets sloppiness. The only way to go past a certain velocity is to do meticulous metronome work from a perfectly manageable pace and build up with intention and synchrony. And even then it is not a linear increase, you will need to slow back down to integrate every now and then. (Stetina's "Speed Mechanics for Lead Guitar"; 8 bpm up, 4 bpm down)
At slow, manageable tempos, you can afford to use motions that don't scale to fast tempos. If you only ever play "what you can manage" with meticulous, tiny BPM increments, you'll never have to take the leap of faith and most likely will hit a wall, never getting past like 120-130 BPM 16ths comfortably. Don't ask how I know this.
What got me past that point was short bursts at BPMs way past my comfort zone and building synchrony _after_ I stumbled upon more efficient motions that scaled. IIRC, this is what Shawn Lane advocated as well.
I recommend checking out Troy Grady's (Cracking The Code) videos on YouTube if you're interested in guitar speed picking. Troy's content has cleared up many myths with an evidence-based approach and helped me get past the invisible wall. He recently uploaded a video pertaining to this very topic[0].
> What got me past that point was short bursts at BPMs way past my comfort zone and building synchrony _after_ I stumbled upon more efficient motions that scaled.
This is actually pretty close to what Stetina says. I just probably didn’t do a good job expressing it.
You’re oscillating above and below the comfort zone, and that iteration, like you say, affords insights from both sides; eventually the threshold grows.
Depends on the instrument. For wind instruments, the motions basically don’t change, and your focus is on synchronizing your mouth with your hands. Tonguing technique is different at high speed but you would typically practice with the same technique at low speed when learning a fast piece.
But the motions do change: at very slow tempos you can move basically one finger at a time, while at faster tempos you have simultaneously overlapping motions.
On a trumpet? A clarinet? No, the motions don't simultaneously overlap. The fingering mechanics are slightly different at speed, but you would still start slow while using the higher speed mechanics and tonguing technique, not jump into high speed practice first.
No one is saying not to practice slow first. This advice is specifically for intermediate or advanced students who are putting a focus on developing speed specifically. Practice slow first, increase tempo slowly next, but when you hit a plateau, you need to add some repetitions that are well outside your comfort zone. You need to feel what it feels like to play fast, then clean it up.
It seems like this is a far more time-efficient methodology for building speed on guitar; I don't know why it wouldn't apply to other instruments like the trumpet.
When I was in high school, a friend who played drums in a band would try to pull off these super complicated fast fills. He couldn't pull them off, and I always thought, "why doesn't he play something he can get right?" Well, after months of practice, he was able to pull them off. He was a great drummer, but he worked incredibly hard to get there. It's a little tangential to what you said, but it feels appropriately related.
I guess I'm agreeing while also saying that you can get there by failing a lot at full speed first. Maybe he practiced at half-speed when he was alone and I never saw that part.
One could argue that learned speed has the hours of practice "baked in" so it's actually much slower. And that's not a bad thing IMO.
I think this post only covers one side of the coin. Sure, getting things done fast achieves the outcome, but in the long run you retain and learn less. Learning new stuff takes time and effort.
> the exact same things the sloppy player is doing, but you do it in time and in tune.
It depends on the level we look at it from, but I think there is a fundamental difference between what excellent (professional-grade?) players are doing and what "sloppy" ones are doing.
It is not just the same thing done with more precision and care: they will usually have a different mental model of what they're doing, and the path to the better result is also not linear. Good form will give good results, but it won't lead to a professional-level result. You'll need to reinvent how you apply the theory to your exact body and the exact instrument in your hands, figure out what you can and can't do, and adjust from there.
That's how veteran players can still be stellar when you'd assume they no longer have the muscle and precision a younger player obviously has.
PS: I left aside the obvious: playing in time and in tune is one thing, conveying an emotion is another. Moving from the former to the latter is considerably harder.
Absolutely. I can't tell you how many times I've been staring at a project that I basically know how to do, but it's got a bunch of moving parts that I need to account for. I'll just stare at the screen, drink more coffee, read HN, basically do anything besides actually work on it, because it's too big and unactionable. Some of this is actually useful brain-organization time, but some is just time wasting.
Eventually I'll get the bright idea to make a notes doc listing all the things that are bothering me. Just writing them down gets them out of my nebulous headspace into a permanent record, which inevitably makes it less scary - and before you know it I'm either halfway to a runbook or a technical design, and it turns out this project will actually only take a day or two once I have all the prep work done.
Some of you might remember an article from a while back titled "Don’t Build A General Purpose API To Power Your Own Front End". This is a follow-up to share how things played out, answer additional questions, and discuss or debate anything else that has come up, since not everyone agrees with my advice.
This used to be a blog comment service. It looks like the domain name registration expired yesterday, and nobody seems to be responding to support (likely the email under the same domain isn't even working). If the owner has decided to abandon the project, I would at least like to get my data export (and ideally a partial refund too).
I built portrayal[1] (a much simpler replacement for those dry libs), and was also experimenting[2] with runtime-enforced types based on this lib.
My general thought is that declaring types in Ruby is unnecessarily complicated, because you're basically just running values through pieces of boolean logic, and nothing else. Might as well make that explicit, which is what my experiment did. I never ended up publishing the types library, but the concept was proven.
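To make that concrete, here's a minimal sketch of the idea. The names and the check! helper are hypothetical, not from the actual unpublished library:

    # A "type" is just a predicate: anything that responds to #call
    # and returns true or false.
    Email = ->(value) { value.is_a?(String) && value.include?("@") }
    Port  = ->(value) { value.is_a?(Integer) && (0..65_535).cover?(value) }

    # Enforcement is then one plain method, not a DSL.
    def check!(type, value)
      raise TypeError, "#{value.inspect} failed the type check" unless type.call(value)
      value
    end

    check!(Email, "hi@example.com") # => "hi@example.com"
    check!(Port, 99_999)            # raises TypeError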
I wrote this controversial thought[1] once, but for what it's worth, it applied to me just as much as to anyone else. Projects like these type gems are incredibly fun and satisfying to build. Your vision is clear, you've seen types before, you're proficient enough with Ruby to do clever things. The work seems cut out for you. Go nuts!
Problem is, these kinds of solutions (I also see this in "service objects" world) take Ruby away from you, and offer you a whole new vocabulary. With time I started appreciating libraries that avoid solving a problem that plain Ruby can solve, because Ruby is incredibly clear and concise already. If you leave more opportunities for people to write plain Ruby, you will solve everything with much less library code.
I think that's where the fun of building goes against the end developer experience. Builders love "owning" everything. E.g., "No, don't write `&&`, I have a special way for you to compose types!"
These are general thoughts, but I'm happy to get concrete with specific examples too.
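For instance, here's a made-up illustration of the `&&` point above (the library and its composition operator are hypothetical):

    # The "owning" style: a hypothetical library's special operator.
    #   PresentString = MyTypes::String & MyTypes::Present
    #
    # Plain Ruby expresses the same rule with the && everyone knows:
    PresentString = ->(v) { v.is_a?(String) && !v.strip.empty? }

    PresentString.call("  hi ") # => true
    PresentString.call("")      # => false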
As a blogger who makes similar assumptions, I think it comes down to the fact that a lot of us from that time "grew up" similarly. Sockets came to relevance later in my career than everything else listed here.
As someone younger, ports and sockets appeared very early in my learning. I'd say they appeared in passing before programming even, as we had to deal with router issues to get some online games or p2p programs to work.
And conversely, some of the other topics are in the 'completely optional' category. Many of my colleagues have worked in IDEs from the start, and some may never have used git in its command-line form at all, though I think that extreme is rarer.
> The term socket dates to the publication of RFC 147 in 1971, when it was used in the ARPANET. Most modern implementations of sockets are based on Berkeley sockets (1983), and other stacks such as Winsock (1991).
I read RFC 147 the other day, and it turns out that by "socket" it means "port number", more or less (though maybe they were proposing to also include the host number in the 32-bit "socket", which was quietly dropped within the next few months). Also Berkeley sockets are from about 01979, which is a huge difference from 01983.
> Initially we intend to add the facilities described here to UNIX. We will then begin to implement portions of UNIX itself using the IPC [inter-process communication] as an implementation tool. This will involve layering structure on top of the IPC facilities. The eventual result will be a distributed UNIX kernel based on the IPC framework.
> The IPC mechanism is based on an abstraction of a space of communicating entities communicating through one or more sockets. Each socket has a type and an address. Information is transmitted between sockets by send and receive operations. Sockets of specific type may provide other control operations related to the specific protocol of the socket.
They did deliver sockets more or less as described in 4.1BSD later that year, but the distributed Unix kernel never materialized. The closest thing was what Joy would later bring about at Sun: NFS and YP (later NIS). They clarify that they had a prototype working already:
> A more complete description of the IPC architecture described here, measurements of a prototype implementation, comparisons with other work and a complete bibliography are given in CSRG TR/3: "An IPC Architecture for UNIX."
And they give a definition for struct in_addr, though not today's definition. Similarly, they use SOCK_DG and SOCK_VC rather than today's SOCK_DGRAM and SOCK_STREAM, and offer a sample bit of source.
In theory that's four months after the 4.1BSD release in http://bitsavers.trailing-edge.com/bits/UCB_CSRG/4.1_BSD_198..., linked from https://gunkies.org/wiki/4.1_BSD, which does seem to have sockets in some minimal form. I don't understand the tape image format, but the string "socket" occurs: "Protocol wrong type for socket^@Protocol not available^@Protocol not supported^@Socket type not supported^@Operation not supported on socket^@Protocol family not supported^@Address family not supported by protocol family^@Address already in use^@Can't assign requested address^@".
This is presumably compiled from lib/libc/gen/errlst.c or its moral equivalent (e.g., there was an earlier version that was part of the ex editor source code). But those messages were not added to the checked-in version of that file until Charlie Root checked in "get rid of mpx stuff" in February of 01982: https://github.com/robohack/ucb-csrg-bsd/commit/96df46d72642...
The 4.1 tape image I linked above does not contain man pages for sockets. Evidently those weren't added until 4.2! The file listings in burst/00002.txt mention finger and biff, but those could have been non-networked versions (although Finger was a documented service on the ARPANet for several years at that point, with no sign of growing into a networked hypertext platform with mobile code). Delivermail, the predecessor of sendmail, evidently had cmd/delivermail/arpa-mailer.8, cmd/delivermail/arpa.c, etc.
That release was actually the month before Joy and Fabry's proposal, so perhaps sockets were still a "prototype" in that release?
> When Rob Gurwitz released an early implementation of the TCP/IP protocols to Berkeley, Joy integrated it into the system and tuned its performance. During this work, it became clear to Joy and Leffler that the new system would need to provide support for more than just the DARPA standard network protocols. Thus, they redesigned the internal structuring of the software, refining the interfaces so that multiple network protocols could be used simultaneously.
> With the internal restructuring completed and the TCP/IP protocols integrated with the prototype IPC facilities, several simple applications were created to provide local users access to remote resources. These programs, rcp, rsh, rlogin, and rwho were intended to be temporary tools that would eventually be replaced by more reasonable facilities (hence the use of the distinguishing "r" prefix). This system, called 4.1a, was first distributed in April 1982 for local use; it was never intended that it would have wide circulation, though bootleg copies of the system proliferated as sites grew impatient waiting for the 4.2 release.
The median offer to new doctors is about $200k urban vs $205k rural overall. In surgical practices, though, that flips well in favor of urban offers. And that's just for new MDs; career numbers skew even further toward rural doctors.
1. An odd business requirement (share the origin story)
2. It took research (summarize with links)
3. Multiple options were considered (justify decision)
4. Question in a code review (answer in a comment)
Important caveat for number 4: if your code can be restructured in a way that answers the question without a comment, do that instead.
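Here's a tiny, invented Ruby illustration of that caveat:

    # Before: invites the review question "why 86400?"
    #   cutoff = Time.now - 86400
    #
    # After: the names answer the question, so no comment is needed.
    SECONDS_PER_DAY = 24 * 60 * 60
    STALE_ORDER_AGE = SECONDS_PER_DAY

    def stale_order_cutoff
      Time.now - STALE_ORDER_AGE
    end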
This originally comes from an article[1] I wrote in 2021 titled "Writing Maintainable Code is a Communication Skill".
[1]: https://max.engineer/maintainable-code