Hacker News | Dan42's comments

This is really cute and heartwarming.

Back in the day, a lot of people including me reported feeling more comfortable in Ruby after one week than all their other languages with years of experience, as if Ruby just fits your mind like a glove naturally.

I'm glad new people are still having that "Ruby moment".


Ruby is still the first tool I reach for if I need to do something quickly and naturally. People are always surprised at how readable the code comes out, even compared to Python.


The three stages of Ruby:

1. Someone scoffs at you for writing Ruby instead of Python / Go / JavaScript / Kotlin

2. Then they read the code

3. Then they install ruby

It's an amazing language and remains my go-to for anything that doesn't need protocol stuff.


Thank you for reading this. I'm having fun learning Ruby. I just started at a company where I use it full time, and I have supportive colleagues who are excited for me. I'm going to write more about Ruby: I have about six articles planned for the next few weeks, and I hope I get around to them all.


Wow. You are me.


I bet "I'm a lot of people". That's the point of the post. We exist, we contribute, some of us are critical. We just don't chase fame, don't care about (much) about recognition (beyond peer I guess) and have interests and ways to occupy our time other than software. :shrug:.

I've accepted that I won't be a "name". Yet I have made suggestions that were adopted into Spring, I have commented on JCPs, and I have talked with antirez (though I didn't contribute much there; I'm still in awe of Redis' internal design). I just... don't care much about other people knowing me beyond what I need to pay the bills and keep my immediate peers, management chain, and customers happy.


I see posts like this one pop up from time to time. I love it. Based on my 30 years of experience, that's also the workflow I converged on. It seems to me that every experienced and skilled developer converges on it. Jujutsu is built entirely to accommodate this workflow.

There are no silver bullets or magical solutions, but this is as close to one as I've ever seen. A true "best practice" distilled from the accumulated experience of our field, not from someone with something to sell.


This is cool, but missing a LOT of details between steps 4 and 5, which is the meat of the quicksort. Actually, the first and last elements of step 4 would be swapped, which means the order depicted in step 5 is incorrect.


Isn't that more of an implementation detail?

I'd guess that if you care more about speed than memory, it might be faster to just move elements into a new array: sweep through the old array, appending to the start or end of the new array according to the pivot comparison. You'd be moving every element versus leaving some in place with a swap approach, but the simplicity of the code and branch prediction might win out.
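Something like this, as a minimal Ruby sketch (the helper name, the preallocated copy, and passing the pivot in explicitly are all my own illustration, not anything from the article):

    # One pass over the old array, filling a preallocated copy from both
    # ends: items below the pivot go at the front, everything else at the back.
    def partition_into_copy(arr, pivot)
      out = Array.new(arr.length)
      lo = 0
      hi = arr.length - 1
      arr.each do |x|
        if x < pivot
          out[lo] = x
          lo += 1
        else
          out[hi] = x
          hi -= 1
        end
      end
      [out, lo] # lo is the boundary index between the two halves
    end

Every element gets copied, as you say, but the loop body is trivial, so it's plausible the copy wins on some inputs.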


I'm pretty sure the swapping is a fundamental part of the quicksort algorithm, not a mere implementation detail. That's the reason quicksort is an in-place algorithm.


Actually you're right, it is an implementation detail. The original isn’t mistaken, it’s just showing the lo-to-hi partitioning pass rather than the from-both-ends version I had in mind when I implemented quicksort before.

shame, shame, I should have double-checked before posting.
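In case it helps anyone else who got tripped up the same way, here's a rough Ruby sketch of the two passes being contrasted. These are just the textbook Lomuto (lo-to-hi) and Hoare (from-both-ends) partitions, not the article's code:

    # Lo-to-hi ("Lomuto") pass: last element is the pivot; one index sweeps
    # left to right, swapping smaller items behind a moving boundary.
    def lomuto_partition(a, lo, hi)
      pivot = a[hi]
      i = lo
      (lo...hi).each do |j|
        if a[j] < pivot
          a[i], a[j] = a[j], a[i]
          i += 1
        end
      end
      a[i], a[hi] = a[hi], a[i]
      i # final position of the pivot
    end

    # From-both-ends ("Hoare") pass: two indices walk inward and swap an
    # out-of-place pair whenever there is one on each side.
    def hoare_partition(a, lo, hi)
      pivot = a[(lo + hi) / 2]
      i = lo - 1
      j = hi + 1
      loop do
        loop do
          i += 1
          break if a[i] >= pivot
        end
        loop do
          j -= 1
          break if a[j] <= pivot
        end
        return j if i >= j # split point: recurse on a[lo..j] and a[j+1..hi]
        a[i], a[j] = a[j], a[i]
      end
    end

Both partition in place; they just walk the array differently, which is why a diagram drawn from one scheme looks wrong if you have the other one in your head.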


This article really resonated with me. I've been trying to teach this way of thinking to juniors, but with mixed results. They tend to just whack at their code until it stops crashing, while I can often spot logic errors in a minute of reading. I don't think it's that hard, just a different mindset.

There's a well-known quote: "Make the program so simple, there are obviously no errors. Or make it so complicated, there are no obvious errors." A large application may not be considered "simple" but we can minimize errors by making it a sequence of small bug-free commits, each one so simple that there are obviously no errors. I first learned this as "micro-commits", but others call it "stacked diffs" or similar.

I think that's a really crucial part of this "read the code carefully" idea: it works best if the code is made readable first. Small readable diffs. Small self-contained subsystems. Because obviously a million-line pile of spaghetti does not lend itself to "read carefully".

Type systems certainly help, but there is no silver bullet. In this context, I think of type systems a bit like AI: they can improve productivity, but they should not be used as a crutch to avoid reading, reasoning, and building a mental model of the code.


Reading this, I couldn't help but think these guys really know where their towel is. The opposite of enshittification?


Isn't this because there isn't a for-profit company running it? They don't have to enshittify to make money for investors.


Beautiful. I had that exact computer model 30 years ago.


> Don't we have decades of research about the improvements in productivity and correctness brought by static type checking?

Yes, we have decades of such research, and the aggregate result of all those studies is that no significant productivity gain can be demonstrated for static over dynamic typing, or vice versa.


Not sure what result you are referring to, but in my experience many of the academic research papers use "students" as test subjects. This is especially fucked up when you want software engineering results. Outside Google et al., where you can get corporate-sanctioned software engineering data at scale, I would be wary that most academic results in the area could be garbage.


Are you referring to a specific review?


No, please, just no.

The idea of using PUT, DELETE, or PATCH here is entirely misguided. Maybe it was a good idea, but history has gone in a different direction so now it's irrelevant. About 20 years ago, Firefox attempted to add PUT and DELETE support to the <form> element, only to roll it back. Why? Because the semantics of PUT and DELETE are not consistently implemented across all layers of the HTTP infrastructure—proxies, caches, and intermediary systems. This inconsistency led to unpredictable failures, varying by website, network, and the specific proxy or caching software in use.

The reality we live in, shaped by decades of organic evolution, is that only GET and POST are universally supported across all layers of internet infrastructure.

Take a cue from the WHATWG HTML5 approach: create your RFC based on what is already the de facto standard: GET is for reading, and POST is for writing. The entire internet infrastructure operates on these semantics, with little to no consideration for other HTTP verbs. Trying to push a theoretically "correct" standard ignores this reality and, as people jump on the hype train, will consume significant time and resources across the industry without delivering proportional value. It's going to be XHTML all over again; it's going to be IPv6 all over again.

Please let's just use what already works. GET for reading, POST for writing. That’s all we need to define transport behavior. Any further differentiation—like what kind of read or write—is application-specific and should be decided by the endpoints themselves.

Even the <form> element’s "action" attribute is built for this simplicity. For example, if your resource is /tea/genmaicha/, you could use <form method="post" action="brew">. Voilà, relative URLs in action! This approach is powerful, practical, and aligned with the infrastructure we already rely on.

Let’s not overcomplicate things for the sake of theoretical perfection. KISS.


> About 20 years ago, Firefox attempted to add PUT and DELETE support to the <form> element, only to roll it back. Why? Because the semantics of PUT and DELETE are not consistently implemented across all layers of the HTTP infrastructure—proxies, caches, and intermediary systems.

This is incorrect, according to this comment from the Firefox implementer who delayed the feature. He intended the rollback to be temporary. [0]

> The reality we live in, shaped by decades of organic evolution, is that only GET and POST are universally supported across all layers of internet infrastructure.

This is also incorrect. The organic evolution we actually have is that servers widely support the standardized method semantics in spite of the incomplete browser support. [1] When provided with the opportunity to take advantage of additional methods in the client (via libraries), developers use them, because they are useful. [2][3]

> Take a cue from the WHATWG HTML5 approach: create your RFC based on what is already the de facto standard: GET is for reading, and POST is for writing.

What you're describing isn't the de facto standard, it is the actual standard. GET is for reading and POST is for writing. The actual standard also includes additional methods, namely PUT, PATCH, and DELETE, which describe useful subsets of writing, and our proposal adds them to the hypertext.

> Trying to push a theoretically "correct" standard ignores this reality and, as people jump into the hype train, will consume significant time and resources across the industry without delivering proportional value. It's going to be XHTML all over again, it's going to be IPv6 all over again.

You're not making an actual argument here, just asserting that it takes time—I agree—and that it has no value—I disagree, and wrote a really long document about why.

[0] https://alexanderpetros.com/triptych/form-http-methods#ref-6

[1] https://alexanderpetros.com/triptych/form-http-methods#rest-...

[2] https://alexanderpetros.com/triptych/form-http-methods#usage...

[3] https://alexanderpetros.com/triptych/form-http-methods#appli...


> This is incorrect, according to this comment from the Firefox implementer who delayed the feature. He intended the roll back to be temporary. [0]

I see no such thing in the link you have there. #ref-6 starts with:

> [6] On 01/12/2011, at 9:57 PM, Julian Reschke wrote: "One thing I forgot earlier, and which was the reason

But the link you have there [1] does not contain any such comment. Wrong link?

[1] https://lists.w3.org/Archives/Public/public-html-comments/20...

(will reply to other points as time allows, but I wanted to point out this first)


You're right about the quote, thanks for pointing that out. And somehow I can't find the original one anymore, which is frustrating. I've replaced it with a different quote from the same guy saying the same thing elsewhere in the discussion.


grep has a --line-buffered option that does the job fine in most cases. Just set grep='grep --line-buffered' in your aliases; that way you get the correct behavior when you tail logs piped through a sequence of greps, and you still avoid the performance penalty in scripts (where the alias doesn't apply).

