Hacker News | 2sk21's comments

Reviewing someone else's large pull request is like having a second task in parallel with what you are working on yourself!


So don't do it in parallel.

Completely park other tasks, spend time on the review and record that time appropriately.

There's nothing wrong with saying you spent the previous day doing a large review. It's a part of the job, it is "what you're working on".


You might as well go into HR. Everyone knows reviewing other people's PRs is like nurturing their kids at the expense of your own.


Well, then just don't play the game. Agree as a team that everyone's PRs get accepted immediately without any review. At least you won't have to wait.


I'm semi-retired now, but I spent most of my career at a Bell Labs-caliber place (I was the dumbest person there) before "PR" and "code review" became part of the lexicon, and yes, everyone was good enough not to mess things up too badly.


I don't understand what you're trying to say.


It's not "like" another task, it IS another task!


Yeah, but it is just a quick look: "yep", "yep", "oh, what about this?", "wow, we dodged a bullet there". It's like self-managed error correction that the collective does on its own. Fast, simple, and produces good results. The less software you write, the more this resonates.


Telecom vendors were doing exactly this before the dotcom crash of 2000.


I'm curious: Do you scrutinize every line of code that's generated?


At first I did not. Now I have learned that I have to.

You have to watch Claude Code like a hawk, because it's inconsistent. It will cheat, give up, change direction, and not make it clear to you that that's what it's doing.

So, while it's not "junior" in capabilities, it is definitely "junior" in terms of your need as a "senior" to thoroughly review everything it does.

Or you'll regret it later.


I read this point in the article with bafflement:

"Learn when a problem is best solved manually."

Sure, but how? This is like the vacuous advice for investors: buy low and sell high.


By trying things and seeing what it's good and bad at. For example, I no longer let it make data modelling decisions (both for client-local data and database schemas), because it had a habit of coding itself into holes it had trouble getting back out of, e.g. duplicating data that it then had difficulty keeping in sync, where a better model from the start might have been a more normalised structure (a rough sketch of the difference is below).

But I came to this conclusion by first letting it try to do everything and observing where it fell down.
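
To make that concrete, here's a tiny illustration (hypothetical names, nothing from my actual project): in the duplicated shape every order carries its own copy of the user's email and the copies drift apart, while the normalised shape stores it once and looks it up.

  from dataclasses import dataclass

  # Duplicated shape: every order carries its own copy of the email,
  # so an update has to find and rewrite all of them (and usually misses one).
  @dataclass
  class OrderDenormalized:
      order_id: int
      user_email: str

  # Normalised shape: orders reference the user by id; the email lives
  # in exactly one place.
  @dataclass
  class User:
      user_id: int
      email: str

  @dataclass
  class Order:
      order_id: int
      user_id: int

  users = {1: User(1, "old@example.com")}
  orders = [Order(101, 1), Order(102, 1)]
  users[1].email = "new@example.com"   # one update, every order sees it
  print([users[o.user_id].email for o in orders])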


I'm surprised that the article doesn't mention that one of the key factors that enabled deep learning was the use of ReLU as the activation function in the early 2010s. ReLU behaves a lot better than the logistic sigmoid that we used until then.
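
For anyone who hasn't seen it spelled out, here is a minimal NumPy sketch (my own illustration, not from the article) of why that matters: the sigmoid's derivative never exceeds 0.25, so gradients shrink geometrically as they pass back through layers, while ReLU passes them through at full strength wherever a unit is active.

  import numpy as np

  def sigmoid(x):
      return 1.0 / (1.0 + np.exp(-x))

  def sigmoid_grad(x):
      s = sigmoid(x)
      return s * (1.0 - s)          # never exceeds 0.25

  def relu_grad(x):
      return (x > 0).astype(float)  # exactly 1 wherever the unit is active

  x = np.linspace(-3, 3, 7)
  print("sigmoid'(x):", np.round(sigmoid_grad(x), 3))
  print("relu'(x):   ", relu_grad(x))

  # Backprop multiplies these factors layer by layer, so through 20 sigmoid
  # layers the gradient can shrink by up to a factor of 0.25 ** 20:
  print(0.25 ** 20)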


Geoffrey Hinton (now a Nobel Prize winner!) himself did a summary. I think it is the single best summary on this topic.

  Our labeled datasets were thousands of times too small.
  Our computers were millions of times too slow.
  We initialized the weights in a stupid way.
  We used the wrong type of non-linearity.


I'm curious and it's not obvious to me: what changed in terms of weight initialisation?


That is a pithier formulation of the widely accepted summary of "more data + more compute + algo improvements".


No, it isn't. It emphasizes the importance of Glorot initialization and ReLU.
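
For the question above about what changed in initialisation, roughly this (a sketch of the idea, not anyone's exact code): instead of small fixed-variance random weights, Glorot/Xavier scales the variance to the layer's fan-in and fan-out so activations and gradients keep a roughly constant scale from layer to layer; He initialisation is the common variant paired with ReLU.

  import numpy as np

  rng = np.random.default_rng(0)

  def glorot_uniform(fan_in, fan_out):
      # Glorot/Xavier: Var(W) = 2 / (fan_in + fan_out), drawn uniformly
      # from [-limit, limit] with limit = sqrt(6 / (fan_in + fan_out)).
      limit = np.sqrt(6.0 / (fan_in + fan_out))
      return rng.uniform(-limit, limit, size=(fan_in, fan_out))

  def he_normal(fan_in, fan_out):
      # He initialisation, commonly paired with ReLU: Var(W) = 2 / fan_in.
      return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))

  W = glorot_uniform(512, 512)
  print("empirical Var(W):", W.var(), "target:", 2.0 / (512 + 512))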


Also:

nets too small (not enough layers)

gradients not flowing (residual connections)

layer outputs not normalized

training algorithms and procedures not optimal (Adam, warm-up, etc)
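
Two of those are easy to show in a few lines of NumPy (an illustration of the general idea, not any particular paper's architecture): a residual connection adds the block's input back onto its output, so gradients always have an identity path through the layer, and a normalisation step keeps each layer's outputs at a stable scale.

  import numpy as np

  rng = np.random.default_rng(0)

  def layer_norm(x, eps=1e-5):
      # Normalize each row to zero mean and unit variance.
      mu = x.mean(axis=-1, keepdims=True)
      var = x.var(axis=-1, keepdims=True)
      return (x - mu) / np.sqrt(var + eps)

  def residual_block(x, W1, W2):
      # y = x + F(x): the "+ x" gives the gradient an identity shortcut,
      # so deep stacks of these blocks still train.
      h = np.maximum(0.0, layer_norm(x) @ W1)   # ReLU
      return x + h @ W2

  d = 64
  x = rng.normal(size=(8, d))
  W1 = rng.normal(scale=np.sqrt(2.0 / d), size=(d, d))
  W2 = rng.normal(scale=np.sqrt(2.0 / d), size=(d, d))
  print(residual_block(x, W1, W2).shape)   # (8, 64)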


As compute has outpaced memory bandwidth, most recent stuff has moved away from ReLU. I think Llama 3.x uses SwiGLU. That's still probably closer to ReLU than to the logistic sigmoid, but it's back to being something smoother than ReLU.
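
For anyone curious, SwiGLU is roughly this (a sketch from memory, so check the Llama code for the exact projection sizes): a gated unit where one linear projection goes through SiLU/Swish and is multiplied elementwise with a second projection.

  import numpy as np

  def silu(x):
      # SiLU / Swish: x * sigmoid(x) -- smooth, unlike ReLU's hard corner at 0.
      return x / (1.0 + np.exp(-x))

  def swiglu(x, W, V):
      # SwiGLU(x) = SiLU(x @ W) * (x @ V); in a transformer FFN a third
      # projection usually maps the result back down to the model dimension.
      return silu(x @ W) * (x @ V)

  rng = np.random.default_rng(0)
  x = rng.normal(size=(4, 16))
  W = rng.normal(size=(16, 32))
  V = rng.normal(size=(16, 32))
  print(swiglu(x, W, V).shape)   # (4, 32)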


Indeed, there have been so many new activation functions that I stopped following the literature after I retired. I am glad to see that people are trying out new things.


I would be happy if it could search through my enormous reading list in Safari.


I worked summers in a lab in the 1980s that had Silicon Graphics, Apollo and Sun workstations. The Sun was the easiest to program by far so it got the most use.


When it comes to certificates, mess-ups happen frequently in big companies. When I worked at IBM, I initially had the responsibility for obtaining and renewing the code signing certificate for a Java applet that we used in our product. I handed off the responsibility to another employee when I left that group. I was on vacation in the middle of nowhere when I got a panicked call from the product manager asking me to renew the certificate as the employee had quit. I had to spend half a day walking someone through the renewal process - ruined my day.


I feel for you. When I worked at IBM I was surprised they did not have a "known" central authority for creating/maintaining certs. We tried to find out if one existed, but no luck.

In the dept I was in, it was expected that the downstream system people would create the certs, and of course I would say 85% of them did not even know what a cert was. When renewal time came, we got blank stares when we mentioned the private cert. Or the person who created the cert had left and never trained their replacement. Crazy situation.

I hope things changed since I left.


Yeah - it was pretty chaotic when I was there too. We had to use the internal purchase order system to buy certificates from a CA. It was cumbersome as it required several levels of approval. I have long since quit IBM thankfully and had forgotten all about it until I saw this post today :-)


I have certificate-related horror stories from projects in the past. It takes a long time to train people to care for all the details and renewals. I spent weeks trying to get an upset client to understand that they had messed up the CSRs and had to ask the CA to issue new ones. The client refused to accept the simple fact that they made a mistake and wanted an exception to how PKI works.


I'm sure that I've spent a large part of my time trying to figure out why certs weren't working, converting them from one format to another, and randomly trying things until they work.

Just today I had to fix our internal certs because for the new ones someone forgot to include the intermediate cert in the chain, making it impossible to use a specific CLI tool. Web browsers didn't complain, just the CLI sync tool :)
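
A strict client is a handy canary for that. Here's a minimal sketch using Python's standard library (swap in your own host): a default-verified handshake fails when the server doesn't send the intermediate, even though browsers often paper over it with cached or AIA-fetched intermediates.

  import socket
  import ssl

  def check_chain(host, port=443):
      # Verify against the system trust store without fetching missing
      # intermediates, so an incomplete chain fails here even when a
      # browser happily accepts it.
      ctx = ssl.create_default_context()
      try:
          with socket.create_connection((host, port), timeout=5) as sock:
              with ctx.wrap_socket(sock, server_hostname=host) as tls:
                  return True, tls.version()
      except ssl.SSLCertVerificationError as exc:
          return False, exc.verify_message

  print(check_chain("example.com"))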


Cryptography is a cruel mistress. One bit off and you're out.


Articles like this are why I keep coming back to HN: I was using an LDR (Light Dependent Resistor) for a similar task, but this LED idea is fascinating. It would never have occurred to me!


The photoresistor is more linear and gives you built-in smoothing over a longer period of time, but (like other photodiodes) the LED is much faster.
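
If anyone wants to try the LED trick, this is the usual approach as I understand it (a rough MicroPython sketch with a made-up pin number, so treat it as a starting point, not gospel): reverse-bias the LED to charge its junction capacitance, float the pin, and time how long the charge takes to leak away; brighter light discharges it faster.

  import time
  from machine import Pin

  LED_CATHODE = 2   # hypothetical GPIO; anode to GND, cathode to this pin

  def read_led_light(timeout_us=500_000):
      pin = Pin(LED_CATHODE, Pin.OUT, value=1)   # reverse-bias: charge the junction
      time.sleep_us(10)
      pin.init(Pin.IN)                           # float the pin and watch it decay
      start = time.ticks_us()
      while pin.value() == 1:
          if time.ticks_diff(time.ticks_us(), start) > timeout_us:
              break
      return time.ticks_diff(time.ticks_us(), start)   # microseconds to discharge

  print(read_led_light())   # smaller number = brighter light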


Great work! Quite an interesting collection of hardware you have built!

