
Our AI code review bot (Codacy) is just an LLM that compiles every linter rule, and it might be the most annoyingly useless thing. For example, it will ding a backend NodeJS PR for not considering Opera browser limitations.

Furthermore, in most of the code reviews I perform, I rarely leave any commentary. There are so many frameworks and libraries today that solve whatever problem you have that, unless someone adds complex code or puts a file in a goofy spot, it's an instant approval. So an AI bot doesn't add much to what is already a minimal, non-problem task.



Our approach is to leave PR comments as pointers to the areas where the reviewer most needs to look, so they can forget the rest. I want good code reviews but don't want to waste anyone's time.


To be honest, I tried working with GitHub Copilot to see if it could help junior devs focus on the important parts of a PR and increase their efficiency, but... I've found it to be worse than useless at identifying where the real complexities and bug opportunities actually are.

When it does manage to identify the areas of a PR with the highest importance to review, other problematic parts of the PR will go effectively unreviewed, because the juniors are trusting that the AI tool was 100% correct in identifying the problematic spots and nothing else is concerning. This is partially a training issue, but these tools are being marketed as if you can trust them 100% and so new devs just... do.

In the worst cases I see, which happen a good 30% of the time at this point based on some rough napkin math, the AI directs junior engineers into time-wasting rabbit holes around things that are actually non-issues while actual issues with a PR go completely unnoticed. They spend ages writing defensive code and polyfills for things that the framework and its included polyfills already 100% cover. But of course, that code was usually AI-generated too, and it's incomplete for the case they're trying to defend against anyway.

So IDK, I still think there's value in there somewhere, but extracting it has been an absolute nightmare for me.


I do this as well for my PRs; it's a good way to leave a little explainer on the code, especially with large merge requests.


Are the junior programmers you hire so good that you don't need to provide training or commentary? I find I spend a lot of time reviewing MRs and leaving commentary. For instance, this past week, I had a junior apply the same business logic across classes instead of creating a class/service and injecting it as a dependency.

That, plus improper handling of exceptions, missed test cases or no tests at all, incomplete types, misunderstandings of nuanced business cases, etc. Automatic approval would leave the codebase in a dire state.


I still pair with juniors, sometimes even the code review is a pairing session. When did I say I don’t need training or commentary? I’m talking about useless AI tools, not day to day work.


I'm mostly asking to see how I can adjust my workflow with juniors because I sometimes find myself drowning trying to maintain a decent codebase - not trying to be accusatory.

I generally use MRs as an opportunity to give feedback like how you'd get feedback on a set of math problem statements. I inferred from "rarely do I ever really leave commentary" that you're not using MRs as a training tool. How else do you train junior engineers?

For context, I work in the financial industry where mistakes are costly and users are hostile so the "accept & merge" workflow may not be for me.


I've honestly considered just making a script that hits the checkbox automatically, because we have to have "code review" for compliance but 99% of the time I simply don't care. Is the change you made gonna bring down prod? No? Okay, ship it.
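
If anyone's curious what that would even look like: a minimal sketch against the GitHub API, assuming a GITHUB_TOKEN env var and the @octokit/rest client, with the org/repo names as placeholders. To be clear, we don't actually run this.

    // Hypothetical auto-approve script (a sketch, not something we run).
    // Assumes GITHUB_TOKEN has repo scope and @octokit/rest is installed.
    const { Octokit } = require("@octokit/rest");

    const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });
    const owner = "our-org"; // placeholder
    const repo = "our-repo"; // placeholder

    async function approveOpenPRs() {
      // List open PRs and rubber-stamp each one with an approving review.
      const { data: prs } = await octokit.rest.pulls.list({ owner, repo, state: "open" });
      for (const pr of prs) {
        await octokit.rest.pulls.createReview({
          owner,
          repo,
          pull_number: pr.number,
          event: "APPROVE",
          body: "LGTM (automated)",
        });
      }
    }

    approveOpenPRs().catch(console.error);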

At some level, if I didn't trust you not to write shitty code, you wouldn't be on our team. I don't think I want to go all the way and say code review is a smell, but not really needing it is what code ownership, good integration tests, and good QA buy you.


Code review is a good onboarding and upskilling tool, as a reviewer.


> At some level if I didn't trust you to not write shitty code you wouldn't be on our team.

Code review isn't about preventing "shitty code" because you don't trust your coworkers.

Of course, if 99% of the time you don't care about the code review then it's not going to be an effective process, but that's a self-fulfilling prophecy.


How do you bootstrap the trust without code review? Always confused how teams without review onboard someone new.


I think you just accept the risk, same as not doing code review. If you're doing something Serious™ then you follow the serious rules to protect your serious business. If it doesn't actually matter that much if prod goes down or you release a critical vulnerability or leak client data or whatever, then you accept the risk. It's cheaper to push out code quickly and clean up the mess, as long as stakeholders and investors are happy.


Pair-programming is one option. It might seem expensive at first glance, but usually both parties get a lot of value from it (possible new ideas for the teacher, if the learner isn't entirely junior, and obviously quick learning for the new person).


We do pair programming for new hires, and it works well. I'm currently frustrated because our code reviews are mandatory due to security certifications but also, given our team structure, completely worthless except as Slack spam asking someone to click the approve button.

* The work that is done is decided beforehand; the code in every PR corresponds to a card that's already been discussed.

* There's no incentive whatsoever to "sneak something in" and if you do so maliciously you'll be fired and maybe prosecuted depending on the damage.

* Your code goes through integration testing and QA so you'll never (immediately) take down prod.

* I have backups coming out my ears that assume the code running on our app servers is actively malicious so you couldn't cause data loss if you tried.

* Norms are all enforced in code, which makes discussions about style pointless. If it passes CI it's good enough.
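
To make "norms enforced in code" concrete, the CI gate is conceptually just a script like the sketch below, run on every PR. The specific linter/formatter/test commands are placeholders for whatever a team actually uses, not our exact setup.

    // Hypothetical CI gate: fail the build unless formatting, lint rules,
    // and the test suite all pass. No human style opinions involved.
    const { execSync } = require("child_process");

    const checks = [
      "npx prettier --check .", // formatting norms
      "npx eslint .",           // lint/style norms
      "npm test",               // test suite / integration tests
    ];

    for (const cmd of checks) {
      try {
        console.log(`Running: ${cmd}`);
        execSync(cmd, { stdio: "inherit" });
      } catch (err) {
        console.error(`Check failed: ${cmd}`);
        process.exit(1);
      }
    }
    console.log("All checks passed; good enough to merge.");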


To play devil's advocate on this point: "There's no incentive whatsoever to "sneak something in" and if you do so maliciously you'll be fired and maybe prosecuted depending on the damage."

How would you discover the snuck in code in a timely fashion?


Some have a smaller, tighter codebase... where it's realistic to pursue consistent application of an internal style guide (something current AI seems well suited to help with, or even take ownership of).


Code style is something that should be enforceable with linters and formatters for the most part without any need for an AI.


That's not what style means.


A linter is well suited for this. If you have style rules too complex for a linter to handle I think the problem is the rules, not the enforcement mechanism.
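
For instance, a handful of ordinary ESLint rules already settle most of the style arguments that come up in review. A generic sketch, not any particular team's config:

    // .eslintrc.js — illustrative style rules a linter enforces automatically,
    // so reviewers never have to comment on them.
    module.exports = {
      env: { node: true, es2022: true },
      parserOptions: { ecmaVersion: "latest", sourceType: "module" },
      rules: {
        semi: ["error", "always"],           // consistent statement termination
        quotes: ["error", "single"],         // one quoting style everywhere
        "max-len": ["warn", { code: 100 }],  // keep lines reviewable
        "no-unused-vars": "error",           // dead code never reaches review
        eqeqeq: ["error", "always"],         // no loose-equality surprises
      },
    };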


My comment here got two downvotes. I was trying to contribute positively to the conversation. I've barely ever downvoted anyone. I upvoted the parent comment even though I disagreed with it (because I appreciated the perspective). The drive-by downvoters on HN bum me out.



