walt_grata's comments

Is that what I should be doing? I'm just encouraging the devs on my team to read Designing Data-Intensive Applications and setting up time for group discussions. Aside from coding and meetings, that is.

This is the bad side of things like OKRs. They push you away from user satisfaction, since that's harder to measure, coupled with real consequences for missing them. People just force adoption without heeding the product signals that come from users rejecting your changes.

Come on Claude, making it not start isn't the same as fixing the bugs.

I don't give a shit how they excuse doing a bad job. If their tools make them that much more productive, then being the developer of those tools should allow them to make great use of them.

Use native for macOS. Use .NET Framework for Windows. Use whatever on Linux.

It's just being lazy and ineffective. I also don't care about whatever "business" justification anyone can come up with for half-assing it.


Fuck. Is there a way to have that degree and not be clueless and toxic to your colleagues and users?

It all comes from "if you can't measure it, you can't improve it". The job of management is to improve things, which means they need to measure them, so they go looking for measures. On an assembly line there are lots of things to measure and improve, and improving many of those things has shown great value.

They want to extend that value into engineering, so they're looking for something they can measure. I haven't seen anyone answer what can be measured to drive a useful improvement, though. I have a good "feeling" that some people I work with are better than others, but most are not so bad that we should fire them, and I don't know how to turn that feeling into something objective.


Yes, the problem of accurately measuring software "productivity" has stymied the entire industry for decades, but people keep trying. It's conceivable that you might be able to get some sort of more-usable metric out of some systematized AI analysis of code changes, which would be pretty ironic.

There’s this really awful MBA tool called a “9-box”…

All evidence continues to point towards NO.

They seem better at working in finance and managing money.

Most models of productivity look like factories with inputs, outputs, and processes. This is just not how engineering or craftsmanship happen.


It's because the purpose of engineering is to engineer a solution. Their purpose is to create profit; engineering gets in the way.

How do you create profit?

No man, it's in the title: master bullshit artist.

Sounds like a descent into madness to me, and I'm somewhat pro-AI.

A few edge cases where it doesn't work don't mean it doesn't work in the majority of cases, or that we shouldn't try to fix those edge cases.

I don't know, dude. Every time I've assumed in good faith that paying for something means ad-free, I've been screwed by some asshole with an MBA getting into a leadership position high enough to push ads through. I'd rather it be explicit.


I get the point, we've all been burnt. But if you're not trusting anybody anyway, why would explicitness in a non-binding blog post / press release soothe you?

We're in the middle of an AI bubble propping up the whole friggin' US economy all by itself, driven mostly by a company that claimed to be a non-profit until just a few years ago.


Because I've been burned by every big tech company I can think of. As for why it would be soothing: it gives me hope that any further legal docs I read will hold to what the post says.

What does AI have to do with this? The sooner that bubble bursts, the better, IMO.


The (Open)AI example was to illustrate the fact that companies can lie about their long-term plans, or just change them whenever. They routinely do. And everybody who believed yesteryear's mission statement then winds up feeling pretty stupid.

I'm considering Kagi a strategic ally in the fight against big tech right now.

It isn't a big tech company (yet). They don't have much of a moat either. Therefore, for the foreseeable future, they will be absolutely dependent on aligning their behaviour with their customers' interests, lest they lose them and go out of business.


Honestly, IMO it's more that when I ask for A but don't strongly enough discourage B, I get A, B, and maybe C, generally implemented poorly. The base systems need more focus and doubt built in before they'll be truly useful for anything beyond greenfield apps, or for generating maintainable code.


Until AI labs have the equivalent of an SLA for giving accurate and helpful responses, it won't get better. They're not even able to measure whether the agents work correctly and consistently.

