
Generating CRUD is like solving cancer in mice: we already have a dizzying array of effective solutions… Ruby on Rails, Access 97, model-first ORMs with GUI mappers. SharePoint lets anyone do all the things easily.

The issue is and always has been maintenance and evolution. Early missteps cause limitations, customer volume creates momentum, and suddenly real engineering is needed.

I’d be a lot more worried about our jobs if these systems were explaining to people how to solve all their problems with a little Emacs scripting. As it is, they’re like hyper-aggressive tech salespeople, happy just to see entanglements, not thinking about the whole business cycle.


Go with Laravel and some admin packages and you generate CRUD pages in minutes. And I think with Django, that is built in.
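Roughly what that built-in piece looks like (sketching from memory, with a hypothetical Invoice model, so treat the details loosely):

    # admin.py: registering a model gives you list/create/edit/delete pages
    from django.contrib import admin
    from .models import Invoice  # hypothetical model

    @admin.register(Invoice)
    class InvoiceAdmin(admin.ModelAdmin):
        list_display = ("number", "customer", "total")
        search_fields = ("number", "customer__name")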

But I don’t think I’ve seen pure CRUD on anything other than a prototype. Add an Identity and Access Management subsystem and the complexity of requirements will explode. Then you add integration to external services and legacy systems, and that’s where the bulk of the work is. And there’s the scalability issue that is always looming.

Creating a CRUD app is barely a level above starting a new project with the IDE wizard.


>Creating a CRUD app is barely a level above starting a new project with the IDE wizard.

For you, maybe. But for a non-programmer who's starting a business or just needs a website, it's the difference between hiring some web dev firm and doing it themselves.


I think the gap between people dealing with JavaScript cruft all day and people doing large-systems backend development is creating a massive conversational disconnect… like, this thread is straight-facedly and seriously discussing reinventing date handling locally for funsies.

I also think that any company creating a reverse-centaur workforce of blind and dumb half-baked devs ritualistically shaking chicken bones at their pay-as-you-go automaton has effectively outsourced their core business to OpenAI/MS while paying for the privilege. And, on the twenty-year timeline, as service and capital costs create crunches, those megacorps will literally be sitting on whole copies of the internal business schematics and critical code of their subservient customers…

They say things, they do other things. Trusting Microsoft not to eat your sector through abusive partner programs and licensing entanglements backed with government capture? Surely the LLMs can explain how that has gone historically and how smart that is going forward.


In light of OpenAI confessing to shareholders there’s no there there (being shocked by and then using Anthropic’s MCP, being shocked by and then using Anthropic’s Skills, opening up a hosted dev platform to milk my awesome LLM business ideas, and now revealing that inline ads à la Google are their best idea so far to, you know, make money…), I was thinking about those LLM project statistics. Something like 5-10% of projects are seeing a nice productivity bump.

Standard distribution says some minority of IT projects are tragi-bad… I’ve worked with dudes who would copy and paste three different JavaScript frameworks onto the same page, as long as it worked…

AirFryers are great household tabletop appliances that help people cook extraordinary dishes their ovens normally couldn’t, faster and easier than ever before. A true revolution. A proper chef can use one to craft amazing food. They’re small and economical, awesome for students.

Chefs just call it “convection cooking” though. It’s been around for a minute. Chefs also know to go hot (when and how), and can use an actual deep fryer if and when they want.

The frozen food bags here have AirFryer instructions now. The Michelin star chefs are still focusing on shit you could buy books about 50 years ago…


In reflection-heavy environments and with injection- and reflection-heavy frameworks (.NET, Java), the distinction is a bit more obvious and relevant. In some cases the mock configuration blossoms into essentially parallel implementations, leading to the brittleness discussed earlier in the thread.

Technically creating a shim or stub object is mocking, but “faking” isn’t using a mocking framework to track incoming calls or internal behaviours. Done properly, IMO, you’re using inheritance and the opportunity through the TDD process to polish & refine the inheritance story and internal interface of key subsystems. Much like TDD helps design interfaces by giving you earlier external interface consumers, you also get early inheritors if you are, say, creating test services with fixed output.

In ideal implementations those stub or “fake” services answer the “given…” part of user stories, leaving minimalistic, focused tests. Delivering hardcoded dictionaries of test data built with appropriate helpers is minimal and easy to keep up to date, without undue extra work, and doing that kind of stub work often identifies early re-use needs/benefits in the code-base. The exact features needed to evolve the system as unexpected change requests roll in are already there, because QA/end-users are the system’s second rodeo, not its first.
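To make that concrete, here’s a minimal sketch (the Customer domain and the names are invented, not from any real codebase): the fake inherits the real service’s interface and answers the “given…” with hardcoded data, no mocking framework, nothing recorded.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Customer:
        id: int
        name: str
        credit_limit: int

    class CustomerService:
        """Real implementation would hit the database."""
        def get(self, customer_id: int) -> Customer:
            raise NotImplementedError

    class FakeCustomerService(CustomerService):
        """Fake: same interface, fixed output, no call tracking."""
        _data = {
            1: Customer(1, "Acme", 10_000),  # "Given a customer with credit..."
            2: Customer(2, "Initech", 0),    # "Given a customer with none..."
        }

        def get(self, customer_id: int) -> Customer:
            return self._data[customer_id]

    def can_order(service: CustomerService, customer_id: int, amount: int) -> bool:
        """Hypothetical unit under test, consuming the service interface."""
        return service.get(customer_id).credit_limit >= amount

    def test_order_rejected_when_customer_has_no_credit():
        assert not can_order(FakeCustomerService(), customer_id=2, amount=50)

The same fake also doubles as an in-memory backend for the integration-style tests mentioned below.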

The mocking antipatterns cluster around ORM misuse and tend to leak implementation details (leading to those brittle tests), and they are often co-morbid with anemic domains and other cargo-cult cruft. Needing intense mocking utilities and frameworks on a system you own is a smell.

For corner cases and exhaustiveness I prefer to be able to do meaningful integration tests in memory as far as possible too (in conjunction with more comprehensive tests). Faster feedback means faster work.


Functional programming makes DSLs easier/possible, so you express your domain in natural domain language/constructs, leading to easier comprehension, standardization, reuse, and testing, thereby improving reliability. This is the exact opposite of write-only code: DSLs are comprehensible/editable to non-programmers. With a strong enough type system these benefits accrue while ensuring the program stays within a subset of more-correct programs than other compilers would allow.

… I mean, if we’re just making global assertions :)

Gimme the “write only” compiler-verified, exhaustively pattern-matched, non-mutable clump in a language I can mould freely any day of the week. I will provide the necessary semantic layer for progress, backed with compiler guarantees. That same poop pile in a lesser language may be unrecoverable.
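If “DSL” sounds hand-wavy, this is the shape I mean, sketched in Python for accessibility (the pricing domain is invented, and Python’s match gives you the ergonomics but not the compiler-verified exhaustiveness a real FP language would):

    from dataclasses import dataclass
    from typing import Union

    @dataclass(frozen=True)
    class Flat:
        amount: float                  # "5 off"

    @dataclass(frozen=True)
    class Percent:
        rate: float                    # "10% off"

    @dataclass(frozen=True)
    class Bundle:
        parts: tuple["Discount", ...]  # "apply these in order"

    Discount = Union[Flat, Percent, Bundle]

    def apply(price: float, discount: Discount) -> float:
        match discount:
            case Flat(amount):
                return max(price - amount, 0.0)
            case Percent(rate):
                return price * (1 - rate)
            case Bundle(parts):
                for inner in parts:
                    price = apply(price, inner)
                return price

    # "10% off, then 5 off the remainder" reads the way the business says it:
    print(apply(100.0, Bundle((Percent(0.10), Flat(5.0)))))  # 85.0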


Challenging my own LLM experiences cynically: for a period it really does feel like I’m interactively getting exactly what I need… but given that the end result is generated and I have to then learn it, I’m left in much the same situation you mentioned of looking at the developer docs where a better cleaner version exists.

Subjectively interacting with an LLM gives a sense of progress, but objectively downloading a sample project and tutorial gets me to the same point with higher quality materials much faster.

I keep thinking about research on file navigation via command line versus using a mouse. People’s subjective sense of speed and capability don’t necessarily line up with measurable outcomes.

LLMs can do some amazing things, but violently copy and pasting stack overflow & randomness from GitHub can too.


Right. This is how I feel. I can get the LLM to generate code that more or less does what I need, but if I objectively look at the result and the effort required to get there, it's still not at the point where it's doing it faster and better than what I could have got manually (with exceptions for certain specific use cases that are not generally applicable to the work I want to do).

The time I save on typing out the program is lost to new activities I otherwise wouldn't be doing.


There’s more than one way to be a fungi.

They made Jimmy Carter sell his peanut farm…

That's the thing though -- no one made Jimmy Carter sell his farm[0].

But Jimmy Carter was an honorable human, and, well...there are fewer people fitting that description sitting behind the Resolute desk, today.

[0] He didn't sell it, he put it into a blind trust. He should have sold it. When he left office, the farm was $1MM in debt.


> LLMs is just a synthetic human

1) ‘human’ encompasses behaviours that include revenge cannibalism and recurrent sexual violence; wish carefully.

2) not even a little bit, and if you want to pretend then pretend they’re a deranged delusional psych patient who will look you in the eye and say genuinely “oops, I guess I was lying, it won’t ever happen again” and then lie to you again, while making sure it happens again.

3) don’t anthropomorphize LLMs, they don’t like it.


Context is king, too: in greenfield startups where you care little about maintenance and can accept redundant front-end frameworks and backend languages? I believe agent swarms can poop out a lot lot lot of code relatively quickly… Copy and paste is faster, though. Downloading a repo is very quick.

In startups I’ve competed against companies with 10x and 100x the resources and manpower on the same systems we were building. The amount of code they could theoretically push wasn’t helping them; they were locked to the code they had actually shipped, and were in a downward hiring spiral because of it.


Here’s the thing - an awful lot of it doesn’t even compile/run, never mind do the right thing. My most recent example was asking it to use Terraform to run an Azure Container App with an environment variable in an existing app environment. It repeatedly made up where the environment block goes, and Cursor kept putting the actual resource in random places in the file.
