The cargo-cult shibboleth of "never put business logic in your database" certainly didn't help, since a lot of developers just turned that into "never use stored procedures or views, your database is a dumb store with indexes."
There's value in not having to hunt in several places for business logic, having it all in one language, etc. I was ambivalent on the topic until I encountered a 12-page query that contained a naive implementation of the knapsack problem. As with most things, dogma comes with a whole host of issues, but in this case I think it's largely benign and likely did more good than harm.
But that is the result of having multiple applications needing to enforce valid states in the database.
"Business logic" is a loose term. The database is the effective store for state so it must enforce states, eg by views, triggers, and procedures.
Other "business logic" can happen outside of the db in different languages. When individual apps each need to enforce valid states, complexity, code, etc. grow with every additional app.
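A minimal sketch of what that enforcement can look like (assuming Postgres; the table, constraint, and trigger names are invented for illustration):

```sql
-- Hypothetical orders table: invalid states are rejected here once,
-- for every application that writes to it.
CREATE TABLE orders (
    id          bigserial PRIMARY KEY,
    status      text NOT NULL,
    total_cents bigint NOT NULL,
    CONSTRAINT valid_status CHECK (status IN ('pending', 'paid', 'shipped')),
    CONSTRAINT non_negative_total CHECK (total_cents >= 0)
);

-- A trigger can enforce transitions a CHECK can't express,
-- e.g. "an order can only ship after it has been paid".
CREATE FUNCTION enforce_status_transition() RETURNS trigger AS $$
BEGIN
    IF NEW.status = 'shipped' AND OLD.status <> 'paid' THEN
        RAISE EXCEPTION 'order % cannot ship from status %', OLD.id, OLD.status;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER orders_status_transition
    BEFORE UPDATE OF status ON orders
    FOR EACH ROW EXECUTE FUNCTION enforce_status_transition();
```

Every app that writes to the table gets these rules for free instead of reimplementing them.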
Other than a few ill-advised attempts to implement microservices infrastructure by well-intentioned co-workers, I've not encountered situations where multiple applications needed to access a single data store. While I'm sure there are valid use cases out there, I suspect they're rare and should be treated like the outliers they are.
It was absolutely under version control and there was a full test suite. The guy that wrote it is easily in the top 3 smartest human beings I've ever met and an incredibly talented developer. Unfortunately a lot of his stuff required being at the same level on the IQ bell curve, which meant it was functionally unmaintainable by anyone else. If you're familiar with the Story of Mel, it was kinda like that.
A lot of people probably think it's better to keep the database "easy to swap". Which is silly; it's MUCH easier to change your application layer than your database.
genuinely curious, can you steel man stored procedures? views make intuitive sense to me, but stored procedures, much like meta-programming, need to be used sparingly IMO.
At my new company, the unchecked use of stored procedures has really hurt the company's ability to build new features, so I'm surprised to see what seems like sound advice, "don't use stored procedures", called out as a cargo cult.
My hunch is that the problems with stored procedures actually come down to version control, change management and automated tests.
If you don't have a good way to keep stored procedures in version control, test them and have them applied consistently across different environments (dev, staging, production) you quickly find yourself in a situation where only the high priests of the database know how anything works, and making changes is painful.
Once you have that stuff in git, with the ability to run automated tests and robust scripting to apply changes to all of your environments (I still think Django's migration system is the gold standard for this, though I've not seen it used specifically with stored procedures myself), their drawbacks are a lot less notable.
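To make that concrete, a minimal sketch of what a versioned stored-procedure change might look like (assuming Postgres; the file name, function, and table are invented for illustration):

```sql
-- migrations/0042_order_total_fn.sql
-- Checked into git, reviewed like any other code, and applied by the
-- same migration runner in dev, staging, and production.
CREATE OR REPLACE FUNCTION order_total(p_order_id bigint)
RETURNS bigint AS $$
    SELECT COALESCE(SUM(quantity * unit_price_cents), 0)
    FROM order_items
    WHERE order_id = p_order_id;
$$ LANGUAGE sql STABLE;
```

CREATE OR REPLACE keeps the script idempotent, which is what makes it safe for a migration runner to apply the same file to every environment.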
You give no reasons why you think it's sound advice.
My experience is the following:
1) Transactions are faster when they are executed as a SQL function, since you cut down on network roundtrips between statements. It also prevents users from doing fancy shenanigans with the network after calling startTransaction.
2) It keeps your business logic separated from your other code that does caching/authorization/etc.
3) Some people say it's hard to test SQL functions, but since pglite it's a non-issue IMO.
4) Logging is a little worse, but `raise notice` is your friend. (A sketch of points 1 and 4 follows below.)
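A minimal sketch of points 1 and 4 together (Postgres; the transfer function and accounts table are invented for illustration):

```sql
-- Both updates run inside the function's implicit transaction: a single
-- roundtrip, and no window for the client to misbehave between statements.
CREATE OR REPLACE FUNCTION transfer(p_from bigint, p_to bigint, p_cents bigint)
RETURNS void AS $$
BEGIN
    UPDATE accounts SET balance_cents = balance_cents - p_cents WHERE id = p_from;
    UPDATE accounts SET balance_cents = balance_cents + p_cents WHERE id = p_to;
    -- Point 4: RAISE NOTICE gives you lightweight logging from inside.
    RAISE NOTICE 'transferred % cents from % to %', p_cents, p_from, p_to;
END;
$$ LANGUAGE plpgsql;

-- Client side: one statement instead of BEGIN / UPDATE / UPDATE / COMMIT.
SELECT transfer(1, 2, 500);
```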
> At my new company, the unchecked use of stored procedures has really hurt the company's ability to build new features
Isn't that just because most engineers aren't as well versed in SQL as they are in other programming languages?
Stored procedures are great for bulk data processing. SQL natively operates on sets, so it's pretty silly to ship a dataset over the wire, process it iteratively in a less efficient language, and then transfer the result set back to the database.
Like any tool, you just have to understand when to use it and when not to.
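For instance, a set-based statement (invented tables) does in one pass, inside the database, what a client-side loop would need one roundtrip per row for:

```sql
-- Set-based: the whole category gets repriced in a single statement.
UPDATE products p
SET    price_cents = p.price_cents * 110 / 100
FROM   categories c
WHERE  p.category_id = c.id
  AND  c.name = 'hardware';

-- The iterative alternative: SELECT every row out, loop in application
-- code, and issue one UPDATE per row back over the wire.
```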
It’s about what you want to tie to which system. Let’s say you keep some data in memory in your backend: would you forbid engineers from putting code there too, and force it a layer out to the front end - or make up a new layer in between the front end and this backend just because some blogs tell you to?
If not, why would you then avoid putting code alongside your data at the database layer?
There are definitely valid reasons to not do it for some cases, but as a blanket statement it feels odd.
Stored procedures can do things like smooth over transitions by having a query not actually know or care about an underlying structure. They can cut down on duplication or round trips to the database. They can also be a nightmare like most cases where logic lives in the wrong place.
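A made-up example of the "smooth over transitions" point: callers keep calling the same function while the underlying storage changes underneath it.

```sql
-- Before a migration this read customers.email; afterwards the data
-- lives in a separate contact_methods table, and no caller had to change.
CREATE OR REPLACE FUNCTION get_customer_email(p_customer_id bigint)
RETURNS text AS $$
    SELECT value
    FROM contact_methods
    WHERE customer_id = p_customer_id
      AND kind = 'email'
    ORDER BY is_primary DESC
    LIMIT 1;
$$ LANGUAGE sql STABLE;
```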
You have the same problem that you have with legal LLMs; an LLM is incapable of providing legal or regulatory-involved advice, and anyone using an LLM for such purposes (even leaving aside hallucinations) forfeits any justifiable reliance defense. There's a role for LLMs, but no one with legal responsibility over reporting could or would possibly rely on an LLM for complex regulatory and rules analysis, not when there's the risk of your wardrobe being replaced with orange jumpsuits.
That’s not because of the FDA, that’s because of CEPS. If the USG negotiated drug prices the way France does, there’d be far less disparity in average pricing. (Given the continual litany of safety, efficacy, and dosage control issues with imported drugs, FDA isn’t regulating them enough, largely because the inspection budget just isn’t there.)
Even now, the newspaper's reporters do so as a matter of routine.
Reporting and editorial are separate units in newspapers; the point being made is that, while reporting continues to properly disclose potential ownership conflicts of interest, editorial and op-ed, following Bezos taking direct control of them, are not doing so.
Of course, the Post is Bezos' toy, and there's no law that says he can't use editorial as a megaphone for his personal interests without disclosing them (or, in fact, even use the reporting side for the same purpose!), but you can't do that and still claim that the paper has any of the Grahams' pedigree left in it, and this is very much a change from Bezos' earlier ownership, in which he largely stayed hands-off on editorial decisions.
Not only does gp seem to have a poor grasp of the differences between Opinion and news reporting, they also fail to correlate the problem with Bezos' ownership, so it seems to them like NPR's article is conflicting with itself when it isn't, in the slightest.
It’s basically sifting through ore; 99% of the people who see it aren’t the target, it’s the 1% of viewers who are buyers or funders who you otherwise couldn’t directly advertise to. Same reason you see defense contractors putting up ads for weapons systems in the DC metro.
That's really the core issue. Developer-signed packages (npm's current attack model is "Eve doing a man-in-the-middle attack between npm and you," which is not exactly the most common threat here) and a transparent key registry should be minimal kit for any package manager, even though practically every ecosystem is bereft of that. Hardening API surfaces with additional MFA isn't enough; you have to divorce "API authentication" from "cryptographic authentication" so that compromising one doesn't affect the other.
How are users supposed to build and maintain a trust store?
In a hypothetical scenario where npm supports signed packages, let's say the user is in the middle of installing the latest signed left-pad. Suddenly, npm prints a warning that says the identity used to sign the package is not in the user's local database of trusted identities.
What exactly is the user supposed to do in response to this warning?
Imagine a hobbyist developer with a ~ $0 budget trying to publish their first package. How many thousands of km/miles are you expecting them to travel so they can get enough vouches for their package to be useful for even a single person?
Now imagine you're another developer who needs to install a specific NPM package published by someone overseas who has zero vouches by anyone in your web of trust. What exactly are you going to do?
In reality, forcing package publishers to sign packages would achieve absolutely nothing. 99.99% of package consumers would not even bother to begin building a web of trust, and would just blindly trust any signature.
The remaining 0.01% who actually try are either going to fail to gain any meaningful access to a WoT, or they're going to learn that most identities of package publishers are completely unreachable via any WoT whatsoever.
One thing is that a lot of economic activity was front-loaded into the first few quarters as businesses scrambled to get inventory on board ahead of tariffs. Now companies have burned through that inventory, so inflationary impacts are going to start working their way through the supply chain in earnest, and we're going to see a concomitant slowdown in economic activity as that acts as a persistent drag across multiple sectors. In practice you're looking at something equivalent to a 3-4% federal sales tax on all purchases, but keep an eye on where it falls on relatively inelastic goods, which will have an outsized effect on consumer finances.
As an FYI that might be helpful to some: in the case of sales, there's a positive legal obligation to maintain call recordings, so in the event of a courtroom dispute, the failure to produce them can lead to an adverse inference instruction.
Not gonna spend PACERbux on this to find out, but not sure how they’re arguing there’s an actual case or controversy to rule on, since no one’s trying to domesticate a judgment against them in the US. This is just an attempt to preemptively weaponize the US courts against the UK government, good chance it gets bounced for lack of jurisdiction.
Packer and ag consolidation is a huge problem, but the underlying issue here is climate change and long-lasting droughts; some of the issues with herd size — the smallest since about 1950 — come from COVID hangover when cows weren’t getting processed and price-per-head plummeted, but the immediate problem is that ranchers can’t support large herds due to lack of rain and cost of feed. We’re looking at long-term cost trends that are unlikely to reverse or even be significantly ameliorated anytime soon.
> the immediate problem is that ranchers can’t support large herds due to lack of rain and cost of feed
Ranchers that can support large herds (2,000+) are those who earn a net profit [0] and are consolidating because processors do not want to support small farms.
While environmental factors do play a role, saying they're the primary reason is greenwashing of the real oligopolistic tendencies arising in the American ag industry.