Hacker News | polote's comments

RLS and triggers don't scale either


Yeah, I'm going to remove triggers in the next deploy of a POS system since they are adding 10-50ms to each insert.

That becomes a problem if you are inserting 40 items into the order_items table.


> Yeah, I'm going to remove triggers in the next deploy of a POS system since they are adding 10-50ms to each insert.

Do you expect it to be faster to do the trigger logic in the application? Wouldn't it be slower to execute two statements from the application (even if they are in a transaction) than to rely on triggers?


How do you handle trigger logic that compares old/new without having a round trip back to the application?


Do it in a stored procedure, not a trigger. Triggers have their place, but a stored procedure is almost always better. Triggers can surprise you.


I don't follow how you would do that in a stored procedure outside of a trigger.


I think instead of performing an INSERT you call a stored proc that does the insert and some extra stuff.
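Something like this, roughly (the order_items/orders tables and columns here are hypothetical, just for illustration):

    -- Hypothetical tables/columns: one function call instead of INSERT + trigger.
    CREATE FUNCTION add_order_item(p_order_id bigint, p_product_id bigint,
                                   p_qty int, p_price numeric)
    RETURNS void LANGUAGE plpgsql AS $$
    BEGIN
      INSERT INTO order_items (order_id, product_id, qty, price)
      VALUES (p_order_id, p_product_id, p_qty, p_price);

      -- the "extra stuff" a trigger would otherwise do, done explicitly here
      UPDATE orders SET total = total + p_qty * p_price WHERE id = p_order_id;
    END;
    $$;

    -- the application then calls: SELECT add_order_item(42, 7, 3, 9.99);

The old/new comparison from the question above would then just be a SELECT of the current row inside the function, with no extra round trip.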


Yes, we already have all of our business logic in Postgres functions (create_order, create_partial_payment, etc.).

Doing the extra work in stored procedures is noticeably faster than relying on triggers.


Hmm, imho, triggers do scale, they are just slow. But as you add more connections, partitions, and CPUs, the slowness per operation remains constant.


Triggers are not even particularly slow. They just hide the extra work that is being done and thus sometimes come back to bite programmers by adding a ton of work to statements that look like they should be quick.
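A minimal illustration (hypothetical tables): the INSERT at the end looks trivial, but the trigger silently adds an UPDATE of another table to every inserted row.

    CREATE FUNCTION bump_order_total() RETURNS trigger LANGUAGE plpgsql AS $$
    BEGIN
      -- hidden work: every order_items insert also locks and updates orders
      UPDATE orders SET total = total + NEW.qty * NEW.price WHERE id = NEW.order_id;
      RETURN NEW;
    END;
    $$;

    CREATE TRIGGER order_items_total
    AFTER INSERT ON order_items
    FOR EACH ROW EXECUTE FUNCTION bump_order_total();

    -- "looks like it should be quick", but now does extra work per row:
    INSERT INTO order_items (order_id, product_id, qty, price) VALUES (42, 7, 3, 9.99);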


that, and keeping your business logic in the database makes everything more opaque!


> that, and keeping your business logic in the database makes everything more opaque!

Opaque to whom? If there's a piece of business logic that says "After this table's record is updated, you MUST update this other table", what advantages are there to putting that logic in the application?

When (not if) some other application updates that record, you are going to have a broken database.

Some things are business constraints, and as such they should be moved into the database if at all possible. The application should never enforce constraints such as "either this column or that column is NULL, but at least one must be NULL and both must never be NULL at the same time".

Your database enforces constraints; what advantages are there to coding the enforcement into every application that touches the database over simply coding the constraints into the database?
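For example, that last rule is a one-line check in the database (table and column names here are made up):

    -- Illustrative: exactly one of the two columns must be NULL
    -- (equivalently, exactly one of them is set).
    ALTER TABLE payments
      ADD CONSTRAINT card_xor_iban
      CHECK (num_nulls(card_id, iban) = 1);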


I think the dream is that business requirements are contained to one artifact and everything else responds to that driver. In an ideal world, it would be great to have databases care only about persistence and be able to swap them out based on persistence needs only. But you're right, in the real world the database is much better at enforcing constraints than applications.


you make good points; i'm overcorrecting from past trigger abuses :)


Have you tried deferring them?
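i.e. something along these lines, if I understand the setup (only constraint triggers can be deferred in Postgres; the table and the do_bookkeeping() function here are hypothetical):

    CREATE CONSTRAINT TRIGGER order_items_bookkeeping
    AFTER INSERT ON order_items
    DEFERRABLE INITIALLY DEFERRED
    FOR EACH ROW EXECUTE FUNCTION do_bookkeeping();  -- some existing per-row function

    BEGIN;
    INSERT INTO order_items (order_id, product_id, qty, price) VALUES (42, 7, 3, 9.99);
    -- ... the other 39 inserts ...
    COMMIT;  -- the deferred per-row trigger bodies all fire here, not inside each INSERT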


Neither do foreign keys the moment you need to shard. Turns out that there's no free lunch when you ask your database to do "secret extra work" that's supposed to be transparent-ish to the user.


Does that only apply when you need to shard within tenants?

If each tenant gets an instance I would call that a “shard” but in that pattern there’s no need for cross-shard references.

Maybe in the analytics stack but that can be async and eventually consistent.


Bad guess :)


Do you know of a model other than gridformer for table detection that has an available implementation somewhere?


We had to roll our own from research papers unfortunately.

The number one takeaway we got was to use much larger images than anything anyone else ever mentioned in order to get good results. A rule of thumb was that if you print the PNG of the image, it should be easily readable from 2m away.

The actual model is proprietary and stuck in corporate land forever.


Kinda came to the same conclusion as OP, building in the same space. There is so much innovation going on currently that whatever you do today, two other people will do it better tomorrow. Which is good news for us but a difficult time for builders.


Ditto. This was my conclusion after spending a bit of time building https://robocoder.app/

Coupled with the fact that devs prefer open source tools and are capable of (and often prefer) making their own tooling, it never seemed like a great market to go after. I also encountered a lot of hostility trying to share yet another AI developer tool.

(Note I am one of those developers who prefer open source tools — which should’ve been a hint…)


Good news is the market is still very early; a lot of people still don't know these things exist. As long as you keep showing up you're gonna get a piece of the pie.


Is the tech to do facial recognition at this accuracy available to the public?

Last time I checked there was deepface (https://github.com/serengil/deepface/tree/master) but it was far from working as well as that.


This idea that Postgres should be used for everything really needs to die in a professional context.

I was hired at a company of 10 devs that did just that. All backend code was PostgreSQL functions, the event queue was built on Postgres, security was done with RLS, the frontend used PostGraphile to expose these functions over GraphQL, and triggers were used to validate information on insert/update.

It was a mess. Postgres is a wonderful database; use it as a database, but don't do anything else with it.

Before some people come and say "things were not done the right way, people didn't know what they were doing": the devs were all Postgres fans contributing to the surrounding projects, and there was a big review culture, so people were really trying to do their best.

The queue system was locking all the time between concurrent requests => a queue system built on Postgres only really works for a pet project.
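(For context, the worker side of such a queue is typically a polling query along these lines; the jobs table here is hypothetical, and FOR UPDATE SKIP LOCKED is the usual way to keep concurrent workers from blocking each other.)

    -- Hypothetical jobs table: claim one pending job without blocking other workers.
    WITH next_job AS (
      SELECT id FROM jobs
      WHERE status = 'pending'
      ORDER BY id
      LIMIT 1
      FOR UPDATE SKIP LOCKED
    )
    UPDATE jobs SET status = 'running'
    FROM next_job
    WHERE jobs.id = next_job.id
    RETURNING jobs.id;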

All the requests were 3 or 4 times longer due to the fact that RLS has to be checked on each row. We have now migrated all of our API, and each time the SQL duration decreased by that factor (and it is the exact same SQL request). And the DB was locking all the time because of that; it feels like RLS breaks Postgres's deadlock detection algorithm.

SQL is a super verbose language; you spend your time repeating the same lines of code, and it makes basic functions about 100 lines long when they would be 4-5 lines in Node.js.

It is impossible to log things inside these functions to make sure they will work, and when they don't, you have no way to know which path the code went through.

You can't make external API calls, so you have to use a queue system to do even basic things.

There are no real libraries, so everything needs to be reimplemented.

It is absolutely not performant to code inside the DB; you can't use a map, so you end up writing O(n^2) code all the time.

APIs were needed for the external world, so there was actually another service in front of the database for some cases, and a lot of logic was reimplemented inside it.

There was downtime at each deployment because we had to drop all the RLS policies and recreate them (despite the fact that all the code was in insert-if-not-update clauses). It worked at the beginning, but at some point it stopped working and there was no way to find out why, so: drop all the RLS policies and recreate them.

It is impossible to hire devs who want to work on that stack and be business oriented; you attract only purely tech people who care only about doing their own technical stuff.

We are almost out of it now after 1 year of migration work, and I don't see anything positive about this Postgres-does-everything culture compared to a regular Node.js + Postgres-as-a-database + SQS stack.

So to conclude: for a pet project it can be great to use Postgres like that; in a professional context you are going to kill the company with this technical choice.


I agree. Postgres as the only piece of your data-layer? Yes. Postgres as your application and business-logic layer? No, thanks.


The vast majority of jobs could be eliminated today without AI and still they are not, so we are very safe. Even with AI getting better and better, we will be.



And how successful has Workplace been compared to Microsoft Teams or Slack?


Moving the goalposts?


For all the people who say this is easy: try it! It's not easy at all; I've tried it and spent a few weeks to get similar performance. Receiving thousands of requests is not the same as making thousands of requests: you can saturate your network, get swamped by the latency of random websites, hit sites that never time out, parse multi-megabyte malformed HTML, and get infinite redirections.

My fastest implementation in Python actually used threads and was much faster than any async variant.


Well, I don't feel that any of the 3 pieces of advice he gives is good advice.

First, the longer you stay on Heroku, the more complex it is to exit when you really need to, and the less flexible you are in the meantime.

Second, he wishes he had paid for a pit team sooner, but could this money have been better used investing in marketing or sales, like he probably did?

The guy has obviously succeeded as a business owner; would that still be the case if he had implemented this advice? We will never know, but what we know for sure is that not implementing this advice made him successful.


> We will never know, but what we know for sure is that not implementing this advice made him successful.

"Made him successful" implies a causality that is a bit too strong.

Indeed, we will never know for sure.

Maybe implementing this advice would have impeded development of other critical areas of his business.

Maybe it would have helped make his business more successful, as he would have had a more reliable product.

Or maybe the business impact would have been neutral, but would have resulted in better quality of life/less stress for him and his employees.

But in general, the way I read this article is: they made good decisions overall, but like everything in the world, it was not optimal (switching platforms too early, making some big mistakes like the credit card one, etc.).

It's a very interesting read nonetheless, with clear takeaways:

* choose boring tech you know and focus on your product, not the tech, especially in the early days

* grow your infrastructure and complexity with your product needs

* accept you will mess up, but properly learn from it, and grow your organizational knowledge, structure, and processes accordingly.


I think he meant to stay with managed services as long as possible. Which makes sense, as in the early stage the focus has to be on functionality rather than cost/performance optimisation.


I think our success is despite a lot of decisions (specifically the ones in the post), not because of them.


