Hacker News | barryhennessy's comments

"My calendar is a true nightmare."

Check

"calendar systems suck. All of them."

Double check

"I’m trying to break off of big tech as much as I can"

I wish I could check this more ...

I've had similar needs/desires/gripes with my calendars and the terrible state of calendaring apps for a while. So thank you for scratching your own itch and sharing it with us.

I'm curious when you say that "[CalDAV] is an area begging for disruption". Can you enlighten us as to what your wishlist for a better protocol/system/ecosystem might be? (A rant about your pain points would work too.)

Thanks again!


I've seen polylith over the years and it's always piqued my interest.

I'm curious as to what has been built (by yourselves or others) in the 4 (?) years since its release. Have the experiences held up to the initial goals and designs?


Congrats on the launch and hitting HN's front page. Do you mind if I ask how your example site (multiplayer.dev) scaled?

I'm super curious about realtime multiplayer solutions (and I don't think I'm alone). But I find a great lack of info on what running this kind of app would cost. I come from the old-school hold-no-state, request->response->gtfo mentality, and I always have the _feeling_ that it'd be expensive to scale. Not just holding the websocket open, but how much effort do you expend parsing the WAL? How chatty is that kind of persistence mechanism? What other 'gotchas' are there etc.

I'd love nothing more than to dispel that vague feeling with data. I know it wouldn't come close to a full performance analysis, but throwing a few datapoints on a chart would help get a ballpark idea and tune my hype->action conversion.

I could nail down a pile of questions, but I'm sure you know better than I how to measure your own systems. Roughly, I'm wondering:

- how many users did you have?
- how much traffic did you get?
- how much would/did it cost to host on Supabase?
- how many resources did the database/realtime components consume?

Congrats again on the launch and have a nice weekend.


I love the idea, and can definitely see the need.

But I always come to the same question with services that provide auth and user management: you pay a lot of money for someone _else_ to own critical information about your customers. What happens if you want to move away and use a different service, your own, or your customers' own?

Your customer data (at least login) lives in WorkOS' database. How do you get it out? How much does that cost? Are there contractual guarantees around that?

The same goes for your customers' integration points. If the customer has to do any setup to integrate WorkOS for your app, then moving away would involve them making changes. Not necessarily an easy thing to manage.

Not to be negative: I'd be happy to hear that WorkOS have great processes and guarantees around this.


WorkOS doesn't really own the user management database. It's more like an agnostic API to connect with multiple IdPs through protocols like SAML and OIDC. Identity providers such as Okta, OneLogin, and Azure AD are the ones responsible for storing that data.


Interesting. Perhaps I misunderstood it. So is this roughly a kind of managed Keycloak/CAS setup? With its own API and well-managed client libraries?


Is using their API really any easier than using Rails gems? Some gems are more mature than others but usually it’s easy to drop them in and configure.

They claim SSO takes months to implement without their product. Is that true?


It would probably take months to implement SSO with all of the flexibility and ease of use they offer, mainly just because of the built-in integrations with so many providers. The price is pretty steep though, so this would really only be used by the big bucks Enterprise Software™ guys.


It's not. Implementing the OIDC flow from scratch takes half a day to get working and maybe a week to polish. Using available libraries you can do it way faster of course.
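For a sense of scale, the core of the authorization-code flow is just two HTTP legs: redirect the browser to the provider, then exchange the returned code for tokens. A minimal Python sketch (every endpoint and client value here is a hypothetical placeholder; a real integration also needs provider discovery, nonce handling, and ID-token signature verification against the provider's JWKS):

```python
# Sketch of the OIDC authorization-code flow. ISSUER, CLIENT_ID and
# REDIRECT_URI are made-up placeholders, not a real provider.
import secrets
import urllib.parse

ISSUER = "https://idp.example.com"          # assumed provider base URL
CLIENT_ID = "my-client-id"                  # assumed registered client
REDIRECT_URI = "https://app.example.com/callback"

def build_authorization_url(state: str) -> str:
    """Leg 1: send the browser to the provider's authorize endpoint."""
    params = {
        "response_type": "code",
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "scope": "openid email profile",
        "state": state,  # CSRF protection; verify it on the callback
    }
    return f"{ISSUER}/authorize?" + urllib.parse.urlencode(params)

def token_request_body(code: str) -> dict:
    """Leg 2: the server-to-server POST body that exchanges the
    one-time code for tokens at the provider's token endpoint."""
    return {
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": REDIRECT_URI,
        "client_id": CLIENT_ID,
    }

print(build_authorization_url(secrets.token_urlsafe(16)))
```

The polish time tends to go into the parts the sketch skips: state/nonce validation, token verification, and error handling on the callback.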


Most enterprise customers require SAML authentication, which is much more complicated with lots of quirks.


Don't knock it - if it works and only costs 3MB that's a win.

The longer you can scale the product without having to scale the application, the better!


What kind of workflow and synchronization scripts?

These sound like the kind of hidden costs that could turn sqlite's simplicity quite complicated if you don't see them coming.


+1, would love to know more about these!


I'd be interested to know what kind of corruption you were facing.


I only very briefly looked into rqlite. It's very interesting, but if I understand it correctly it's also not geared toward a write-heavy workload (all writes are routed to the same node).

I.e. it's leaning more toward the moderate, but reliable writes, and heavy read use cases?

Please let me know ~if I'm missing anything~ what use cases I'm missing.


That's correct. rqlite replicates SQLite for fault-tolerance and high-availability, not for performance. In fact performance takes a hit, for the reasons you state. But it's no worse (nor better) than something like, say, etcd or Consul.

https://github.com/rqlite/rqlite/blob/master/DOC/FAQ.md#what...
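To make the single-writer point concrete, a hedged sketch of rqlite's write path over its HTTP API (the `/db/execute` endpoint is from the rqlite docs; the host/port are my assumptions). Writes are serviced by the leader, and a follower answers a write with a redirect to it, which is why write throughput is bounded by one node:

```python
# Build a write request against an rqlite node's /db/execute endpoint.
# RQLITE is an assumed local node address.
import json
import urllib.request

RQLITE = "http://localhost:4001"

def build_execute_request(statements):
    """POST a batch of write statements. If the node we hit is a
    follower, rqlite redirects to the leader (urlopen follows it)."""
    return urllib.request.Request(
        f"{RQLITE}/db/execute",
        data=json.dumps(statements).encode(),
        headers={"Content-Type": "application/json"},
    )

# Usage (requires a running rqlite node):
# with urllib.request.urlopen(build_execute_request(
#         [["INSERT INTO foo(name) VALUES(?)", "fiona"]])) as resp:
#     print(json.load(resp))
```

Reads, by contrast, can be served by any node (with tunable consistency), which is why the "moderate writes, heavy reads" framing fits.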


That's a good counter-case to keep in mind, thank you.

I guess the takeaway here is that SQLite isn't for the 'large number of writers' scenario.

p.s.

> I didn't want to be the guy who comes in and says "you are doing it wrong" month 1

Very wise


Impressive numbers, thanks for sharing.

Out of interest, were you running on bare metal/cloud? And what kind of CPU was behind those 24M face compares per second?


Running on bare metal, and those numbers come from a 3.4 GHz i9. The system is a fully integrated single executable, with embedded SQLite. Since I left the firm a year ago, new optimizations have the facial compares down to 40 nanoseconds per face.
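Sanity-checking those figures with quick arithmetic (mine, not the poster's): 24M compares per second works out to roughly 42 ns per compare, so the quoted 40 ns post-optimization number is the same order of magnitude.

```python
# Back-of-the-envelope check on the quoted throughput figures.
per_compare_ns = 1e9 / 24_000_000   # ns per compare at 24M/sec (~41.7)
compares_per_sec = 1 / 40e-9        # rate implied by 40 ns per compare
print(round(per_compare_ns, 1), round(compares_per_sec))
```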

