> A problem with this approach is that it's not monotonic
Whether or not that's bad depends entirely on your platform and how many writes you do. If you're using a massively distributed database like Datastore, Spanner, etc., you want random keys to avoid write hot spots, which cause contention.
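A minimal sketch of the idea, in Python (the function name is hypothetical, not tied to any particular client library): a random key scatters inserts across the keyspace instead of piling them onto one "hot" range.

```python
import uuid

# Hypothetical sketch: a random UUIDv4 key spreads inserts across the
# keyspace, so no single node/range absorbs all the writes.
# A sequential key (auto-increment or timestamp prefix) would instead
# send every new row to the same tail range.
def new_row_key() -> str:
    return uuid.uuid4().hex

print(new_row_key())  # e.g. '3f2a9c...' — effectively uniform over the keyspace
```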
Well, you'd still likely want pseudo-random keys. You'd rather not have the underlying database doing extra work to shuffle records around as the pages get jumbled.
One solution to that is using more complex keys. For example, in one of our more contended tables the index includes an account id (a 32-bit int) followed by the id of the entity being inserted. Inserts for a given account remain contiguous (resulting in less fragmentation), yet there's no single write hotspot, since those writes are distributed across various clients.
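A rough sketch of that composite-key layout, assuming a byte-ordered keyspace (the field widths and the `struct` encoding here are illustrative, not the commenter's actual schema):

```python
import struct

# Hypothetical sketch: prefix each row key with the 32-bit account id,
# then append a 64-bit entity id. Rows for one account sort together
# (contiguous inserts, less fragmentation), while writes from many
# accounts land in different parts of the keyspace.
def composite_key(account_id: int, entity_id: int) -> bytes:
    # Big-endian packing so lexicographic byte order matches numeric order.
    return struct.pack(">IQ", account_id, entity_id)

print(composite_key(42, 1001).hex())  # '0000002a00000000000003e9'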
Not disagreeing. The point is, you need to know your domain, your technology, your write patterns, your downstream systems, etc. to decide whether a specific key scheme works to your advantage. All the more reason not to use natural keys, since they lock you in in that regard.
I don't know how you can successfully maintain or develop software without building an understanding of the underlying domain. I've seen devs try that route, and the quality of their work has never been high.