andrioni's comments | Hacker News

It's a ZDF documentary; I imagine DW only got the rights to show the translated version outside of Germany.

Here's a link to the original: https://www.zdf.de/video/dokus/megacitys-wenn-es-nacht-wird-...


You can just pay extra for extended or infinite retention. https://www.backblaze.com/cloud-backup/features/extended-ver...


I did this; it's come in handy a few times.


One use case where it would be beneficial is tables with batch-style traffic patterns, where you (at least in my experience) need to provision a high enough baseline before autoscaling kicks in.

In some cases I've seen, autoscaling can manage going from ~500 to 10k WCU without heavy throttling (the burst capacity can cover the five to ten minutes before autoscaling kicks in), but not with a smaller baseline. On tables with hot shards, the baseline usually has to be higher.
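
A minimal sketch of that setup with boto3 (the table name, capacity numbers, and utilization target below are illustrative assumptions, not recommendations):

    import boto3

    # Hypothetical table name and capacity numbers, chosen only to illustrate
    # the "provision a high baseline, let autoscaling handle the rest" pattern.
    autoscaling = boto3.client("application-autoscaling")

    # Register write capacity as a scalable target, with MinCapacity set high
    # enough that burst capacity can absorb the spike until scaling reacts.
    autoscaling.register_scalable_target(
        ServiceNamespace="dynamodb",
        ResourceId="table/my-batch-table",
        ScalableDimension="dynamodb:table:WriteCapacityUnits",
        MinCapacity=500,    # the baseline discussed above
        MaxCapacity=10000,  # ceiling for the batch window
    )

    # Target-tracking policy: add capacity once consumed/provisioned WCU
    # crosses ~70% utilization.
    autoscaling.put_scaling_policy(
        PolicyName="wcu-target-tracking",
        ServiceNamespace="dynamodb",
        ResourceId="table/my-batch-table",
        ScalableDimension="dynamodb:table:WriteCapacityUnits",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 70.0,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "DynamoDBWriteCapacityUtilization"
            },
        },
    )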


It's not entirely uncommon in mathematics to see citations to "personal communication with [insert name of other mathematician here]".


This kind of clause is usually known as a "DeWitt clause"[1], after a University of Wisconsin professor who benchmarked a couple databases, including Oracle (which performed poorly). Oracle/Larry Ellison didn't react well to that and decided to forbid benchmarks.

[1]: https://danluu.com/anon-benchmark/


Database performance depends on so many variables - thread pools, queue sizes, RAM allocated to different purposes, disk layout - the list goes on. There are whole consulting fields that essentially do a random walk through database configuration files looking for better performance, ascending the performance gradient if you will.

So in this world, it's sensible to say "benchmarks are uninformative and misleading - your mileage will almost certainly vary".
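
As a toy sketch of that random-walk tuning loop (the knob names, ranges, and the benchmark_config stand-in are all hypothetical, not any real tool's API):

    import random

    # Hypothetical knobs and ranges; real databases expose hundreds of these.
    SEARCH_SPACE = {
        "thread_pool_size": range(4, 129),
        "queue_size": range(64, 4097),
        "buffer_pool_mb": range(256, 16385),
    }

    def benchmark_config(config):
        """Stand-in for running a real workload against a database started
        with `config`; returns a made-up throughput so the sketch executes."""
        return (config["buffer_pool_mb"] ** 0.5
                + config["thread_pool_size"]
                - config["queue_size"] / 1000)

    def random_walk_tuning(iterations=50):
        best_config, best_score = None, float("-inf")
        for _ in range(iterations):
            # Sample a random point in configuration space...
            config = {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
            score = benchmark_config(config)
            # ...and keep it only if it climbs the performance gradient.
            if score > best_score:
                best_config, best_score = config, score
        return best_config, best_score

    print(random_walk_tuning())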


Benchmarks are easily abused, misused, and misinterpreted. E.g., benchmarks that look at some very specific aspect of query performance get extrapolated to more complex, real-world queries.

Trade-offs are also rarely mentioned alongside benchmark numbers - e.g., great write throughput, but at the expense of what?

It's fun to be cynical about stuff like this, but it's rarely as simple as "Ellison didn't react well to that and decided to forbid benchmarks".


> Oracle/Larry Ellison didn't react well to that and decided to forbid benchmarks.

So you kind of have to wonder why Cognitect is going Oracle on us.

The most obvious explanation is that Datomic just doesn't perform well and they don't want people to know.


Anyone who has done serious performance testing on a DB knows that there's a massive gap between initial findings and a well-tuned system designed with the help of the database maintainers. I've seen some nasty performance out of Riak, Cassandra, SQL, ElasticSearch, etc. But with each of those, once I talked to the DB owners and fully understood the limitations of the system, it was possible to make massive gains in performance.

Databases are complex programs, and if I ever wrote one, it would be infuriating for someone to pick it up, assume it was "just like MySQL" and then write a blog post crapping on it because it failed to meet their expectations.


Yes, benchmarks can give a misleading impression of a database's performance.

So what? Somehow PostgreSQL is doing fine despite that.

Which is worse publicity for Cognitect: people publishing bad benchmarks or Cognitect forbidding benchmarks Oracle style?


Is EdgeDB going to be a Postgres fork, an extension, or a service that just uses Postgres as a storage layer (like Datomic)?


It's closer to the storage layer: postgres is in the core and not directly exposed (i.e. you won't be able to access it).


Does that mean it will be possible to use EdgeDB with hosted Postgres databases, like RDS and maybe Amazon Aurora (in Postgres mode)?


Maybe. This is something that we'll have to figure out later.


It seems more similar in spirit to triplestores, which are pretty neat all things considered. It also reminds me a lot of Datomic, but without the Clojure-ness and immutability.


44 million, according to their paper, and they used 5000 TPUs, which together are capable of 4.6×10^17 operations per second.

(The operations the TPU can run are far simpler than what supercomputers can do, but just for the sake of comparison, the current top supercomputer in the world can do 1.25×10^17 floating point operations per second)
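
Back-of-the-envelope version of that comparison, using only the figures above (the per-TPU number is just the quoted total divided by the TPU count):

    # Figures from the comment above: 5000 TPUs totalling 4.6e17 ops/s,
    # vs. ~1.25e17 FLOP/s for the current top supercomputer.
    tpus = 5000
    total_tpu_ops = 4.6e17            # ops/s across all TPUs combined
    supercomputer_flops = 1.25e17     # floating point ops/s

    print(f"per TPU: {total_tpu_ops / tpus:.2e} ops/s")            # ~9.2e13
    print(f"ratio:   {total_tpu_ops / supercomputer_flops:.2f}x")  # ~3.68x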


> So, using sports as a proxy is not the same as management decisions where there are no well-defined criteria or expected value.

Wouldn't it make his result even stronger? If the thesis fails to hold even in better conditions (like when there is a well-defined expected value), it should also fail to hold in worse conditions.


Aren't bitcoin transaction fees actually higher than what you would pay with banks? I don't know how it works in the US, but I can make free, almost-instant transfers inside Germany with no problems at all, including for online transactions (like Amazon).


For transfers under roughly $500, the fees for a proper bank transfer will be higher than bitcoin's.

SEPA has basically no fee and is usually next-day; with SEPA SCT Inst it will soon be 15 seconds (and no more than 20) at the same fees, after which bitcoin can't compete on that anymore.

