oatsandsugar's comments | Hacker News

Thesis is "crypto millionaires squeeze other consumers out of high-demand goods; a loss of crypto value reduces demand for those goods and thus benefits other consumers."

The "choose your date by selecting a substring of pi" is absolutely incredible.


I couldn't find my birthday in the first 10 or so pages, so I clicked "Give up" and searched the page for it. It said my pi index was in the 100,000s. I went back to the UI to select it manually, and gave up after minutes of fast clicking without even hitting index 50,000.


How do they prove that it is indeed possible to select any date? :)


By search: it's trivial to find any 8-digit string in the already-known digits of pi. In fact, all 100 million combinations appear within the first ~2 billion digits.
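Something like this (a minimal sketch; assumes you've downloaded a plain-text file of pi's decimal digits, and the filename is made up):

```typescript
// Find a DDMMYYYY date string in the digits of pi.
// Assumes a local text file of pi's decimal expansion (filename is made up).
import { readFileSync } from "node:fs";

function piIndexOf(dateDigits: string, digitsFile = "pi-digits.txt"): number {
  const digits = readFileSync(digitsFile, "utf8").replace(/[^0-9]/g, "");
  return digits.indexOf(dateDigits); // -1 if it isn't within this many digits
}

console.log(piIndexOf("14032015")); // 14/03/2015 as an 8-digit string
```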


But the site only supports up to ~10 million digits! This seems like a serious defect. How am I supposed to select dates before 01/01/1970 or after 31/12/2069?


If you were born after 31/12/2069, I dare say you're the time traveler, so you can just go back in time and fix the UX yourself.


What if it's the date I plan to marry my AI love companion?


[This paper] show[s] that forecasters, on average, over-estimate treatment effects; however, the average forecast is quite predictive of the actual treatment effect.


When I was in venture capital, I did a tonne of research into the Nix ecosystem.

Fast forward to now: a new hire at the startup I work at, of his own volition, implemented a Nix flake on day one at the company. Within the week, a bunch of our engineers were using it.

Super cool to see, mainly because of the decreased frustration in setting up our dev environments.


Thank you mate! Fixing.


Author here: I commented here about how you can use async inserts if that's your preferred ingest method (we recommend it for batch).

https://news.ycombinator.com/item?id=45651098

One of the reasons we use streaming ingest is that we often modify the schema of the data in stream, usually to conform with ClickHouse best practices that aren't adhered to in the source data (restrictive types, denormalization, non-nullable defaults, etc.).
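For comparison, an async insert with the official ClickHouse JS client looks roughly like this (a sketch, not our production code; the table and row shape are made up):

```typescript
// Async insert via @clickhouse/client: ClickHouse buffers small writes
// server-side and flushes them in batches. Table and rows are illustrative.
import { createClient } from "@clickhouse/client";

const client = createClient({ url: "http://localhost:8123" });

await client.insert({
  table: "events",
  values: [{ id: 1, ts: "2024-01-01 00:00:00", payload: "hello" }],
  format: "JSONEachRow",
  clickhouse_settings: {
    async_insert: 1, // buffer on the server instead of creating a part per insert
    wait_for_async_insert: 1, // ack only once the buffer is flushed
  },
});

await client.close();
```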


Author here: this article was meant to highlight how you can optimize writes to ClickHouse with streams.

If you want to insert data directly into ClickHouse with MooseStack, we have a direct insert method that lets you use ClickHouse's bulk-load methods.

Here's the implementation: https://github.com/514-labs/moosestack/blob/43a2576de2e22743...

Documentation is here: https://docs.fiveonefour.com/moose/olap/insert-data#performa...
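Roughly, a direct insert looks like this (a simplified sketch; exact names may differ from the current API, so check the docs above):

```typescript
// Simplified sketch of a MooseStack direct insert. The import path,
// table name, and schema are illustrative; see the linked docs for
// the real signatures.
import { OlapTable } from "@514labs/moose-lib";

interface PageView {
  id: string;
  path: string;
  viewedAt: Date;
}

const pageViews = new OlapTable<PageView>("page_views");

// insert() writes the batch straight to ClickHouse, bypassing the stream
await pageViews.insert([
  { id: "1", path: "/", viewedAt: new Date() },
  { id: "2", path: "/blog", viewedAt: new Date() },
]);
```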

Would love to hear your thoughts on our direct insert implementation!


Timely! We're redesigning our blog; will keep you posted.


Co-author here. We used Debezium because it supports many different databases. Unfortunately, there's no SQLite support; my understanding is that, as an embedded DB, SQLite lacks some of the prerequisites Debezium relies on (e.g. a server process and a replication log to tail).

Have you run CDC from SQLite? Would love to hear how you did it, and to try building a demo with MooseStack.
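In the meantime, a polling workaround would look roughly like this (a generic sketch, not something MooseStack ships; the better-sqlite3 dependency, table, and updated_at column are all assumptions):

```typescript
// Polling-based change capture from SQLite, as a stand-in for log-based CDC.
// Caveat: this misses hard deletes and relies on a maintained updated_at column.
import Database from "better-sqlite3";

const db = new Database("app.db", { readonly: true });
let cursor = 0; // last-seen updated_at (unix epoch seconds)

function pollChanges(): unknown[] {
  const rows = db
    .prepare("SELECT * FROM orders WHERE updated_at > ? ORDER BY updated_at")
    .all(cursor) as Array<{ updated_at: number }>;
  if (rows.length > 0) {
    cursor = rows[rows.length - 1].updated_at;
  }
  return rows; // ship these downstream (e.g. into a stream, then ClickHouse)
}

setInterval(() => console.log(pollChanges()), 1_000);
```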


Hello, thank you for your answer. No, I've never run CDC from SQLite; I'm actually just investigating SQLite, hence my question.


I think my favorite part was the ability to use the same Drizzle TS data models created for Postgres to create tables in ClickHouse.
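For anyone who hasn't read it yet, the rough idea (my own sketch, not the article's code; the fromDrizzle helper is hypothetical shorthand for what the tooling does):

```typescript
// One Drizzle (pg-core) model reused for both Postgres and ClickHouse.
// The Drizzle schema below is standard; fromDrizzle() is a hypothetical
// helper standing in for the article's actual conversion.
import { pgTable, text, integer, timestamp } from "drizzle-orm/pg-core";

export const users = pgTable("users", {
  id: integer("id").primaryKey(),
  email: text("email").notNull(),
  createdAt: timestamp("created_at").notNull(),
});

// Hypothetical: derive the ClickHouse DDL from the same model so the
// OLTP and OLAP schemas can't drift apart.
// const usersCh = fromDrizzle(users); // -> CREATE TABLE users (...) ENGINE = MergeTree
```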

