Hacker News | anentropic's comments

OTOH I feel a bit negative towards companies that stayed on X without at least also having either BlueSky or Mastodon

I'm glad the majority of companies aren't active on nostr. I don't want even more corporate goo in my timeline, and since only a very few companies offer decent social media support in case of issues with their product(s), I'd rather they stick with Bluesky/Mastodon/Threads so I can keep my peace. Got nothing against small shops/makers/artisans that are actively engaging with people and have a real personality. I'm following lots of them myself and purchase their products if shipping costs allow it.

I really hope nostr will have sufficient time to develop its own culture before people inevitably notice that freedom of speech is actually important. I guess people will have to burn a couple more accounts on X/B/M/T/FB before seeing the light.


Staying on X is just bad, regardless of what else you do.

Because access to the internet is inequitably distributed throughout society, it is inherently problematic for any privileged class members (e.g. men, white people) to stay on the internet at all.

What would be driving that trend?

I dropped X and adopted BlueSky & Mastodon, but must admit I find it a bit annoying when projects don't use GitHub... I need to set up a new account to interact with them, and if I star the repo my stars end up spread across multiple services.

I guess the ideal end goal would be if GitHub federated too and then some of that stuff would work.

The appeal of ditching X was obvious but I can't see the same for GitHub at the moment.


On that note there's also https://tangled.org built on atproto which (kind of?) solves that. You have one identity (the same one for all atproto apps) which you use to interact with any tangled repository (including those on self-hosted servers).

With its support for self-hosted CI runners it could also be a good alternative for people looking to move now that GitHub has decided to charge for those.


I really like this atproto model so much.

Having one account/sovereign Personal Data Store that can hold many different kinds of data. Then having many different clients and services that are decoupled from the data, offering all kinds of experiences is just night and day better than everything else. For everyone.

Your account works everywhere & that's awesome. You also have credible exit & can take your account to a different server without disruption, baked in: amazing win for sovereign computing & digital rights, far better than (basically) anything.

People can make cool connected online services, without having to figure out how to host all the data! That's so powerful, so cool (and ActivityPub maybe can decouple someday, but we don't see it yet. The data store and the app go hand in hand, & you end up with an account for each service). It makes it wildly easy to build incredible connected services with fantastically little effort and cost.

That said, I did try to get some of my git repos on https://Tangled.org just today, and alas found that the actual git data needs a "knot" server to do that. And afaik there are no knot servers I can just use. I'd never seen that complexity for an atproto app before! Usually with something like the book reading social app https://bookhive.buzz or the annotation service https://seams.so , just having your regular account is all the data & service you need. Tangled was a surprising contrast, but I hope to be online there sometime soon-ish!


git-ssb was (now is again, really) one of those areas where ssb was vastly superior to atproto since all peers hosted the repos

Social movements don't need to be quantifiably better to take off.

When the relevant audience is bored enough to be open to something new, it only takes a few influential people to tip the scales.

People don't want to be truly revolutionary; that takes actual risk. They want the appearance of being revolutionary with minimal downside and social reassurance.

(w/r/t GitHub there's already enough buzz in the right circles and it will likely happen this year.)


> I find a bit annoying when projects don't use GitHub... I need to set up a new account to interact with them

The same is true in the other direction ("Ugh, this project is hosted on GitHub and I now need to set up an account"), with one major difference: compared to other sites which tend to just accept username + email + password for setup and username + password to log in, it's a huge PITA to set up a GitHub account in 2025 and to log in to an infrequently used account from a logged out state. GitHub won't let you get away with using it in such a simple way.


Yeah, now that github requires 2FA, which means I frequently have to do the phone dance, I'm always wary of clicking a github.com link.

Just installed it...

How are new agents added? Do you have to write a dedicated plugin for each one? Or there's some kind of discovery mechanism?

(I was looking for Copilot, but I guess that will depend on https://github.com/github/copilot-cli/issues/222 ?)


It’s stored statically in the codebase. In the future, I suspect there will be enough compatible agents that there might be a web service to search them.

I think they are working on the Copilot ACP layer. Doubt it will take long.


It's more like just a simple toml file. https://github.com/batrachianai/toad/tree/main/src%2Ftoad%2F... gets you the currently supported ACP clients

And Copilot isn't supported for now because, well, there is no ACP support


I guess it didn't fit with the goal of 'walking' around the world, probably wanted to avoid motorised transport

Yeah I basically always use "web search" option in ChatGPT for this reason, if not using one of the more advanced modes.

Agreed... but it shouldn't need a frozendict for this

IMHO TypedDict in Python is essentially broken/useless as-is

What is needed is TS style structural matching, like a Protocol for dicts
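A minimal Python sketch of the gap being described (names are illustrative, not from the comment): at runtime a TypedDict is just a plain dict with no enforcement, and statically a dict with extra keys is flagged rather than being structurally accepted the way a TypeScript interface would accept a wider shape:

```python
from typing import TypedDict

class Movie(TypedDict):
    title: str
    year: int

def describe(m: Movie) -> str:
    return f"{m['title']} ({m['year']})"

m: Movie = {"title": "Blade Runner", "year": 1982}
print(describe(m))      # Blade Runner (1982)

# At runtime a TypedDict is just a plain dict -- nothing is enforced:
print(type(m) is dict)  # True

# Statically, a dict with an extra key is flagged by type checkers,
# where TS-style structural typing would accept the wider shape:
wider = {"title": "Alien", "year": 1979, "director": "Ridley Scott"}
# describe(wider)  # rejected by mypy/pyright
```

A `Protocol` gives structural matching for attribute access, but there is no equivalent for dict keys, which is what the comment is asking for.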


I am curious - why would you want to keep generating the same perlin noise every frame? why not pre-generate the terrain?
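For what it's worth, a hypothetical sketch of the pre-generation being suggested (none of this is from the project under discussion; `random.random` stands in for a real Perlin function): build the grid once at startup, so each frame is just a cheap interpolated lookup.

```python
import random

random.seed(7)
W, H = 64, 64
# Generated once, not per frame; stand-in values for Perlin noise.
grid = [[random.random() for _ in range(W)] for _ in range(H)]

def sample(x: float, y: float) -> float:
    """Cheap per-frame lookup: bilinear interpolation into the precomputed grid."""
    x0, y0 = int(x) % W, int(y) % H
    x1, y1 = (x0 + 1) % W, (y0 + 1) % H
    fx, fy = x - int(x), y - int(y)
    top = grid[y0][x0] * (1 - fx) + grid[y0][x1] * fx
    bot = grid[y1][x0] * (1 - fx) + grid[y1][x1] * fx
    return top * (1 - fy) + bot * fy

print(0.0 <= sample(12.3, 4.7) <= 1.0)  # True
```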



The funny thing about the last one is that those actions ultimately boil down to invoking their CLI tool (which is pre-installed on the runners) with "gh release create ...", so you can just do that yourself and ignore the third-party actions and the issues that come with them. Invoking an action isn't really any easier than invoking the CLI tool.


Yeah, what really needs to happen with that repo is to put that in the README to use the gh CLI instead of pointing to the third-party action with questionable security policies. If they were accepting PRs for that repo, it would be an easy PR to make.


Is this "super fast" as in "faster than previous Postgres" or as in comparable to duckdb etc?


toyed with pg_lake against our absolute dump of iceberg files (the nerds in my field call it a data lakehouse but engineering already has too many abstractions). it's pretty insane having postgres & the power of duckdb for mega aggregation, i threw a lot of wild windowed queries and aggregations at it and it seemed to really intuitively switch to using the duckdb jujutsu very well.

looking at migrating the rest of our catalog to iceberg now just to have the pg_lake option in our back pocket for future application development. it's so damn cool, as far as dbs go i haven't personally been involved in anything that needed more power than what postgres could deliver with writes. to be able to tack on bigboi analytics on top of it really consolidates a lot for us. im generally pretty cynical of these big saas players acquiring cool stuff but snowflake nabbing crunchydata here (crunchydata = guys who work on some pretty interesting postgres extensions) and helping them push this one to the proverbial finish line and then open sourcing it was really great to see. i was worried when the acquisition went down because this was the major postgres thing i was really hoping someone would deliver, and crunchydata imo seemed to have the best plan outlined that understood the need.


that's good to hear, I'm planning to try pg_lake soon

would love to see more docs about operationalising it

so far it looks like it may be possible to use the Crunchy PGO k8s and have the duckdb part as a sidecar


it's faster than previous Postgres.

e.g. the gender_name example would already be optimized in duckdb via columnar execution and “aggregate first, join later” planning.


DuckDB and other specialized DBs benefit from much more optimized math, data structures, and data storage / memory lookups, I'd assume.

But a 5x increase simply by optimizing the planner is nothing to be ashamed of.


If imports are slow, one should probably look into pre-compiling .pyc files into the Lambda bundle
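A hedged sketch of what that build step might look like (the paths and handler are made up for illustration; note the .pyc files are only used if they were produced by the same CPython minor version as the Lambda runtime):

```python
import compileall
import pathlib
import tempfile

# Illustrative bundle layout: a package dir about to be zipped for Lambda.
pkg = pathlib.Path(tempfile.mkdtemp()) / "package"
pkg.mkdir()
(pkg / "handler.py").write_text("def handler(event, context):\n    return 'ok'\n")

# Byte-compile everything up front so cold starts skip compile-on-import.
ok = compileall.compile_dir(str(pkg), quiet=1)
print(bool(ok), any(pkg.glob("__pycache__/*.pyc")))  # True True
```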


This is a well known issue, and the fix is not to create any boto3 clients at runtime. Instead, ensure they're created globally (even if you throw them away) as the work then gets done once during the init period. The init period gets additional CPU allocation, so this is essentially "free" CPU.

Source: I'm a former AWS employee.
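A sketch of the pattern being described, with the boto3 call swapped for a stand-in so the snippet runs anywhere; in real code the module-level line would be something like `s3 = boto3.client("s3")`:

```python
import time

def make_client():
    # Stands in for boto3.client("s3"): session, config and endpoint
    # resolution are the expensive part in the real case.
    time.sleep(0.01)
    return {"name": "s3"}

# Runs at module import, i.e. during the Lambda init phase,
# which gets extra CPU allocation.
CLIENT = make_client()

def handler(event, context):
    # Every invocation reuses the already-built client.
    return CLIENT["name"]

print(handler({}, None))  # s3
```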


Thanks for citing your sources, I think your source may be out of date, though! The “free init time hack” was killed in August (unless I’m missing something - never used it myself).

https://aws.amazon.com/blogs/compute/aws-lambda-standardizes...


Good callout that it's no longer free. However, you still get extra CPU, and assuming your execution environment isn't reloaded, that init time is amortized across all the invocations for the execution environment.

SnapStart is more widely available, which is the other option for shrinking the billed time spent in init (when I left, only Java SnapStart was available)

