osigurdson's comments | Hacker News

https://nats.io

Not a drop-in replacement, but worth looking at.


I've always found this sound, rational, ROI-driven approach to product management a little off the mark. Software isn't like real estate or investing in T-bills: you don't invest $X in development and get a nice 10% annualized return over the next 10 years, however seductive such thinking can be.

It is largely a "hits" business where 1% of the activities you do produce 99% of the revenue. The returns are non-linear, so there should be almost no focus on estimating the inputs. If your feature only makes sense if it can be done in 3 months, but doesn't make economic sense if it takes > 6 months, delete the feature.


From 1950 to 2005(ish) there were a small number of sources, due to the enormous moat required to become a broadcaster. From 2005 to 2021, you could mostly trust video, as the costs of casual fakery were prohibitive. Now that the cost of producing fake videos is near zero, I suspect we will return to a much smaller number of sources (though not as small as in the pre-YouTube era).

Some of the “smaller sources” also distorted facts.

We might even have fewer than before: between Internet commentators and the loss of confidence caused by AI, real journalism may not be as highly valued as it was before the Internet…


It will be entirely about trust. I don't think fakery is worth it for any company with a > $1B market cap, as trust is such a valuable commodity. It isn't like we are just going to have a single state broadcaster or something like that (at least, I hope not). However, it is going to favour larger / more established sources, which is unfortunate as well.

We're also seeing a barrage of commercials featuring AI-generated animals talking like people. It's getting old.

You’re seeing commercials?

There’s your problem.


OTOH product placement is your "friend".

There will be people who care about trusted and reliably accurate news sources, and at least some of them are willing to pay for them. Think 404 Media.

But there are also people who don't want their news to be "reliably accurate", and who watch/read news to have their own opinions and prejudices validated, no matter how misinformed they are. Think Fox News.

And there are way, way more people who only consume "news" on algorithmically tweaked social media platforms, where driving "engagement" is the only metric that matters, and "truth" or "accuracy" are not just lower priorities but completely irrelevant to the platform owners and hence their algorithms. Fake ragebait drives engagement, which drives advertising profits.


Suppose that I care about trustworthy and reliably accurate news sources and am willing to pay. How can I distinguish which ones are trustworthy and reliable? No offense to the folks at 404 Media, but I've never met a single one of them, and I have no reason to believe that they wouldn't lie to me for money. You clearly have your own prejudices and biases about which media organizations are honorable and which are not, which you're wrapping up as if it's about a "truthfulness" that you couldn't possibly actually verify.

Thanks! Yes, if you open the side panel there is a tags area where you can filter by remote-<region> or onsite-<region>. The LLM riffs a little with these. If there is something specific you would like, I'm happy to make the instruction tighter.

He mentioned in an interview that HashiCorp was just a corporate entity he had used as a teenager to do some contracting here and there. He and the other founder weren't that keen on using it, but the name stuck.

At least Go didn't take the dark path of having async/await keywords. In C# that is a real nightmare, and it forces you into sync-over-async anti-patterns unless you're willing to rewrite everything. I'm glad Zig took this "colorless" approach.

Where do you think the Io parameter comes from? If you change some function to do something async, suddenly you require an Io instance. I don't see the difference between having to modify the call tree to be async vs modifying the call tree to pass in an Io token.

Synchronous Io also uses the Io instance now. The coloring is no longer "is it async?"; it's "does it perform Io?"

This allows library authors to write their code in a manner that's agnostic to the Io runtime the user chooses: synchronous, threaded, evented with stackful coroutines, or evented with stackless coroutines.


The interesting question was always “does it perform IO”.

Rust also allows writing async code that is agnostic to the async runtime used. Subsuming async under Io doesn't change much imo.
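For a minimal sketch of what I mean (the retry helper and all of its names are made up for illustration, not taken from any particular library), an async library function can stay generic over the futures it is handed and never name an executor:

    // Illustrative only: an async helper that is runtime-agnostic.
    // It only awaits the futures produced by `op`, so the caller can drive
    // it with tokio, smol, async-std, or a hand-written executor.
    pub async fn retry<F, Fut, T, E>(mut op: F, attempts: usize) -> Result<T, E>
    where
        F: FnMut() -> Fut,
        Fut: std::future::Future<Output = Result<T, E>>,
    {
        let mut last_err = None;
        for _ in 0..attempts {
            match op().await {
                Ok(v) => return Ok(v),
                Err(e) => last_err = Some(e),
            }
        }
        // Assumes the caller passes attempts >= 1; a sketch, not hardened code.
        Err(last_err.expect("attempts must be >= 1"))
    }

Only whoever ultimately polls the future has to pick an executor, which is roughly the same separation Zig is going for with Io.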

Except that now your library code has lost context on how it runs. If you meant it to be sync and the caller gives you a multi-threaded Io, your code can fail in unexpected ways.

How so? Aside from regular old thread-safety issues, that is.

This is exactly the problem: thread safety. The function being supplied with std.Io needs to understand which implementation is being used in order to take precautions with thread safety, in case a std.Io.Threaded is used. What if the function was designed with synchrony in mind? How do you prevent it from paying a penalty guarding against a threaded version of IO?

The function being called has to take thread safety into account anyway, even if it doesn't do IO. This is an entirely orthogonal problem, so I can't really take it seriously as a criticism of Zig's approach. Libraries in general need to be designed to be thread-safe, or to document otherwise, regardless of whether they do IO, because a calling program could easily spin up a few threads and call into them multiple times.

> What if this function was designed with synchrony in mind, how do you prevent it taking a penalty guarding against a threaded version of IO?

You document it and state that it will take a performance penalty in multithreaded mode? The same as any other library written before this point.


In addition to the search tools mentioned above, feel free to use https://nthesis.ai/public/hn-who-is-hiring. It has text and semantic search, chat, extraction of data from alternate viewpoints (e.g. business / role), and lets you visualize a semantic map of those things. I hope it helps!

The worst use of AI is "content dilution", where you take a few bullet points and generate 5 paragraphs of nauseating slop. These days, I would gladly take badly written content from humans, full of grammatical errors and spelling mistakes, over that.

> generate 5 paragraphs of nauseating slop

Which then nobody will ever read; they'll just copy it into the AI bot to summarize into a few bullet points.

The amount of waste in this back-and-forth game is quite staggering.


> Which then nobody will ever read; they'll just copy it into the AI bot to summarize into a few bullet points.

Which more often than not will lose or distort the original intention behind the first 5 bullet points.

Which is why I avoid using LLMs for writing.


It's pretty awesome that we now have nondeterministic .Zip

/dev/yolo

We have non-deterministic compression at home

I suspect the text alone would be a lot smaller. Embeddings add a lot: 4 KB or more regardless of the size of the text.
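(For illustration, assuming a typical 1,024-dimension float32 vector, that's 1,024 × 4 bytes = 4 KB per embedding before the text itself.)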

At first, I was thinking the same, but then realized this is over a full page of code. It isn't an insane rule of thumb at all.

At least we aren't talking about "clean code" levels of absurdity here: 5-20 lines with 0-2 parameters.

