Hacker News | Erethon's comments

Reticulum supports multiple interfaces to transport data; TCP is just one of them. Others are Ethernet, packet radio TNCs (think ham radio), LoRa, stdio/pipes, I2P, etc. More details on some of the supported interfaces: http://reticulum.network/manual/interfaces.html


None of the given descriptions have been too clear about what it is, though.

It appears to not be a drop-in solution for communication like Briar, so why make a comparison here in the first place?

Instead, it appears to be physical-layer-agnostic infrastructure (it doesn't care if it's run over the internet or ham radio) to build tools on top of. So,

* Is it an end-to-end encrypted overlay network like corporate VPN/Tailscale/Hamachi?

* Is it an end-to-end encrypted protocol between two or more endpoints like SSH?

* Is it an end-to-end encrypted messaging protocol between two or more users like OTRv3?

The entire documentation returned zero results for "Tor" or "onion" (routing), so what's the improvement over Briar+Tor?


This is the https://en.wikipedia.org/wiki/Jevons_paradox and it's what always happens in these cases.


It does happen, but not always.

For example, food got cheaper and consumption increased to the extent that obesity is a major problem, but consumption rose much less than the per-farmer productivity gains might suggest.

For image generation, the energy needed to create an image is rapidly approaching the energy cost of a human noticing that they've seen an image — once it gets cheap enough (and good enough) to have it replace game rendering engines, we can't really spend meaningfully more on it.

(Probably. By that point they may be good enough to be trainers for other AI, or we might not need any better AI — impossible to know at this point).

For text generation it's difficult to tell, because e.g. source code and legal code have a lot of text.


Food may be a bit of an outlier: the number of consumers won't change quickly in response, and each person can only eat so much.

When it comes to converting electricity into images and text, there really is no upper bound in sight. People are happy to load the internet up with as much content as they can churn out.


If we assume that text and images are made for human consumption then there is a limit in how much we can consume. In fact I doubt there is much room for our society's per-person media consumption to increase. There is obviously room for growth in fewer people seeing the same content, and room for some "waste" (i.e. content nobody ever sees). The upper bound (ignoring waste) would be if everybody only saw and read content that nobody else has ever seen and will ever see. But if we assume society continues to function as it does the real limit will be a lot lower.

Now maybe waste is a bigger issue with content than with food. I'm not sure. Both have some nonzero cost to waste. It might depend on how content is distributed.


Mm.

I'd say that text is capable of being extremely useful even when no human reads it, because of source code, maths proofs, etc.

But I'm curious: 238 wpm * 0.75 words per token * 16 (waking) hours per day * 83 years * $10.00 / 1M output tokens (current API cost for 4o without batching) means the current cost of making as many tokens as a human can read in a lifetime is $92,300: https://www.wolframalpha.com/input?i=238+words+per+minute+%2...
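That back-of-envelope figure can be checked with a short script; a sketch using only the numbers quoted above (238 wpm, 0.75 words per token, 16 waking hours, 83 years, $10 per 1M output tokens):

```python
# Back-of-envelope check of the "tokens a human can read in a lifetime" cost.
# All inputs are the figures from the comment above.
words_per_minute = 238
words_per_token = 0.75
minutes_per_day = 16 * 60          # 16 waking hours
days = 83 * 365.25                 # 83 years
usd_per_million_tokens = 10.00     # quoted API output price

tokens_per_minute = words_per_minute / words_per_token
lifetime_tokens = tokens_per_minute * minutes_per_day * days
cost = lifetime_tokens / 1_000_000 * usd_per_million_tokens
print(f"{lifetime_tokens:.3g} tokens, ${cost:,.0f}")  # ~9.24e9 tokens, ~$92,354
```

Which lands on roughly the same $92,300 figure.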

With these numbers, a well-written project with even a billion lines of code would be a rounding error even if only a thousand people used any specific such software and none of that was ever shared with what other people wanted to get done.


It's an interesting question for sure. Anecdotally, it seems to me like there's a ton of content thrown online that is rarely, if ever, consumed. From bot-generated blog posts to social media posts, surely some of it is never seen, or viewed only a few times before it gets buried and never seen again.

Market dynamics should push people to stop generating that content if they don't get enough value to justify the cost. In practice, though, it hasn't seemed to happen yet, and we must be past a threshold where there's more content created online than we could ever value.

It'd make for an interesting study, but short of having verifiable data I have to assume we'll continue increasing the rate at which content is created whether the value is there or not.


Yandex image search works really well at finding similar images but it also leads you to some very strange parts of the internet that are exactly what you are describing: bot generated pages that almost no one reads.


The simplest example of what you describe regarding fully bespoke content would be realtime generation of VR feeds. Of course even in VR people would be consuming still more 2D content: the environments are built out of 3D models textured by 2D content at higher resolutions than most viewers will ever closely inspect.

You'd most likely categorize all of the unseen textures or higher-than-needed resolution in your "waste" bucket, and I can't argue with that. But VR still clearly means that there is at least theoretical room for "realtime video generated custom for every viewer, which in turn is composed of even more content sources".


You don't see an end at the level of everyone having it at 60fps (or so) in each eye?


I'm not quite sure what you mean, 60fps would have something to do with output displays but nothing to do with the content. There's no upper bound to how much content people would have LLMs make, whether that content is being consumed on cell phone screens or some kind of in-eye display.


If you generate a new image 60 times per second, that's reasonably described as "60 fps", this is how the output of video game engines has been described for at least 25 years*.

If everyone's doing that all day every day on each eye, that's a reasonable guess of an upper bound: you as a human cannot actually consume more even if you make it.

GANs can already do that speed, but any given GAN is a specialist AI and not a general model; diffusion models are general, but they're slower (best generation speed I've seen is 4-5 frames per second on unknown hardware). LLMs aren't really suited to doing images at all, but can control other models (this is what ChatGPT does when "it makes an image" — it calls out to DALL•E).

* how long I've been paying attention to that, not a detailed historical analysis


Sure, I suppose you could calculate a limit by looking at how many human eyes there are, how many frames per second they can see, and the maximum visible resolution. That still isn't actually a limit on how many images could be made, only how many could be consumed.

That said, if we got to such a massive scale I'd expect us to hit other limits first (electricity available, heat produced, storage space, network transmission, etc.).

Or did I totally misunderstand your example here? I may have misread it completely, if so sorry about that!


> Sure, I suppose you could calculate a limit by looking at how many human eyes there are, how many frames per second they can see, and the maximum visible resolution. That still isn't actually a limit on how many images could be made, only how many could be consumed.

Sure, absolutely. But I can say the same of food, which is why I drew the analogy between them previously.

> That said, if we got to such a massive scale I'd expect us to hit other limits first (electricity available, heat produced, storage space, network transmission, etc.).

Difficult to guess when the quality isn't yet at the right threshold: GANs are already this speed on phone hardware*, so that particular combination isn't bounded by available electrical energy; on the other hand, two years ago I was seeing images for about 3 kJ each, which works out to hundreds of kilowatts for two eyes at 60 fps, which is absolutely going to be a problem… if they were limited to that hardware and that model (though both are moving targets; I'd be very surprised if the unknown hardware I've seen doing 4-5 fps was burning 12-15 kW, but it's not strictly speaking impossible that it really was that power hungry).

* Specifically: on an iPhone 11, BlazeStyleGAN model was generating images in 12.14 ms, which is just over 82 fps — https://research.google/blog/mediapipe-facestylizer-on-devic...
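Those power figures are simple multiplication; a sketch using the 3 kJ/image and frame rates quoted above:

```python
# Rough power check: power (W) = energy per image (J) * images per second.
KJ_PER_IMAGE = 3.0  # ~3 kJ per diffusion image, the figure quoted above

def watts(fps):
    return KJ_PER_IMAGE * 1000 * fps  # J/s == W

print(watts(2 * 60) / 1000, "kW")  # both eyes at 60 fps -> 360.0 kW
print(watts(4.5) / 1000, "kW")     # the observed 4-5 fps -> 13.5 kW
```

So 60 fps per eye at that energy cost really is "hundreds of kilowatts", and the 4-5 fps case lands in the 12-15 kW range.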


The ultimate end-game for image-gen AI is a closed-loop system where a computer can monitor sexual arousal levels and generate the most arousing porn possible for the subject. This would be VERY addictive. Unless people can just become completely immune to all pornographic stimuli.


I'd say the Matrix rather than that; most of us have a refractory period where that would at best do nothing and at worst be actively undesirable.


You're assuming people will create content to consume it, and not just to spam various platforms, competing for attention. Most of it might only be ever consumed by crawlers, if at all.


I think you're missing the broader analogy here; Cheap LLMs == LLMs everywhere. Cheap food == People everywhere.

I'm no Malthusian, but the paradox holds here pretty well.


The population indeed went up, and at the same time the fertility rate is declining. What Malthus was expecting is that more food would just lead to more people on the knife-edge of famine, and we're wildly far from that in most of the world. (What is paradoxical is that the USA is simultaneously very rich, has high obesity, and somehow manages to also have a huge problem with kids going hungry).

The very specific point I'm claiming is that the increased consumption isn't always unbounded.


Why is fertility declining? I posit we are hitting non-food constraints. Political ones. Land use constraints. If you build millions of homes fertility will go up.


In wealthier, modern economies:

* More women work and invest in their own education, and fewer spend time at home as they might in poorer countries, an arrangement that would otherwise facilitate giving birth and investing time in childcare.

* More men and women derive their primary income from work that children cannot easily participate in, e.g. office work or work-from-home computer work, vs. farming or working with one's hands. In many poorer countries it is common practice to have more children at least partially to bolster the labor force around the house.

* Wealthier nations have better access to family planning: contraception, abortion, and pastimes that can meaningfully compete with getting laid in the first place.

Sources: Colleran, H., Snopkowski, K. Variation in wealth and educational drivers of fertility decline across 45 countries. Popul Ecol 60, 155–169 (2018). https://doi.org/10.1007/s10144-018-0626-5 https://link.springer.com/article/10.1007/s10144-018-0626-5

More Work, Fewer Babies: What Does Workism Have to Do with Falling Fertility? - Laurie DeRose and Lyman Stone https://ifstudies.org/ifs-admin/resources/reports/ifs-workis...


There are millions of empty homes in this world.

I'd assume environmental, but there's also more subtle answers than will fit in a comment box — whatever the cause, it has to be near-global.

China's building loads more houses, still has a fertility decline.


Surely, the reasons are multivariate with all kinds of interactions and feedback mechanisms between the variables.

It is really a good example of what natural dimension reducers we are, even when we know it makes no sense. It is like we can't help but reduce things to one explanatory variable.

My favorite is the news headline "The market went up today because of X".


They never say that.

They say: Tesla shares up as revelations surface that the wind is blowing east.


Yes, I forgot to mention the implied qualifiers: homes that meet code, with connected utilities, in places people want to live, and that are not being land-banked.


The fertility rate trends are missing the core point here. Your obesity and hunger examples actually reinforce the Jevons paradox - when a resource becomes cheap enough, we find ways to use it even beyond what seems rational. But more importantly, you're still not getting the original Malthusian comparison: Malthus wasn't predicting that cheaper food would make people eat more (obesity) - he was predicting that cheaper food would lead to more total people. Similarly, cheaper AI won't just make individual AIs consume more - it means AI will be deployed everywhere possible. The parallel is about multiplication of instances, not increased individual consumption.


Image generation isn't cheap enough until we have sites that work like Google Image search, filling the page with image variations nearly instantly and available for free.


We're not a huge distance from that already.

https://arxiv.org/abs/2408.14837

Also, TIL this is generated at 20 frames per second; the best I've used myself was "only" 4-5. Does anyone know the performance and power consumption of a Google TPU?


Bitcoin is a pure example that shows the limit to energy consumption is how much money people have to throw at it. And if that money is thrown into generating more energy, it is a cycle. There are no stomach-size or human-reproduction constraints. We can waste power as quickly as we can generate more.

The only hope is to generate this power greenly.


The existence of examples where it happens by design does not say anything either way about whether it must happen all the time.


Yeah, I am not saying all the time, but I am saying that when it happens it can be less bounded than "human population growth in the early 21st century."


You can get a rough estimate based on unique IPs hitting the RSS feed. Moreover, some of the online feed readers report the number of subscribers of your feed as part of their User-Agent. An example from my blog logs: `"Feedbin feed-id:2688376 - 9 subscribers"`
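A minimal sketch of pulling those counts out of an access log. The log line and the regex are assumptions: readers format their User-Agents differently, so this only targets the common `"<N> subscribers"` shape, with Feedbin's format (shown above) as the example:

```python
import re

# Many hosted readers (Feedbin, Feedly, Inoreader, ...) advertise subscriber
# counts in their User-Agent, e.g. "Feedbin feed-id:2688376 - 9 subscribers".
# Assumes the UA is a quoted field, as in the common combined log format.
SUBS = re.compile(r'"([A-Za-z]\w*)[^"]*?(\d+)\s+subscribers')

def subscriber_counts(log_lines):
    """Return {reader_name: subscribers} extracted from raw log lines."""
    counts = {}
    for line in log_lines:
        m = SUBS.search(line)
        if m:
            reader, n = m.group(1), int(m.group(2))
            counts[reader] = max(counts.get(reader, 0), n)  # keep the latest/highest
    return counts

log = ['1.2.3.4 - - [01/Jan/2024] "GET /feed.xml HTTP/1.1" 200 1234 '
       '"-" "Feedbin feed-id:2688376 - 9 subscribers"']
print(subscriber_counts(log))
```

Combine this with a count of unique IPs on the feed URL for the self-hosted readers that don't report numbers.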


https://blog.erethon.com/

I try to blog about things that I feel I have a good understanding of, and to get into the details. Examples:

- https://blog.erethon.com/blog/2023/06/21/what-happens-when-a... I recently had my Matrix server die on me and this documents my journey on bringing it back from the dead.

- https://blog.erethon.com/blog/2022/07/13/what-a-malicious-ma... An exploration of the powers of a malicious admin in Matrix

- https://blog.erethon.com/blog/2019/11/06/infrastructure-as-c... An older blog post (which needs updating) on how I manage my physical servers and spawn VMs using Terraform and Ansible, to have an IaC setup without the "cloud".


For receiving FM radio a lot of them do, definitely most of the feature phones that have a 3.5mm audio jack (the audio cable doubles as the antenna).

Some phones released around the 2010s also had FM transmitters, for example the [Nokia N8](https://www.gsmarena.com/nokia_n8-3252.php). The use case for these was to be able to listen to the music you had on your phone via your car stereo, since AUX ports and BT sharing weren't common back then.


As others have said, you can pick any video, start watching and you'll just be "up-to-date". Having said that, one of the first videos I watched from start to finish and found easy to follow was the one about implementing the `pledge` syscall https://www.youtube.com/watch?v=-a5hLBuW6tY. There's also an accompanying blog post https://awesomekling.github.io/pledge-and-unveil-in-Serenity...



Wikimedia does this https://grafana.wikimedia.org/

