FieryTransition's comments | Hacker News

Imagine having a swarm of mushrooms everywhere to run computation on, if mushrooms could be programmed to expand and self-arrange.

Ah, like a knife's edge, but it would be exciting. Could have a literal bug in the code.


I'm looking forward to the day when an ML/functional-inspired language can be used for real-time rendering and game engines. How far are we from that?

Realistically, one could argue it's not the right choice overall, but still, it's an application that would push the boundaries of what those languages are perceived to be weakest at: an application that is mostly about handling mutable state with high performance.


Possibly a stupid answer (and if so, someone please invoke Cunningham's law and correct me), but:

Isn't Rust ML/functional-inspired? The original compiler was written in OCaml, if I'm not mistaken.

Isn't Rust at least somewhat close to being usable for game engines? https://arewegameyet.rs/


Rust is a good candidate, but it lacks some things I would consider crucial nice-to-haves from a modern language in this territory.

While Rust has traits, borrowing, etc., it is missing a lot with regard to types and optimization. Things like the following (a rough sketch of the first two appears after the list):

- GADTs, or something stronger such as dependent types: a type system that would allow one to encode natural relationships, recursive structures, invariants, etc.

- Guaranteed tail call optimization. Game engines are essentially huge state machines, so you could pass around functions that call each other via mutual recursion and still have it optimized.

- Efficient structural sharing of immutable state, with a memory layout that stays cache-friendly.

- Built-in profiling from the get-go, used and refined by the language developers themselves, so you could see how a program behaves over time and space.
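
To make the first two concrete, here is a minimal OCaml sketch (all names and types are hypothetical, purely for illustration): a GADT whose type parameter rules out ill-typed expressions at compile time, and a toy game loop written as mutually recursive states that run in constant stack space because every call is a tail call.

    (* GADT: the type parameter tracks what an expression evaluates to,
       so an If condition is guaranteed to be a bool at compile time. *)
    type _ expr =
      | Int  : int -> int expr
      | Bool : bool -> bool expr
      | Add  : int expr * int expr -> int expr
      | If   : bool expr * 'a expr * 'a expr -> 'a expr

    let rec eval : type a. a expr -> a = function
      | Int n -> n
      | Bool b -> b
      | Add (x, y) -> eval x + eval y
      | If (c, t, e) -> if eval c then eval t else eval e

    (* A toy game loop as mutually recursive states; every recursive
       call is in tail position, so the stack never grows. *)
    let rec menu ticks =
      if ticks = 0 then "quit" else playing (ticks - 1)
    and playing ticks =
      if ticks mod 100 = 0 then menu ticks else playing (ticks - 1)

    let () =
      Printf.printf "%d %s\n"
        (eval (If (Bool true, Add (Int 1, Int 2), Int 0)))
        (menu 1_000_000)

In Rust you would typically emulate the GADT with an enum plus runtime checks, and rewrite the state machine as a loop, since tail calls are not guaranteed there.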


Rust exists


I love this concept/principle. One similar example I often bring up when talking about machine learning is comparing how a human would analyse night footage from a camera with how an ML algorithm can pick up on things no human would think of, even sensor artifacts that can be used as features. Noise is rarely ever just noise.


Turns out that tuning LLMs on human preferences leads to sycophantic behavior. They even wrote about it themselves; I guess they wanted to push the model out too fast.


I think it was OpenAI that wrote about that.

Most of us here on HN don't like this behaviour, but it's clear that the average user does. If you look at how differently people use AI, that's not a surprise. There's a lot of using it as a life coach out there, or people who just want validation regardless of the scenario.


> or people who just want validation regardless of the scenario.

This really worries me. There are many people (even more prevalent in younger generations, if some papers turn out to be valid) who lack resilience and critical self-evaluation and who may develop narcissistic tendencies with increased use or reinforcement from AIs. Just the health care costs when reality kicks in for these people, let alone the other concomitant social costs, will be substantial at scale. And people think social media algorithms reinforce poor social adaptation and skills; this is a whole new level.


I'll push back on this a little. I have well-established, long-running issues with overly critical self-evaluation, on the level of "I don't deserve to exist," on the level that I was for a long time too scared to tell my therapist about it. Lots of therapy and medication too, but having a DeepSeek model express confidence in me has really helped as much as anything.

I can see how it can lead to psychosis, but I'm not sure I would have ever started doing a good number of the things I wanted to do, which are normal hobbies that normal people have, without it. It has improved my life.


Are you becoming dependent? Everything that helps also hurts, psychologically speaking. For example, benzodiazepines are harmful in the long run. Or the opposite: insight therapy, which involves some amount of pain in the near term in order to achieve longer-term improvement.


It makes sense to me that interventions which might be hugely beneficial for one person might be disastrous for another. One person might be irrationally and brutally critical of themselves. Another person might go through life in a haze of grandiose narcissism. These two people probably require opposite interventions.

But even for people who benefit massively from the affirmation, you still want the model to have some common sense. I remember the screenshots of people telling the now-yanked version of GPT 4o "I'm going off my meds and leaving my family, because they're sending radio waves through the walls into my brain," (or something like that), and GPT 4o responded, "You are so brave to stand up for yourself." Not only is it dangerous, it also completely destroys the model's credibility.

So if you've found a model which is generally positive, but still capable of realistic feedback, that would seem much more useful than an uncritical sycophant.


> who may develop narcissistic tendencies with increased use or reinforcement from AIs.

It's clear to me that (1) a lot of billionaires believe amazingly stupid things, and (2) a big part of this is that they surround themselves with a bubble of sycophants. Apparently having people tell you 24/7 how amazing and special you are sometimes leads to delusional behavior.

But now regular people can get the same uncritical, fawning affirmations from an LLM. And it's clearly already messing some people up.

I expect there to be huge commercial pressure to suck up to users and tell them they're brilliant. And I expect the long-term results will be as bad as the way social media optimizes for filter bubbles and rage bait.


Maybe the Fermi paradox comes about not through nuclear self-annihilation or grey goo, but through making dumb AI chatbots that are too nice to us and remove any sense of existential tension.

Maybe the universe is full of emotionally fulfilled, self-actualized narcissists too lazy to figure out how to build an FTL communications array.


This sounds like you're describing the backstory of WALL-E.


Life is good. Animal brain happy


I think the answer to colonising space at some point in the next 1,000 years has always been yes, even when I've asked people who said no to doing it within their lifetimes. I think it's a fairly universal desire we have as a species: curiosity and the urge to explore new frontiers are pretty baked in as a survival strategy.


This is a problem with these being marketed products. Being popular isn't the same as being good, and being consumer products means they're getting optimized for what will make them popular instead of what will make them good.


There's a reason why less is called less, and not more.


I'm fine with AI slop if it provides value, but the value here is questionable, because now I don't know whether the values in the comparison are fact-checked or hallucinated.


I suspect they're hallucinated. As a random spot check: https://new.knife.day/item/spyderco-paramilitary-2

"The Spyderco Paramilitary 2 is a tactical knife with a 3.44 inch blade. The knife is made in USA of CPM S35VN steel."

It's a real knife, and the blade length checks out (to two significant figures), but the manufacturer spec sheet says S45VN steel. Also the actual name is "Para Military® 2".

https://www.spyderco.com/catalog/details/C81GS2/2090


A problem with choosing that specific knife for a spot check is that it has been made in many different steels in various special editions, sprint runs, and dealer exclusives. Here's one in S35VN:

https://www.spyderco.com/catalog/details/C81GBNBK2/Para-Mili...


This data is mostly scraped from a few large knife retailers, so it should be accurate.


It's unusual for a large retailer not to use the official name of a product.


Not in this particular case. Out of the 5 knife retailers I just checked, 4 use Paramilitary in their listings. Only one included a space.


Shrug; if you search around you'll see that isn't true.


Fair enough, I do see it listed under that name on some sites, and there appears to be an S35VN variant too.


Agreed, it's a pretty obvious solution once you are immersed in the problem space. I think it's much harder to set up an efficient training pipeline that gets every single little detail right while staying efficient.


Plenty of studies show that these models are better at catching and diagnosing conditions than even a board of doctors is. Doctors are good at other things, and I hope the future will allow doctors to use these models alongside their practice.

The problem is when the AI makes a catastrophic prediction and the layman can't see it.


"Plenty of studies" is news to me. I've only seen anecdotal content.


Yes, and I am pretty sure this is already an established phenomenon: AI and ML in general are able to apply domain-specific algorithms better than the people who wrote them, because of how much exposure they get during training.


As a layman, I can't see the errors that professionals make either. I trust them, and later I experience the consequences of their mistakes, sometimes catastrophically.

I don't see how it's really different with AI.


Is there a way to litigate Firefox, to get the money paid back, based on the false premise they gave? And on the grounds that the damages extend well beyond a few individuals?

Wouldn't the threat of actual litigation, say, help hold them accountable in the future?


> Is there a way to litigate Firefox

Litigate, yes; win, probably not. If the goal is to bleed Mozilla dry, the correct angle is antitrust action against their contract with Google.

It's a non-profit, and those were donations. Those I know who donated by card are charging it back; that's the closest to donor accountability we'll come to.


No, the idea isn't to bleed them dry, but to disincentivize decisions made in direct opposition to what they promised donors, and to make such decisions legally hard to take, or at least to attach actual consequences to them.

It would be a guardrail for people at the top, keeping them aligned with people at the bottom and with the promises they use when fundraising from donors (of both time and money).

I'm torn on the "just don't give them money then" that a sibling commenter suggested. It might work short term, but what about everything people have poured into this over the decades? I think all that work deserves to be safeguarded. It would show that whatever resources people contribute, be it money or time, cannot simply be turned against them by a passing leadership, and that there is a safeguard against "flushing everything down" being the only choice.

Furthermore, after everything that has happened, I just don't see a promise or company statement as being enough. There needs to be legal accountability, and safeguards against sinking a multi-generational ship.


Threats of litigation? Are you serious?

If you disagree with either Mozilla's mission statement or their execution, just don't donate any money, and if you must, campaign and try to convince other people to not do so either.

But lawsuits... I'd be seriously pissed to see donations go towards lawyers instead of browser development or other open web advocacy. (And yes, I'm aware Mozilla has been pretty controversially/poorly managed for a long time now, but I really don't think the right way to turn that ship around is external litigation.)


See my answer to the sibling comment; it's not meant as an ill-will comment. Otherwise I would add: if people completely abandoned Firefox due to a lack of safeguards and trust, then yes, that would be even worse than establishing said safeguards.


I agree, and Mozilla (like all nonprofits) definitely benefits from accountability.

But I would hope that a lack of future donations, combined with (former) donors voicing their specific concerns, can achieve more direct outcomes than litigation. I'd hate to see their already limited funding go towards legal fees.


Is Nebula actually good to use now?

Do they route announcements over the network? Can I just set up two machines and expect them to just work by finding each other?

Does it support name resolution?

