Do people really buy this nonsense? I mean, just this week Sora 2 is creating videos that were unimaginable a few months ago. People writing these screeds at this point seem, to me, to be going through some kind of coping mechanism that has nothing to do with the financials of AI companies and everything to do with their own personal fears around what's happening with machine intelligence.
So, wait, you're saying that these guys just aren't impressed by the AI technology, and that is blinding them to the fact that the AI companies' economics look really good?
That is a laughable take.
The AI technology is very very impressive. But that doesn't mean you can recover the hundreds of billions of dollars that you invested in it.
World-changing new technology excites everyone and leads to overinvestment. It's a tale as old as time.
I’m saying that seeing dubious economics is blinding people from accepting what’s actually going on with neural networks, and it leads to them having a profoundly miscalibrated mental model. This is not like analyzing a typical tech cycle. We are dealing with something here that we don’t really understand and transcends basic models like “it’s just a really good tool.”
I've followed the human-level intelligence stuff for about 45 years, back before it was called AGI, and the basic thesis is kind of anti-religious. It's that human intelligence is basically the result of a biologically constructed computing device, not some god-given spirit, and that as human-built computing devices continue their Moore's-law-like progression they will overtake it at some point.
It's been true and kind of inevitable since Turing et al. started talking about it in the 1950s and Crick and Watson discovered the DNA basis of life. It's not religious, not a mania, not far-fetched.
The angle currently is the opposite. They're positing that the machine has some sort of spirit - see the other poster talking about "unexplained emergent intelligence".
Saying we don’t understand why LLMs are intelligent is both true and completely unrelated to religion. You inserted the word “spirit” so perhaps you are the one conflating the two.
Well, it's sort of true, in that people stick these LLMs together and they produce intelligent-seeming outputs in ways that the people building them don't fully understand. Kind of like how evolution stuck a bunch of biological neurons together without needing to fully understand how they work.
It’s not a religious angle, we literally don’t know how or why these models work.
Yes we know how to grow them, but we don’t know what is actually going on inside of them. This is why Anthropic’s CEO wrote the post he did about the need for massive investment in interpretability.
It should rattle you that deep learning has these emergent capabilities. I don’t see any reason to think we will see another winter.
Correct, my opinions have nothing to do with financials. If I was around for the discovery of fire I wouldn’t be wondering about the impact on the bottom line.
(To be clear, I do agree that AI is going to drastically change the world, but I don't agree that that means the economics of it magically make sense. The internet drastically changed the world but we still had a dotcom bubble.)
I tried that and it made the gameplay worse since it harmed your ability to control the camera precisely with your hand. The goal of this is primarily to enable gameplay, and secondarily to deliver a portal effect.
That’s not the main app, that’s the controller app you install on a phone to use with the main app. The main app must be side loaded, because Google will not approve it in the Play store since it installs third party apps.
I film it and demo it on a tablet since that helps make it clear how it works, but my preferred display device is a Galaxy Fold. It pockets great, and it's a nice, comfortable size when unfolded.
I was going to name it Portal, but I named it PortalVR to minimize confusion.
It’s called PortalVR because there is no better name to describe what it’s like to play VR this way. It is like playing games by looking through a portal you hold in your hand.
"Whatever it is" == the same neurophysical/genetic factors that cause high development of "STEM" intelligence are also associated with an inability to process social cues in a dynamic fashion, somewhat related to high-functioning autism. A strong preference for clear classification that is not self-contradicting.
There are plenty of spaces that have the type of people the thread is describing and yet almost none of them have the type of behavior that the audience of this website loves to exhibit.
Right. Yes, you are correct. I assumed C++ was the language they replaced with Rust because they wanted to write lower level stuff, like their own engines (and they mention doing that).
It makes even less sense to use Rust to replace one of the higher level languages like C#.
I wonder why they didn't go with file system permissions on the .auto.conf file, gracefully handling that as the way to disable the feature. It seems like an obvious solution that doesn't lead to surprising semantics, and the container use case would work fine.
But in this scenario, wouldn't you want to break those tools precisely because they are going around the centralized config from which .conf is supposed to be generated?
But that file gets read at startup. So making changes there is a valid way of making changes.
What you really want is to prevent postgres from writing to that file.
That's more complicated than just making it read-only for everyone. Adding an option to stop postgres from doing what you don't want it to do makes sense to me.
Presumably the acceptable behaviour is pg applying the change but failing to persist it (in the case of a write failure)? A write failure causing the ALTER itself to fail would count as a surprise.
But after disabling ALTER SYSTEM, you can use permissions to avoid local overrides, so this is probably worth adding to the docs.
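For reference, a sketch of the two layers this thread is discussing. The `allow_alter_system` parameter is the PostgreSQL 17 addition being debated; the data directory path is illustrative, and whether locking down the file is appropriate depends on who is supposed to own config changes, as argued above.

```shell
# Layer 1: the new server parameter (PostgreSQL 17+).
# In postgresql.conf:
#   allow_alter_system = off      # ALTER SYSTEM now raises an error

# Layer 2: filesystem permissions, the approach suggested upthread.
# Make postgresql.auto.conf read-only so writes from the server fail too.
PGDATA=/var/lib/postgresql/data   # adjust to your installation
chmod 0444 "$PGDATA/postgresql.auto.conf"
```

Note that, as pointed out above, the file is still read at startup, so a centralized config tool that generates it keeps working either way.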
I mean, kinda. But there's a good chance this is also misleading. Lots of people have been fooled into thinking LLMs are inherently stupid because they had bad experiences with GPT-3.5. The whole point is that the mistakes they make, and even more fundamentally what they're doing, change as you scale them up.
Sounds like you understand WebRTC yet take issue with describing it as a p2p-enabling technology. It is literally the technology that enabled peer-to-peer connectivity in the browser. Without it, browsers would not be able to connect directly to one another over the Internet. These kinds of definition-focused posts on HN are always so tiresome.