The only thing good about it, in my experience, is the agents + tools + out-of-the-box wrappers for various LLM providers and such, but if you need to support multiple providers, why not just roll your own adapter pattern?
It adds cognitive burden, burying the "magic" and the prompts three layers deep inside agents, on top of something that is already cognitively burdensome if you're feeding outputs back into inputs or calling multiple agents. In my opinion, of course.
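For what it's worth, the adapter pattern mentioned above doesn't take much. A minimal sketch, assuming a single `complete(prompt)` interface is all you need; the provider names are placeholders and the adapters just echo instead of calling real SDKs, to keep it self-contained:

```python
from typing import Protocol

class ChatProvider(Protocol):
    """Minimal interface every provider adapter must satisfy."""
    def complete(self, prompt: str) -> str: ...

class OpenAIAdapter:
    # A real adapter would call the provider's SDK here;
    # this one just echoes, so the sketch runs standalone.
    def complete(self, prompt: str) -> str:
        return f"[openai] {prompt}"

class AnthropicAdapter:
    def complete(self, prompt: str) -> str:
        return f"[anthropic] {prompt}"

def get_provider(name: str) -> ChatProvider:
    # Swapping providers is a one-line registry change, with no
    # framework between you and the prompt.
    registry: dict[str, ChatProvider] = {
        "openai": OpenAIAdapter(),
        "anthropic": AnthropicAdapter(),
    }
    return registry[name]

print(get_provider("openai").complete("hello"))  # prints: [openai] hello
```

The point being: the prompt stays one function call away, not three layers deep.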
I have a Valve Index and I’ve watched movies in a virtual cinema many times, most of those with friends. It’s immersive but becomes a bit of a sweat box after a few hours. The FOV and resolution are lacking, and I can’t use it for virtual monitors. I can see this device fixing most of those problems if the specs hold up. But I don’t see myself taking it out in public; that’s laughable. And I’d only drop that kind of money if I were sure the ecosystem is relatively open.
This may be "unfair", but how does a social media company that more than doubled its headcount almost three years ago, to more than 2,000 employees, not have a WCAG/accessibility team or the like? How is accessibility the one thing that made spez take two steps back? Hell, just by existing you’ve probably gotten an ADA shakedown or two in California. So what the hell? Is it true that the mobile app is so bad that it outright breaks VoiceOver?
Love pglogical. I was thinking it’d be nice to orchestrate this setup using pglogical itself, too. I’d attempt it if there is interest, especially for a bi-directional replication setup.
It’s not your content, it’s theirs. Here comes the part where they sell access to the firehose and historical data, instead of selling NFTs and making the mobile app worse like they have for the last few years, because that’s where the “value” is now. The money taps have run dry, except for AI. Sorry.
It's OP's content, which they licensed to Reddit under the terms of its TOS. I'm curious whether the generic "we have the right to do anything we want with your content" clause found in most social media sites' TOS has ever really been challenged legally. To an extent it's fair use, but it can also be seen as a privacy overreach, depending on the content that was deleted.
I think they have, and they just don’t give a shit. The TikTok/Twitter/political-hot-take repost sewer of /r/all can practically be automated. The value they see is advertising to those who consume that drivel, plus high-value enterprise firehose access, not (say) modifying the API to serve ads to users of /r/AskHistorians.
If Spez had an honest bone in his body, he’d have just said they don’t want the API used for third-party clients anymore, whether for the plebs or the “landed gentry”, and that they want to shift toward the enterprise focus of reselling that firehose of garbage, because apparently you can make an LLM out of it! The money has stopped and we're not profitable, so the time and capital required to add advertisements and targeting to the public API, versus whatever crap they’ve built out over the last few years, doesn’t make any financial sense.
When they can just sell the firehose of garbage instead. Of course.
A “biological neural network” in a petri dish that has reorganized (been trained) to play Pong by means of electrical stimuli is not conscious. A slime mold that moves away from the light and “solves mazes” is also not conscious.
It is also my (relatively uninformed) understanding that a perceptron can’t really approximate a “neuron” beyond being loosely inspired by how neurons in the visual cortex operate. For that you need a DNN; human neurons are orders of magnitude more complex than “artificial neurons”, which share only a name and a slight inspiration.
All of this is just regression-based function approximation in the end; there’s no need to grasp for a ghost in the machine or anything spooky. It’s just statistics, not consciousness.
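On the perceptron point: a single perceptron is a linear classifier, and famously cannot represent even XOR (Minsky and Papert's classic result). A toy brute-force check over a coarse weight grid, which is an arbitrary choice here, illustrates it:

```python
import itertools

# XOR truth table: the classic example of a function no single
# linear threshold unit can compute.
xor = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def perceptron_fits(w1: float, w2: float, b: float) -> bool:
    """Does the threshold unit w1*x1 + w2*x2 + b > 0 reproduce XOR?"""
    return all((w1 * x1 + w2 * x2 + b > 0) == bool(y)
               for (x1, x2), y in xor.items())

# Brute-force a coarse grid of weights and biases; since XOR is not
# linearly separable, no combination can succeed (the grid is coarse,
# but the impossibility holds for all real weights).
grid = [i / 2 for i in range(-8, 9)]
found = any(perceptron_fits(w1, w2, b)
            for w1, w2, b in itertools.product(grid, repeat=3))
print(found)  # prints: False
```

An extra hidden layer (a two-layer MLP) fixes this, which is roughly what "you need a DNN" gestures at.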
You say it is not conscious; that's fine. I am asking you to provide evidence for why it is not, given that conscious life is an emergent system like these systems are. I am looking for a reasoned argument about what is different.
Because regression-based function approximators can only "fit the data." That's the difference. They are mathematical constructs that do not have experiences, preferences, or any form of sentience. To assume that such architectures can, and potentially do, or that those things could simply emerge given enough weights or layers, is to anthropomorphize the model. Which humans love to do.
Human or animal consciousness is an emergent phenomenon that entails the ability to experience subjective states: emotions, self-awareness, and so on. It is not just about processing information but involves qualitative experiences and the “what it is like” aspect of being.
When humans or animals feel pain, there is a subjective experience of suffering that is inherently tied to consciousness. The importance we assign to events, objects, or experiences is inherently based on how they impact our conscious experiences. The worth of things big or small is contingent upon the emotions or feelings they evoke in us.
In contrast, a regression-based function approximator does not have preferences, emotions, or experiences.
When you decide to lift your hand, there is a conscious experience involved. You have an intention and a subjective experience associated with that action. A regression-based function approximator, on the other hand, does not “decide” anything in the experiential sense. It simply produces outputs from its inputs and from weights adjusted by pre-training and perhaps RLHF. There is no intention, no subjective experience, and no consciousness involved.
There are no qualia. To put it simply: an LLM could output text that makes you "believe" it has preferences and subjective experiences. But there's nothing there, just cognitive artifacts of the human beings in its corpus. Does an LLM have recursive self-improvement? Does it have self-directed goals? Does it have any of that? No. It's a predictor. LLMs are not sentient. They have no agency. They are not conscious.
Okay, do you have numbers for that? (honest question)
Most "statistics" I have seen were gathered via a Reddit survey, which third-party apps can't display due to API limitations. So all the users saying "yes, I use a third-party app" did so either via their mobile or workstation browser (where you have to authenticate again), which is a hassle most people (me included) aren't willing to go through for some random Reddit survey.
They pretty much force you to use the app if you are on mobile (nag screens at best). Maybe there are ways around that, but your average Andy won’t bother.