I wouldn't be surprised if they don't. Valve don't want to sell hardware, they want to sell games. They only make hardware as flagships for new markets, then they want other hardware manufacturers to take over.
The Legion Go is more powerful and has a nicer screen, but it's heavier, boxier, and has worse battery life than the Steam Deck.
Valve's moving into hardware more than ever right now, not away from it. They've already said multiple times that a Deck 2 is on the cards, but only when there's enough of a hardware bump to make it make sense as a product. Slapping a slightly newer CPU in there and calling it a Steam Deck 2 isn't what Valve are about.
They definitely are working on it. They announced the Steam Machine, the Steam Controller, and the Steam Frame (a standalone VR headset with seamless screen sharing from a PC), and in the reveal video the first thing they rather coyly say is “we’d love to share information about our next Steam Deck, but that’s for another day!” before announcing a bunch of other cool stuff.
I feel like when someone tells me a fact, that fact goes into a kind of holding space, where I apply a filter of 'who is this person telling me this, and what does that mean for the thing they're telling me?'. There's how well I know them, there are the other beliefs I know they have, there's their professional experience and their personal experience. That fact then gets marked as 'probably a true fact' or 'Mark believes in aliens'.
When I use ChatGPT I do the same before I've even asked for the fact: how common is this problem? How well known is it? How likely is it that ChatGPT both knows it and can surface it? Afterwards I don't feel like I know something; I feel like I've got a faster broad idea of what facts might exist and where to look for them, a good set of things to investigate, etc.
The important part of this is the "I feel like" bit. There's a fair and growing body of research showing that the "fact" is more durable in your memory than the context, and over time, across a lot of information, you will lose some of the mappings and integrate things you "know" to be false into your model of the world.
This more closely fits our models of cognition anyway. There is nothing really very like a filter in the human mind, though there are things that feel like one.
Maybe, but then that's the same whether I talk to ChatGPT or a human, isn't it? Except with ChatGPT I can instantly verify what I'm looking for, whereas with a human I can't do that.
I wouldn't assume that it's the same, no. For all we knock them, unconscious biases seem to get a lot of work done; we do all know real things that we learned from other unreliable humans, somehow. Not a perfect process at all, but one we are experienced at and have lifetimes of intuition for.
LLMs seem like people but aren't, and in some ways they specifically carry a lot of the signals of a reliable source, so I'm not sure how these processes will map. In fact, I'm skeptical of anyone who is confident about it either way.
> The mental motion of “I didn’t really parse that paragraph, but sure, whatever, I’ll take the author’s word for it” is, in my introspective experience, absolutely identical to “I didn’t really parse that paragraph because it was bot-generated and didn’t make any sense so I couldn’t possibly have parsed it”, except that in the first case, I assume that the error lies with me rather than the text. This is not a safe assumption in a post-GPT2 world. Instead of “default to humility” (assume that when you don’t understand a passage, the passage is true and you’re just missing something) the ideal mental action in a world full of bots is “default to null” (if you don’t understand a passage, assume you’re in the same epistemic state as if you’d never read it at all.)
> Afterwards I don't feel like I know something, I feel like I've got a faster broad idea of what facts might exist and where to look for them, a good set of things to investigate, etc.
Can you cite a specific example where this happened for you? I'm interested in how you think you went from "broad idea" to building actual knowledge.
Sure. I wanted to tile my bathroom, and from ChatGPT I learned about laser levels, ledger boards, and levelling spacers (I'd only seen those cross-corner ones before).
I guess. I also used it to check the side effects of coming off prednisolone, and it gave me some areas to look at. I've used it a bunch to check out things around kidney transplants, and everything I've verified has been correct.
I spent a year in a bunch of airbnbs and every time there was an induction hob it had at least one of these issues. I really like them otherwise but the buttons are just so bad.
For me this is the other way round. When I was a (physics) student I had a very, for lack of a better word, "practical" visualization in my head - what I needed to understand what I was studying. There was a lot of maths too, visualized.
Today, 30 years later, I have vivid representations of calligraphy or art, especially as I fall asleep. I fall asleep within minutes at worst, so I cannot really take full pleasure in watching these images, and during the day I am too surrounded by sources of sound, images, etc. to meaningfully repeat the exercise.
The _absence_ of visual imagery is binary: you cannot see images at all or, to whatever extent, you can. Those who do have any mental imagery at all, however, fall on a scale. There are numerous studies of certain real downsides to aphantasia, notably tied to episodic memory, which don't seem to be present in those simply with diminished visual imagery.
No. I’m remembering the Eiffel Tower as a very specific moment when I saw it the last time I went to Paris, but it’s more like a description of the scene.
Not really a description though, that seems… slow? The elements are all there just not in visual form.
A simple test I've seen mentioned is, ask someone this: “imagine a car, a fast car, zipping through a windy road… ok? (pause) now, what color was the car you saw?”
If you even need to think about it, you hadn't seen it.
As a non-aphantasia person, this just seems like a really, really bad "test".
Famously, there's a psychology experiment where a person in a gorilla costume walks through the middle of a scene and beats their chest before walking off the other side of the screen, but people who've been given a challenge of tracking a ball being passed around will completely miss the gorilla. They'll laugh in shock on watching the same video a second time, amazed that they didn't "see" the gorilla on first viewing when their attention was on the ball.
In your simple test, focus is going to be drawn to other components - "fast", "zipping" and "windy" make me pay attention to the curves of the road, the wheels, the trees or cliffs causing the road to wind. The color of the car is irrelevant, so I don't pay attention to it.
I can't tell you what color the car was, but when I watched the gorilla video (without knowing in advance about it) I didn't know a gorilla had walked through the video either.
I believe both that aphantasia may be a real thing, and that the vast majority of discussion about it online is plagued by so much imprecision and variety in use of language that it can be hard to say how many people who think they may have it, actually do.
Consider attempts in this very thread to compare conscious visualization to visualization in dreaming. Someone who isn't in a critical frame of mind or doesn't know about the limitations of vision in dreams and how our brains trick us about dream-sight (or the fairly different limitations of real vision and how our brains also trick us about that, as you mention) may follow a train of thought like, "well, I 'see' just fine in dreams, and my conscious 'mind's eye' is very similar to that, so sure, by the transitive property, I can 'see' about as well when I visualize as I actually see things with my real eyes"
Me, I go "well dream vision for approximately everyone is total shit but with a layer of trickery on top, and my 'inner eye' is similar to that except with the trickery dialed way down so I can tell where the seams are and if I try I can be aware of when I've just invented some detail that was 'always there' but actually wasn't a moment earlier and I can tell that I'm not actually seeing with my eyes (unlike a dream, where I think I'm 'seeing'), so yeah those two are pretty close for me, and the ways in which they differ are basically just how much my brain's lying to me so arguably aren't 'real' differences anyway, but both are entirely unlike actually seeing, so no, I don't 'see' when I visualize the same way as I 'see' with my eyes, though it is close to how I 'see' in a dream except I'm less-fooled about how bad it is"
... and I propose that these two responses could come from people with identical actual capacity for mental visualization.
When one of the former meets one of the latter, it might end with the latter thinking they have aphantasia, or at least leaning further in that direction, without any difference in their actual experience of or capacity for visualization.
....
I've seen a supposed set of autism test questions (I don't know if they're really used in autism diagnostics) that include something like "would you rather go to a party, or stay home and read a book?" and supposedly the "autistic" indicator is asking follow up questions or excessive hesitation. Meanwhile I'm very sure you could find people who instantly answered "go to a party" but actually choose that far less often when presented with the real choice involving those two things (necessarily with a lot more details and context filled in). I don't think they're lying or deceiving themselves! I think they're regarding the question very differently from how some others do. I think something similar is going on here, with two "tribes" with different perspectives on the question itself trying to communicate and talking right past one another, leading to much confusion.
(Meanwhile, I do think it's entirely possible aphantasia is real, I just also strongly suspect a lot of the people who've been led, by online discussion, to believe they're far from the median in this regard, actually aren't)
As mentioned elsewhere, researchers have done brain scans while asking people to imagine something, and for the majority of people the visual cortex lights up, but for a small number of people the visual parts of the brain are not so active.
This is very much a real thing, but largely goes unnoticed because it doesn’t really affect anything, except for people going about their lives thinking that the word ‘visualise’ is a metaphor.
That does not match my experience. I can imagine things, but the details are limited to properties I intentionally decide the imagined object should have.
It's more like it's on a different plane: you can see it, but it's from another source, like how I can hear things in my head without it affecting my sight. If I imagine a candle I "see" a candle in front of a black background, with a flickering flame and a bit of wax dripping down the side. Like how you can have a song in your head but still listen to people.
I wouldn't say "hear", but I do have an inner monologue. When I read, I have an experience of the words in my mind. But similarly, when I look at the world, I have an experience of what I'm looking at, while I'm looking.
The difference comes when I close my eyes vs. block my ears. When I close my eyes, I don't see images, I can't voluntarily make images appear. But with my eyes and ears blocked, I can still think words - my inner monologue - which I experience in much the same way as I do when I'm reading. I can't conjure other sounds though, which is why I don't really consider that equivalent to "hearing" - it's not sound, it's the concept of words. I don't have any analogue of that for images.
Ordinary aphantasia doesn't imply anything about lack of inner monologue. Some people apparently do lack an inner monologue, and if they're also aphantasic, that's been described by some authors as "deep aphantasia". But there's no evidence that the two conditions are related, except in a kind of conceptual sense.
It's like hearing a song in your head: you can listen to it and maybe keep time roughly, but if someone asks you what instruments there are you might not be able to name all of them, or might not remember the drums or the bassline. It's all much more vague. If you asked me to remember my childhood home I can visualise 'all of it' in my head, but maybe not what type of bricks it had, or where all of the windows were.
This actually highlights to me what may be different about mental images for other people. Because I can much more clearly hear music in my head than I can see images in my head. So if it's much more vague for others, that must be kind of what images are like for me.
Not quite. I have had a lot of musical training and have a very good musical memory. I can write down songs from my head or hear a song and write it down later, depending on how complicated it is, usually with only 1-2 listens, or play it back, etc. I can visualize things in my head but it is a lot more abstract, or rather, harder to explain.
I think the person you're replying to didn't describe it exactly. It's not really about how good your memory is, I think. It's that no matter what, "replaying" the song in your head isn't going to bring about the same reaction as actually physically hearing music. It's like a simulation, a higher-order perception, thinking of yourself hearing it rather than willing yourself to really hear it in the same way as usual.
I worked in a medium data center and there was no noise outside, no noise from the office in the same building, no noise outside the airlock, very loud inside.
Maybe huge datacenters are very different somehow?
They have (in just this one example) 35 turbines at 16MW each - that's over half a gigawatt of power. Having the kind of battery storage that could provide this amount of power for more than a few minutes is... well, not impossible, but extremely expensive, especially for something that will just sit there unused (hopefully). Gas generators are comparatively very cheap, easily available, and as long as fuel is being fed into the system they can operate for days on end.
And you'd need an insane number of solar panels to actually recharge those batteries in any kind of reasonable time, so you'd expose yourself to massive risk if you had two power-outage events within, say, 12 hours. So you'd probably build all those batteries and solar panels and still need to have emergency generators ready to go anyway.
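The scale here is easy to sanity-check with back-of-envelope arithmetic. A minimal sketch, where the 12-hour outage window and the ~3,000 MWh figure for a very large grid battery installation are illustrative assumptions, not vendor specs:

```python
# Rough battery sizing for the facility described above:
# 35 turbines x 16 MW each.
turbines = 35
mw_per_turbine = 16
load_mw = turbines * mw_per_turbine      # 560 MW of continuous load

hours_of_backup = 12                     # assumed outage window
energy_mwh = load_mw * hours_of_backup   # energy needed to ride it out

print(f"Load: {load_mw} MW")             # 560 MW
print(f"Storage needed: {energy_mwh} MWh")  # 6720 MWh

# For scale: one of the largest grid battery sites today is on the
# order of ~3,000 MWh, so bridging half a day here would take more
# than two of the biggest installations ever built, sitting idle.
```

The point of the arithmetic is that backup *energy* (MWh), not peak *power* (MW), is what makes batteries uneconomical here, which is why diesel or gas generators with refuelable runtime remain the default.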
They don’t have room for another building the size of the data center to store batteries in and thousands of acres of land for solar panels (wild ass guesses on battery building footprint and solar field footprint)
Critical loads require generators, batteries don’t cut it. Data centers want the most reliable backup power they can with the longest runtime. Battery storage density is not high enough to back up a 500MW+ data center for any length of time without a comical amount of batteries.
When the NEC allows critical, equipment, and life safety branch at hospitals to be backed up with batteries and solar panels, battery storage will be at a point where battery backup of data centers is feasible. Right now it isn’t.
Fair point, but I never attempted to integrate Google search into my processes or workflow (shows what I know about future predictions), because while it was useful and did provide access to information, it was obviously limited in the sense that it could only take a mule like me to the water.
I don't want to delve into specifics, because it is a public forum. But the difference between learning Google syntax and LLM handling (which I suppose would include prompt engineering) should not be overlooked.