> there's strong reasons to believe that energy is required to represent all information in the physical universe
You simply do not need to believe this. The universe doesn't need to be "stored" somewhere.
> Quantum Computing is firmly based on pretending that this isn't how it is, that somehow you can squeeze 2^n bits of information out of a system with 'n' parts to it.
Quantum computing does not believe this. It is a theorem (the Holevo bound) that you can only get n classical bits out of n qubits, and quantum computing speedups do not rely on anything to the contrary.
Noise is hard, but error correction is a mathematically sound response.
> The universe doesn't need to be "stored" somewhere.
Information that can have any effect on anything physical must have an energy associated with it, because that's basically the definition of energy: the property of physical systems that can cause change over time. No energy, no change in state. These are practically axioms.
Information that has zero energy can have only zero effect on observables in (our) universe.
> It is a theorem that you can only get n bits out of n qubits, and quantum computing speedups do not rely otherwise.
I'm pretty sure you've misunderstood something somewhere, because the 2^n states represented by n qubits are mentioned in practically all QC materials.
There are two points. You got the first one, which is controllability: the components are controllable and programmable. But second, it's important to appreciate the difference between simulating 10^23 classical billiard balls with a computer (very hard: C * 10^23 work for some constant C) and simulating 10^23 quantum mechanical atoms (C * d^(10^23) work for some C and some d). Those numbers are very different.
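To make that gap concrete, here's a back-of-envelope sketch (the particle and qubit counts are illustrative, not tied to any real simulator):

    # Rough state sizes; all numbers are illustrative assumptions.

    def classical_state_size(n_particles, dims_per_particle=6):
        # position + momentum per particle: linear in n
        return n_particles * dims_per_particle

    def quantum_state_size(n_qubits):
        # one complex amplitude per basis state: exponential in n
        return 2 ** n_qubits

    for n in (10, 50, 300):
        print(n, classical_state_size(n), quantum_state_size(n))

    # At n = 300 the quantum state already has 2^300 ~ 10^90 amplitudes,
    # more than the ~10^80 atoms in the observable universe.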
No, not quite, it's about the error-per-gate. RCS has very loose requirements on the error per gate, since all they need is enough gates to build up some arbitrary entangled state (a hundred or so gates on this system). Other algorithms have very tight requirements on the error-per-gate, since they must perform a very long series of operations without error.
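As a rough illustration (assuming independent, uniform gate errors, which real hardware only approximates), the chance a circuit runs with no error at all is (1 - p)^G:

    # Naive independent-error model; p = 0.002 per gate is an
    # illustrative ballpark, not any particular machine's spec.

    def success_probability(p_gate, n_gates):
        return (1 - p_gate) ** n_gates

    print(success_probability(0.002, 100))        # ~0.82: plenty for RCS
    print(success_probability(0.002, 1_000_000))  # ~1e-870: hopeless without QEC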
It's different from your hourglass in that the computer is controllable. Each sampled random circuit requires choosing all of the operations that the computer will perform. You have no control over what operation the hourglass does.
It won't be factoring large numbers yet because that computation requires the ability to perform millions of operations on thousands of qubits without any errors. You need very good error correction to do that, but luckily that's the other thing they demonstrated. However, when they do error correction, they are basically combining their system down into one effective qubit. They'll need to scale by several orders of magnitude to have hundreds of error-corrected qubits to do factoring.
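Here's a hedged sketch of why the overhead is so large, using the common surface-code heuristic p_logical ~ A * (p/p_th)^((d+1)/2); the constants A and p_th and the error rates below are illustrative assumptions, not this device's numbers:

    # How big a code does factoring-scale work need?

    def distance_needed(p_phys, p_logical_target, p_th=0.01, A=0.1):
        d = 3  # smallest useful odd code distance
        while A * (p_phys / p_th) ** ((d + 1) / 2) > p_logical_target:
            d += 2
        return d

    ops = 1e9            # rough count of logical operations for factoring
    target = 0.1 / ops   # keep total failure probability near 10%
    d = distance_needed(p_phys=1e-3, p_logical_target=target)
    print(d, 2 * d ** 2) # d = 17, ~580 physical qubits per logical qubit

Multiply that per-qubit overhead by the thousands of logical qubits factoring needs and you land in the millions of physical qubits.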
The ones dropped into the macerator are the lucky ones. The ones who live suffer for their short lives, until their economic utility curve crosses a threshold value and they are also slaughtered.
I wouldn't say lucky - I'd rather go through the rest of life suffering than die at this moment, as there is a joy to be found in that itself - but I understand your sentiments. Very few would choose not being born over life, as you can't predict how life will be, even for these chickens. The chickens that live in my neighbour's garden had a horrible 2 years of life in cages, followed by a great 10 or so.
Reducing pain and suffering is better than doing nothing at all, and it's childish to assume that we can do anything more than regulate our own food habits, and perhaps those of a couple of people around us. I'm vegan; I'm not deluded enough to think that my eating habits are anywhere near mainstream. There doesn't seem to be any end in sight for animal agriculture, whatever my personal feelings may be about that, so the next best thing one can do is reduce suffering where possible, and in-ovo sexing in hatcheries works a small way towards that.
I understand that factory farming is absolutely ugly and there is a lot of low-hanging fruit in how to improve it. But I never understood the extremist philosophy that no one should eat chickens or raise them in their backyard for eggs. Chickens simply don’t exist in the wild and are very far removed from the jungle fowl from which they came so long ago. If we stopped eating and raising them, they would go extinct. Is it better for a species to live in often poor conditions or to not exist entirely?
> Is it better for a species to live in often poor conditions or to not exist entirely?
if those are the two choices, then i'd prefer the latter.
but isn't there a third choice? to raise chickens in good conditions?
i suppose doing so would reduce the amount of chicken we can consume, and also raise the price, but i think that is preferable to letting them suffer.
While I get your overall point, I'd just like to point out the island of Kauai (and the rest of the islands of Hawaii). They have a massive wild chicken population.
> Very few would choose not being born over life ...
Err, Moksha, or something like it is pretty much the goal of a number of religions practised by a significant chunk of the planet.
I agree with the premise that "nobody deserves to be a millionaire" (although maybe with inflation we should say deca-millionaire), but a hard cap seems like a crude tool with many potential downsides. A UBI would be a better tool for raising living standards, and a progressive wealth tax would be a better guard against hereditary fortunes, without removing the incentives for high earners to continue working.
You don't need incentives for high earners to continue working.
1) We can afford for them to stop working. Really, it's okay. People can just chill. We're more than efficient enough for it.
2) People don't need monetary incentives to work. Most of us get bored just chilling. We want to feel useful. We need community. We need something to engage our brains. The vast majority of people, left to their own devices, will find some sort of work to do.
Yes. They do now. Not because of boredom but for ideological reasons. The pay and working conditions at the end of the road certainly aren't worth the debt and stress required to get there. Not everyone's a specialist surgeon catering to the elite.
I think most people go for the money. If you were to walk into any undergrad bio 101 class and announce: "sorry guys, but you can only make 80k a year," 95% of the students would leave immediately.
I’m not a doctor, but work with lots. I’m extrapolating what I see and that’s always dangerous, but there isn’t a single colleague whose primary motivation is money. Maybe someone did come in and state an earnings cap, and I’ve been left with the 5%?
It's possible that once people start, they gain satisfaction from the job by helping people. But that doesn't mean they began that profession because of that. Also consider that many doctors were pushed by their parents into becoming doctors. I don't think students spend their childhood chasing "A"s and extracurriculars in order to help people.
I don’t think children chase high grades for money.
I’ve been hunting around but can’t find any decent journal article with new students polled on motivations - it would be interesting to see.
The things I’m reading are broadly around a want to help people, often after a health scare of their own or of a close family member/friend. I’d prefer something a bit more objective than the puff pieces I’m finding.
Habitable? There is plenty of water for it to be habitable; residential use is a drop in the bucket compared with agricultural use. Without a technological solution, we'll have to scale back agriculture, but that's not a regional problem, since the market for food is an international one.
Also, we don't have to get all of the water from desalination. Southern California can use desalination, which makes sense there given the proximity to the ocean, short distances, and abundant solar power. That frees up Colorado River water that Nevada and Arizona can use.
Desalination will probably never be worth it for agriculture, but agriculture should survive on the water it can access after people's needs are met.
Ok, I was incorrect there. I was thinking more about industrial application potential. There are some cuprates which are superconducting above -196C, there are some materials which are superconducting there under enormous pressure, and so on. They are not quite useful for us today, and research on them is slow. As soon as someone creates an industrially applicable superconductor at liquid-nitrogen temperatures, it will be a world revolution: in medicine first of all, then in research, and maybe electronics.
But my point was not about the old inventions, but about this new PCPOSOP material. The researchers claim an insane jump to a -23C (maybe) superconductor, which is highly suspicious on its own. It's as if some researchers claimed to have discovered a material with a melting point of +10000C at 1 atmosphere. Sure, maybe. But the fact that materials with melting points between +4000C and +10000C are missing is super suspicious.
Don't get me wrong, their (QuEra's) demonstration is incredibly impressive, but it seems you've been misled by inconsistent nomenclature around the phrase "logical qubit". They've demonstrated a 5-to-1 encoding scheme, yes, but that scheme is nowhere close to being sufficiently redundant to allow for deep quantum circuits. When people talk about needing 1000 physical qubits, they mean to make a logical qubit with a sufficiently low error rate to run interesting algorithms. In the QuEra device, when they say they "made 48 logical qubits out of 240 physical qubits", they simply mean that they used an encoding, and made no claim about the error rate on those qubits being low enough. There is no hope (that I know of) for a 5-to-1 encoding scheme to make error rates low enough. The QuEra device would just as well need many more physical qubits per logical qubit.
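Plugging a 5-qubit (distance-3) code into the same suppression heuristic as the factoring sketch above shows why; the constants are illustrative assumptions, as before:

    # p_logical ~ A * (p/p_th)**((d+1)/2); a 5-qubit code has distance 3.
    A, p_th, p_phys = 0.1, 0.01, 1e-3
    p_d3 = A * (p_phys / p_th) ** 2
    print(p_d3)  # 1e-3: about the same as the physical rate, nowhere
                 # near the ~1e-10 per-operation budget deep circuits need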
I want to point out that the experiment was at Harvard in the Lukin group. There is a proposal for constant-rate encodings using large quantum low-density parity check codes via atom rearrangement, which could in principle achieve such a high encoding rate. That said, it's certainly not mainstream yet. https://arxiv.org/abs/2308.08648
Yes, good point (apologies to the Lukin group). That's an interesting proposal, but it seems from a cursory read that you would still need very many physical qubits to approach that asymptotic rate, and also you would be forced to take a very large slowdown from serializing all of your logical operations through a smaller set of conventionally encoded logical qubits. That said, I'm not current on state-of-the-art LDPC QEC proposals, so I'll moderate my claim a bit to "the first actually useful logical qubits will almost certainly have an encoding rate lower than 1/5".
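For concreteness, comparing encoding rates (logical per physical) under the same illustrative assumptions as the sketches above:

    # Encoding rate = k logical qubits / n physical qubits.
    print(48 / 240)         # the demo's rate: 0.2 (1 in 5)
    d = 17                  # distance from the earlier factoring sketch
    print(1 / (2 * d ** 2)) # a surface-code logical qubit: ~0.0017 (1 in ~580)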