For many publications you could be criticizing, I'd agree with you, but Quanta usually reaches a higher standard that I feel they deserve credit for. Here's the Quanta article on the same topic [1]. It goes into much more detail, shows a picture of the perfect sofa, and links to the actual research paper. It's aimed at a level above "finished high school", and I appreciate that; it gives me a chance to learn from the solution to a problem and encourages me to think about it independently.
I agree with you that Quanta doesn't always "allow specialists to understand exactly what's being claimed", which is a problem; but linking to the research papers greatly mitigates that sin.
And here's how they clearly explain the proof strategy.
> First, he showed that for any sofa in his space, the output of Q would be at least as big as the sofa’s area. It essentially measured the area of a shape that contained the sofa. That meant that if Baek could find the maximum value of Q, it would give him a good upper bound on the area of the optimal sofa.
> This alone wasn’t enough to resolve the moving sofa problem. But Baek also defined Q so that for Gerver’s sofa, the function didn’t just give an upper bound. Its output was exactly equal to the sofa’s area. Baek therefore just had to prove that Q hit its maximum value when its input was Gerver’s sofa. That would mean that Gerver’s sofa had the biggest area of all the potential sofas, making it the solution to the moving sofa problem.
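The logic in those two quoted paragraphs can be written compactly (my notation, not Baek's): let $S$ range over sofas in Baek's space, let $A(S)$ be the area of $S$, and let $G$ be Gerver's sofa. The two properties are $Q(S) \ge A(S)$ for every $S$, and $Q(G) = A(G)$. Baek then proves $Q$ is maximized at $G$, so

$$
A(S) \;\le\; Q(S) \;\le\; Q(G) \;=\; A(G) \quad \text{for every sofa } S,
$$

which is exactly the statement that no sofa beats Gerver's area.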
Setting A4 to zero (or anything below 80) doesn't work. This doesn't improve if the constants in the A4 formula are moved a short distance away from 100.
In case you can't tell from that last example, I think being able to fix the intended values of multiple outputs simultaneously would be interesting. If you were to give more details about the solver's internals, I'd be keen to hear them.
I believe that the important part of a brain is the computation it's carrying out. I would call this computation thinking and say it's responsible for consciousness. I think we agree that this computation would be identical if it were simulated on a computer or paper.
If you pushed me on what exactly it means for a computation to physically happen and create consciousness, I would have to move to statements I'd call dubious conjectures rather than beliefs - your points in other threads about relying on interpretation have made me think more carefully about this.
Thanks for stating your views clearly. I have some questions to try to understand them better:
Would you say you're sure that you aren't in a simulation while acknowledging that a simulated version of you would say the same?
What do you think happens to someone whose neurons get replaced by small computers one by one (if you're happy to assume for the sake of argument that such a thing is possible without changing the person's behavior)?
> Why would anyone pick the flexible/potentially-insecure option?
Because having a connection that's encrypted between a user and Cloudflare, then unencrypted between Cloudflare and your server is often better than unencrypted all the way. Sketchy ISPs could insert/replace ads, and anyone hosting a free wifi hotspot could learn things your users wouldn't want them to know (e.g. their address if they order a delivery).
Setting up TLS properly on your server is harder than using Cloudflare (disclaimer: I have not used Cloudflare, though I have sorted out a certificate for an https server).
The problem is that users can't tell whether their connection is encrypted all the way to your server. Visiting an HTTPS URL might lead someone to assume that no one can eavesdrop on their connection by tapping a cross-ocean cable (TLS can deliver this property). Cloudflare breaks that assumption.
Cloudflare's marketing on this is deceptive: https://www.cloudflare.com/application-services/products/ssl... says "TLS ensures data passing between users and servers is encrypted". This is true, but the servers it's talking about are Cloudflare's, not the website owner's.
Going through to "compare plans", the description of "Universal SSL Certificate" says "If you do not currently use SSL, Cloudflare can provide you with SSL capabilities — no configuration required." This could mislead users and server operators into thinking that they are more secure than they actually are. You cannot get the full benefits of TLS without a private key on your web server.
Despite this, I would guess that Cloudflare's "encryption remover" improves security compared to a world where Cloudflare did not offer this. I might feel differently about this if I knew more about people who interact with traffic between Cloudflare's servers and the servers of Cloudflare's customers.
Let's Encrypt and ACME haven't always been available. Lots of companies also use appliances for the reverse proxy/ingress.
If they don't support ACME, it's actually quite a chore to do - at least it was the last time I had to, before ACME was a thing (which is admittedly over 10 years ago).
To me it reads like there was a gradual rollout of the faulty software responsible for generating the config files, but those files are generated on (approximately) one machine, then propagated across the whole network every 5 minutes.
> Bad data was only generated if the query ran on a part of the cluster which had been updated. As a result, every five minutes there was a chance of either a good or a bad set of configuration files being generated and rapidly propagated across the network.
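A toy simulation of that quoted failure mode (my own sketch with made-up numbers, not anything from the postmortem): during a gradual rollout, each 5-minute generation cycle runs the query on a random shard, and the output is bad exactly when that shard has the updated code.

```python
import random

def simulate(rollout_fraction, generations=12, seed=0):
    """Each generation, the config query lands on a random shard.
    If that shard runs the updated (faulty) code, a bad config set is
    generated and propagated network-wide; otherwise a good one is."""
    rng = random.Random(seed)
    results = []
    for _ in range(generations):
        shard_is_updated = rng.random() < rollout_fraction
        results.append("bad" if shard_is_updated else "good")
    return results

# Mid-rollout (half the cluster updated), the network flip-flops
# between good and bad config on a 5-minute cadence:
print(simulate(0.5))
```

This matches the quoted description: the failure isn't a steady degradation but a coin flip every cycle, which is why the outage looked intermittent until the rollout completed.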
There's a whole paragraph in the article which says basically the same as your point 3 ( "glass bouncing, instead of shattering, and ropes defying physics" is literally a quote from the article). I don't see how you can claim the article missed it.
From the article, it looks like the problem is partially caused by significant parts of the transmission network being temporarily shut down due to ongoing upgrades. These could probably have been started slightly sooner, but they are already underway, so I don't think your point is well supported.
Except that if those upgrades had been started 10 years earlier then there would have been lower demand (10 years less growth in demand). The reductions in capacity would have had a much lower effect on prices.
Fonts can be complicated. Using nonsense like [1] (specifically contextual alternates), you could have the glyph for the first letter of a word depend on the last letter. I don't think you could get that to work for all letters in an arbitrary length word, but making a font that works for all words shorter than say 20 characters should be doable.
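A toy model of the idea (plain Python, not actual OpenType code - a real font would express this as `calt` substitution lookups in its GSUB table): choose the glyph for a word's first letter based on its last letter.

```python
def render_word(word, alternates):
    """Pick the glyph for the first letter based on the word's last
    letter, mimicking what a contextual-alternates lookup could do.
    `alternates` maps (first_char, last_char) -> alternate glyph name."""
    if not word:
        return []
    first_glyph = alternates.get((word[0], word[-1]), word[0])
    return [first_glyph] + list(word[1:])

# Hypothetical alternate set: 't' gets a swash form in words ending in 'g'.
alternates = {("t", "g"): "t.swash"}
print(render_word("testing", alternates))
print(render_word("test", alternates))
```

The 20-character limit in my comment comes from the same place it would in a real font: each extra position of lookahead needs its own set of context rules, so the rule count grows with the maximum word length you support.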
Angular size is proportional to size/distance, so the calculation you're trying to do is correct; however, 50700 km is more than 1e7 meters so the angular sizes differ by about 3 orders of magnitude.
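To illustrate the scaling (the object sizes here are placeholders, not the actual numbers from the parent comment - only the 50,700 km distance is from it): under the small-angle approximation, angular size is diameter over distance, so a 1000x difference in distance for same-sized objects gives 3 orders of magnitude in angular size.

```python
def angular_size_rad(diameter_m, distance_m):
    """Small-angle approximation: angular size ~= diameter / distance."""
    return diameter_m / distance_m

# Same hypothetical 1000 km object at 50,700 km (= 5.07e7 m) versus
# 1000x farther away: angular sizes differ by ~3 orders of magnitude.
near = angular_size_rad(1.0e6, 5.07e7)
far = angular_size_rad(1.0e6, 5.07e10)
print(near / far)  # → 1000.0
```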
[1] https://www.quantamagazine.org/the-largest-sofa-you-can-move...