Well, if one had enough time and resources, this would make for an interesting metric. Could it figure it out with a cut-off of 1900? If so, what about 1899? 1898? What context from the marginal year was key to the change in outcome?
FYI, there is a long-standing bug with ProMotion and switching Spaces: the animation's duration is somehow tied to the refresh rate. Switching to a static 60 Hz makes them faster (but what an annoying choice to have to make!)
While I sympathize with the feeling, it’s a stretch to say “obligated by law”. You pay taxes, which your legally-elected representatives decide how to spend. We elect them to speak and choose on our behalf. It isn’t a “loophole” when this runs afoul of an individual’s values. It is simply that we have a representative government that makes decisions by majority votes. I don’t agree with most defense spending, but I acknowledge that a majority of this country wants it. This is the purpose of compromise. If there had been a good-faith proposal to reform CPB [1], we could have made it better. The collateral damage from destroying the good parts (e.g., PBS) due to our failure to compromise should not be celebrated.
[1] Such a proposal isn’t hard to imagine. A key purpose of local stations is to give a platform to the voices of local people. Simply shifting funding from national programming to local programming (without changing the total) would have accomplished this “debiasing” and empowered the (tragically endangered) local news.
I’m not sure if you are intentionally trying to miss the point. The comment was claiming they are obligated by law to support media they don’t agree with. We are all equivalently obligated by law to not steal or commit other crimes. We pay taxes. They are part of the contract of our society. What our representatives decide to spend them on doesn’t change that.
"We are all [...] obligated by law to not steal or commit other crimes" is NOT equivalent to being "obligated by law to support media [one does not] agree with". Not even remotely. Negative obligations != positive obligations.
You can either pay taxes at gunpoint, or you can pay tribute/protection/insurance/ransom/bribes at gunpoint. I'm not sure there are (or have ever been) many places in the world where you don't owe some debt of obligation to a larger organization, be it a government, organized crime, or something else.
> While I sympathize with the feeling, it’s a stretch to say “obligated by law”. You pay taxes, which your legally-elected representatives decide how to spend.
Without limit? If Trump and the Congressional GOP force a bunch of tax-funded in-your-face right-wing propaganda that would be ok with you because "[y]ou pay taxes, which your legally-elected representatives decide how to spend"?
Well they're already doing this and much worse, so no need to appeal to something fake. There's plenty of actual awful things our elected officials are doing which you can point to.
But, the idea is, that we're not out here proposing that we stop paying taxes. We're not the ones committing a Jan 6th, are we?
The solution to Trump isn't assassinating him or whatever, it's legally impeaching him or not voting for him. That's how a democracy works.
This is a helpful way to think about it. Though, is the problem of having an incomplete definition the same as lacking a unique name? Maybe the solutions are different, and the author points out that made-up names don't cut it. Is it instead more of a catch-22 situation?
The other dimension implied here is the timing of when variety is introduced. If all the options had been known when the system was originally designed, the issue may have been clearer. So the other problem is that, when the need to distinguish between cases first came up, the solution was designed for only 2 cases. And if there were only ever those two cases, the solution presented would probably be the most efficient.
The solution to this incremental scope problem is a lot less clear to me and something I struggle with personally in scientific computing projects. When variety #2 pops up, how does one decide whether to restructure everything in case of a variety #3, or take the quicker (potentially more efficient) path and avoid unnecessary complexity? Assuming I've done as much as I can to anticipate and constrain my needed functionality. It somehow feels like I end up losing either way.
> The solution to this incremental scope problem is a lot less clear to me and something I struggle with personally in scientific computing projects. When variety #2 pops up, how does one decide whether to restructure everything in case of a variety #3, or take the quicker (potentially more efficient) path and avoid unnecessary complexity?
Short answer: the rule of three (but really, it depends).
I get what you mean, but this is more about implementation choices than identification of variety. Varieties can be grouped in different ways that make sense for implementation challenges or development timelines, and later refactored to be more effective. The identification of the varieties themselves is distinct from the choice of grouping/implementation. Things that have little effect on behavior (e.g., color only affects one aspect of rendering) can be a property of a thing. More complex/varied characteristics may be better served by separating into distinct things.
Sometimes a 'bag of properties' can serve for all things (similar to a JS Object). That's how I believe reddit's data model evolved at one point. I think HN isn't much different, as it seems the IDs of posts, comments, etc. share one numbering.
Think about how physicists talk about particles: they can pick any name, or rename them, and everyone still knows what's what, because the names are defined by the set of properties, not the other way around. If they find a distinct set of properties they want to talk about, then they know they need another name.
I ran into this same scenario talking about names for different states of inventory. Every person/company I talked to had different names for, or meanings of, special 'reserved'/set-aside states. I kept telling everyone to define what they mean by that name and list its expected behaviors. Then it doesn't matter what anyone calls anything; I can match them up by properties/behaviors, or find out they're distinct things.
It became clear that you can't trust names when I saw the formula: on-hand = available + unavailable + committed. Shouldn't available + unavailable already form the complete total?
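The match-by-behavior idea can be sketched in a few lines. This is a toy illustration with made-up state names and behavior flags, not any real inventory system's vocabulary:

```python
# Toy illustration: match inventory states across vendors by their
# behaviors, not their names. All names and flags are invented.
vendor_a = {"reserved":  {"sellable": False, "on_hand": True},
            "available": {"sellable": True,  "on_hand": True}}
vendor_b = {"set_aside": {"sellable": False, "on_hand": True},
            "open":      {"sellable": True,  "on_hand": True}}

# Two differently named states are "the same thing" iff behaviors match.
matches = {(a, b)
           for a, pa in vendor_a.items()
           for b, pb in vendor_b.items()
           if pa == pb}
assert ("reserved", "set_aside") in matches
assert ("available", "open") in matches

# The suspicious formula: if "committed" is a separate term, then
# available + unavailable cannot be the complete breakdown of on-hand,
# despite what the names suggest.
available, unavailable, committed = 40, 10, 5
on_hand = available + unavailable + committed
assert on_hand != available + unavailable  # "committed" hides outside the pair
```

Once states are keyed by behavior tuples instead of names, the "same name, different meaning" and "different name, same meaning" cases both fall out mechanically.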
As someone in grad school for related topics, this stuff will undoubtedly surpass physical weather models (climate change is a different story). As I chart my own research direction, I wonder what kinds of problems meteorology will still be working on in a few decades? AI models like this will render many physical models obsolete. And with the leaps made recently by cube satellites and other remote sensing tech, direct observations may soon no longer be sparse. So what does that leave? Understanding the physical principles? Yes, and doing so will augment these advances. But what else? And what kind of applied problems could the physical principles solve? At some point, AI weather forecasts may cross the threshold of being "good enough" for 99% of use cases. I don't doubt that there is much more important science to be done (I recall someone famously claiming science was "done" in the 1800s), but admittedly it takes some creative thought to imagine. The field will have to evolve. I don't know if it is possible to know how it will evolve before it does. Just something I wonder about.
Also, I certainly don't mean to detract from this achievement. This is great work and it is truly awesome to see these kinds of leaps. There is plenty of vanity in science and the urge for self-preservation, but such advances in weather forecasting will save so many lives and do a lot of good in the world. I wholeheartedly cheer on the scientists behind this work and am excited to see what is around the corner.
Pre-industrial CO2 concentration is synonymous with the "natural concentration", at least in the recent past. We made a very large change that has thrown Earth's systems out of equilibrium. Returning to pre-industrial CO2 levels would undo that change and bring things back towards equilibrium.
“Natural concentration” is not the right way to look at it, because there are higher concentrations that predate the industrial revolution and humans. The all-time high (that we know of) is from about 350,000 years ago. That was by any measure natural and pre-industrial.
"The only known natural concentration empirically compatible with long-term human civilisation".
"The planet did exist/will exist just fine without us" is a pretty worn truism. You might as well wryly note that water isn't natural because everything was hydrogen once.
> empirically compatible with long-term human civilisation
Empirically observed, atmospheric CO2 went from ~320ppm to ~410ppm from 1970 to 2020[0], during which period the human population more than doubled from ~3.7B to ~7.8B and yet deaths caused by climate dropped threefold[1] (not 1/3 the rate; 1/3 in absolute number).
On the scale of human civilisation, 50 years is hardly "long term".
By that ultra-short-term reckoning, polonium is not only harmless (you still feel fine 10 minutes later) but positively wholesome, since you feel refreshed by the delicious green tea you just drank in that 5-star hotel bar.
They are dynamical systems; there is no equilibrium. See also: climate charts for the last few ice age cycles.[1] In the bigger picture we do want to modify Earth's climate, and we definitely do not want to end the current interglacial period (to be fair, we've already seen to that). Returning to a "natural" pre-human climate cycle on the 10,000-year scale is not desirable.
Dynamical systems can have equilibrium points; e.g., a pendulum is stable when hanging straight down, while the inverted position is an unstable equilibrium. If you deviate too far from an equilibrium point, the system may find another equilibrium that is less desirable for the user. I'm not an expert in climate change, but these things certainly happen for engines, robots, and other systems.
I like to think of it as scrappy terraforming because we aren't even sure we could handle any of the naturally occurring variation.
Scrappy because, well, the planet doesn't quite become uninhabitable, and we're starting from the end-game. Science fiction also had me expecting some very cool terraforming infrastructure, not psy-ops to get the serfs to eat bugs.
I'm not totally sure how systems work for drilling this deep, but typically ice core setups attach the coring apparatus to the surface via a cable that is spooled by a winch. The cable itself ends up being the heaviest part of the system.
As the hole gets deeper, the time to bring up core sections and send the drill back down becomes significant. Combine that with the previously mentioned short field season, and drilling more than a few hundred meters becomes very difficult logistically as well, especially in such a remote setting.
(You need to pull the drill out periodically to let the dust out, and the length of that pull increases with depth. The total time is roughly O(K1 * N^2 + K2 * N), where K1 and K2 are the pull-out and drilling-in constants (both in seconds per mm); for short holes, most of the time will be drilling, not removing dust.)
It's not O(N^2), is it? It could be a continuous line of ice being pushed up. Depending on the weight-bearing ability of the lift and the digging capacity, you would figure out a fixed distance after which you would place buckets to carry up the ice.
It's an interesting interview question at the very least. (More complications arise as you get deeper into the ice.)
You can't have a continuous line of ice coming up, unless you're digging for slush. Each intact X-meter core must be hauled up on its own, and then the drill has to go back down. The deeper you are, the longer it takes to haul up one core and send the drill back down. So, retrieving the cores is clearly O(N^2).
Drilling the core itself is O(N), but as you go deeper the core retrieval dominates. Not to mention everything getting more complex the deeper you go.
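The argument above can be put in a small back-of-envelope model: drilling grows linearly with depth, but hauling each fixed-length core up the full current depth (and lowering the drill back) sums to a quadratic term. The constants here are illustrative, not real field numbers:

```python
# Back-of-envelope model of total ice-core drilling time, assuming the
# drill recovers fixed-length cores and makes a full-depth round trip
# for each one. All rates/lengths below are made-up illustrative values.
def total_time(depth_m: float, core_len_m: float = 3.0,
               drill_rate_s_per_m: float = 60.0,
               haul_rate_s_per_m: float = 2.0) -> float:
    n_cores = int(depth_m // core_len_m)
    drilling = depth_m * drill_rate_s_per_m        # O(N): linear in depth
    # Core i is hauled up from depth ~i*core_len and the drill lowered
    # back down: summing the round trips over all cores gives O(N^2).
    trips = sum(2 * i * core_len_m * haul_rate_s_per_m
                for i in range(1, n_cores + 1))
    return drilling + trips

# Once round trips dominate, doubling the depth far more than doubles
# the total time.
assert total_time(2000) / total_time(1000) > 2.5
```

With these (invented) constants, a 1,000 m hole spends most of its time on round trips rather than cutting, which matches the point that retrieval dominates at depth while drilling itself stays linear.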