You're wrong. The mistake could have been unfixable. That happens quite frequently (see the many claimed proofs of major results by professional mathematicians that were later retracted).
The thought police have already arrived; see the Columbia grant cancellations and Mahmoud Khalil [1].
[1] "Khalil is a “threat to the foreign policy and national security interests of the United States,” said the official, noting that this calculation was the driving force behind the arrest. “The allegation here is not that he was breaking the law,” said the official." https://www.thefp.com/p/the-ice-detention-of-a-columbia-stud...
It's nice to live in a world where actions have consequences. When the media coverage got too much, Marc Tessier-Lavigne finally had to resign as president of Stanford, so he could focus on his job as a Stanford professor.
I can't tell whether your post is a joke. Yes, Tessier-Lavigne was forced to resign. But Stanford let him stay on as a professor. That was terrible: they should have kicked him out of the university.
I'm no expert, but I suspect removing someone from a tenured professorship is a longer process than removing them as president. We don't know that it won't eventually happen.
There are betrayals so severe that grindingly slow due process is itself an additional betrayal. I'm not arguing for a kangaroo court, but tenure should not be a defense for blatant cheating.
Interestingly, the asymptotically fastest known algorithm for minimum weight bipartite matching [A] uses an interior point method, which means it's also doing Riemannian optimization in some sense.
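For concreteness (and this is just a sketch, not the interior point algorithm from [A]): the solvers people actually reach for today are still combinatorial, e.g. SciPy's Hungarian-style linear_sum_assignment on a made-up cost matrix:

    # Minimal sketch: min-weight bipartite matching with SciPy's
    # linear_sum_assignment (a Hungarian-style combinatorial solver),
    # not the interior-point method mentioned above. Cost data is made up.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # cost[i, j] = weight of matching left node i to right node j
    cost = np.array([[4.0, 1.0, 3.0],
                     [2.0, 0.0, 5.0],
                     [3.0, 2.0, 2.0]])

    rows, cols = linear_sum_assignment(cost)   # minimizes total weight
    print(list(zip(rows, cols)), cost[rows, cols].sum())

The asymptotically fast interior-point approach is, as far as I know, not what any mainstream library implements; the above is just to pin down what problem we're talking about.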
>>>
Jonathan Friedman, Sy Syms director of PEN America’s U.S. Free Expression programs, said:
“The irony cannot be lost here: government officials have used their positions to muscle out a scholar of authoritarianism from a prestigious lecture,"
<<<
That doesn't really change the fact that it's exhausting (and worse, "commercially off-putting") to be reminded that we're careening toward the worst futures literally imagined. I stayed away from Soylent and I'll probably stay away from this, but thanks for the heads up. *rimshot*
As big PKD fans, we definitely let that fly over our heads a bit. We can def understand that view, and we understand why it's commercially exhausting, especially because we agree that we are heading toward some of the worst futures possible; so did PKD. We definitely build with this in mind!
But the starting point of neural networks in the ML/AI sense is cybernetics + Rosenblatt's perceptron, research done by mathematicians (who became early computer scientists).
That's why I wrote that it was unexpected. I'm not taking a position on whether this was deserved or undeserved, but this was clearly in the realm of physics and inspired by it.
Accepting wrong arguments in support of positions you hold is not a good way to live your life. It leads to constipation.
These problems are literally already solved? Of course the IMO problem designers make sure the problems have solutions before they use them. That's very different from math research, where it's not known in advance what the answer is, or even whether there is a good answer.
I'm saying they weren't solved until the problem composer (created and) solved them. They're not, in general, problems for which solutions have been lying around. So "these are problems that are already solved" isn't introducing anything interesting or useful into the discussion. The post I was replying to was trying to draw a contrast with chess moves, presumably on the grounds that (after the opening) each position in a chess game is novel, but IMO problems are equally novel.
It's true that IMO problems are vetted as being solvable, but that still doesn't really shed any light on how the difficulty of an IMO problem compares to the difficulty of chess play.
Interestingly, the same guy also works on making 'theory-only' algorithms work well in practice [1]. But it seems like that takes another 20 years: [1] builds on a theory breakthrough from 2004 [2], yet these algorithms are only starting to work in practice in 2024, IIUC. I guess that means there's hope for practical min-cost flow algorithms in 2044.
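In the meantime, "practical" min-cost flow still means combinatorial solvers like network simplex. A minimal sketch with NetworkX on made-up data (not one of the newer theory-driven algorithms), just to show what people actually run today:

    # Minimal sketch: min-cost flow via NetworkX's network-simplex-based
    # solver, i.e. the current practical approach. Graph data is made up.
    import networkx as nx

    G = nx.DiGraph()
    G.add_node("s", demand=-4)   # supplies 4 units
    G.add_node("t", demand=4)    # demands 4 units
    G.add_edge("s", "a", capacity=3, weight=1)
    G.add_edge("s", "b", capacity=2, weight=4)
    G.add_edge("a", "t", capacity=3, weight=2)
    G.add_edge("b", "t", capacity=2, weight=1)

    flow = nx.min_cost_flow(G)              # dict of dicts: flow[u][v]
    print(flow, nx.cost_of_flow(G, flow))   # total cost of the optimal flow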