I do see how it does, in a way: something the designer thought was an "invalid state" turns out to be a valid and possible state in the real world. In UI/UX terms, it's the uncomfortably long pause before something happens and the screen renders (lack of feedback, the feeling that the system hangs). Or the content flicker when a window is resized or dragged. All because somebody thought "oh, this clearly is an invalid state and can be ignored".
The real world and user experience requirements have a way of intruding on these underspecified models of how the world "should" be.
That’s still a poorly designed system. For UI there should be a ‘view model’ that augments your model; that view model should be able to represent every state your UI can be in, including any ‘intermediate’ states. If you don’t do this with a concrete and well-constrained model then you’re still doing it, just with arbitrary UI logic and other ad-hoc state that is much harder to understand and manage.
Ultimately you need to make your own pragmatic decisions about where you think that state should be and how it should be managed. But the ad-hoc approach is more open to inconsistencies and therefore bugs.
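As a contrived sketch of what I mean (TypeScript; all names made up, not anyone's actual API): the view model is a discriminated union, so every state the UI can reach, including the 'intermediate' ones, is explicit and the compiler forces you to handle it.

    // Hypothetical view model: one variant per state the UI can be in.
    type ViewModel<T> =
      | { kind: "loading" }                    // the long pause: show feedback
      | { kind: "resizing"; lastGood: T }      // keep stale content, no flicker
      | { kind: "loaded"; data: T }
      | { kind: "error"; message: string };

    function render(vm: ViewModel<string[]>): string {
      switch (vm.kind) {
        case "loading":  return "spinner";               // explicit feedback
        case "resizing": return vm.lastGood.join(", ");  // show last good frame
        case "loaded":   return vm.data.join(", ");
        case "error":    return vm.message;
        // no default: if a new state is added, this stops compiling
      }
    }

The ad-hoc alternative is usually a pile of booleans (isLoading, hasError, ...) whose invalid combinations nobody rules out, which is exactly where the inconsistencies come from.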
It sounds like some dynamic gain adjustment kicks in every time he starts talking, and then after ~1 second it settles. I don't think it's a "missing hardware" issue; just turning down the gain would probably be enough, or tuning the software dynamics if he's using any. Could also be that the podcaster tried doing some normalization across the entire podcast while mastering and fucked it up that way.
It's cute when somebody very smart pretends to be "grug brain", but it loses its appeal when somebody pretty obviously not smart at all is exposed for what they are.
I think the attention is in proportion; people are reacting just like a body does to inflammation. If a person of _this_ calibre can be fired, and not even fired but disposed of like a used-up tissue, it is a sign of the times.
I find the comments Copilot proposes are better than the average comment quality for the code base I routinely work on: maybe you are so great you don't need any help ever, but that's not true for the average software dev.
> I find the comments Copilot proposes are better than the average comment quality for the code base I routinely work on
Maybe the average is just that bad? The completions I get for comments just document what the code is doing, which is not something that I ever put into comments. Mine are always:
a) A high-level list of steps, input expectations, and output expectations, prefixed to the forthcoming function/block/scope definition
Or
b) A note explaining why the code does what it does.
A comment repeating the code but in English is useless.
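To make the distinction concrete, a sketch (TypeScript; the functions and values are invented for illustration, not from any real code base):

    type User = { name: string; isActive: boolean };

    // Useless (Copilot-style): restates the code in English.
    // filter the users array down to the active users
    function activeUsers(users: User[]): User[] {
      return users.filter(u => u.isActive);
    }

    // Useful, style (a): contract before the function.
    // Parses a retry header into a delay in milliseconds.
    // Input:  raw header value in seconds; may be null or malformed.
    // Output: delay in ms, clamped to [0, 30000]; defaults to 1000.
    function retryDelayMs(header: string | null): number {
      const parsed = Number(header);
      // Useful, style (b): why, not what. Number("") is 0, which would
      // cause a hot retry loop, so treat empty/malformed as missing.
      if (header === null || header.trim() === "" || Number.isNaN(parsed)) {
        return 1000;
      }
      return Math.min(Math.max(parsed * 1000, 0), 30000);
    }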
It's not that my comments are infallible, but if I write something wrong/silly it'll be caught during code review. Similarly, if there's a comment missing before some arcane nonsense nobody will remember in 3 years, I'd expect a PR reviewer to tell the dev to add one.
Copilot just likes to puke out useless comments whenever I type "//", especially in autocomplete mode (I don't really use the chat mode).