I've bought a ton of movies in the past. The vast majority I've sold second hand or thrown away, because I just didn't care to watch them again and I didn't feel like forever storing something I'd never use.
Same goes for a lot of other media. Some amount of it I'll want to keep but most is practically disposable to me. Even most videogames.
No, I do own some (actually it was more in the VHS days, so tapes) and I just found that I never really watched them again. So I stopped buying movies. I'm the same with books. Once I read it, I've read it. I would rarely read a novel twice. I know what's going to happen, so what's the point? Reference books are different, of course.
Some of us just consume media differently, I suppose. I'm a big fan of going back to re-read/re-watch a lot of my favorite media. Sometimes it's because a new volume/season/movie came out years later, so I'll take time to re-experience the original media to get ready for it. Never really had an issue experiencing something again and having it feel fresh, because it's been a few years.
I will admit that re-reading books has become less of a habit the older I get because it is time consuming to get through a longer series again.
I'm mostly the same, I don't watch movies twice. But there are exceptions. Some movies are just beautiful or I like how they make me feel, so I want to rewatch them. Groundhog Day is an example.
You're not really thinking this through enough. The exact same logic you used can be applied to music: once you've listened to the album once, you know how it will go, so what's the point of listening again? Presumably you do get something out of listening to music again (since you said you do listen to it more than once), so whatever that "something" is... you can infer that others get similar value out of rereading books/rewatching movies, even if you personally don't.
For myself, the answer is "because the story is still enjoyable even if I know how it will end". And often enough, on a second reading/viewing I will discover nuances in the work that I missed the first time. Some works are so well made that even having enjoyed them 10+ times, I can discover something new! So yes, the pleasure of experiencing the story for the first time can only be had once. But that is by no means the only pleasure to be had.
> The exact same logic you used can be applied to music: once you've listened to the album once, you know how it will go, so what's the point of listening again?
Most music doesn't have the kind of narrative and strong plot that stories like novels and movies do; that's a massive difference. And even when it does, it doesn't usually take half an hour or more for that story to unfold. That's a pretty big difference between these types of art.
How I interpret his comment about the distance:
The benefit of switching from C/C++ to Rust is higher than switching from C++ to Go (in similar use-cases) or from Java to Kotlin.
Another argument offered for Rust is that it's high-level enough that you can also use it for the web (see how many web frameworks it has). So I think that Rust's proponents see it as this universal language that could be good for everything.
> The benefit of switching from C/C++ to Rust is higher than switching from C++ to Go
Ten years ago the memory model was a compelling benefit, sure, but nowadays we have Fil-C, that C++ static analyzer posted here yesterday, etc. There is some remaining marginal benefit that C and C++ haven't quite caught up with yet, but is that significantly smaller and continually shrinking gap sufficient to explain things as they stand today?
You are right that the aforementioned assumption did not play out in the end. It turns out that C++ developers did, in fact, choose C++ because of C++ and would have never selected Python even if Python was the fastest language out there. Although, funnily enough, a "faster Python" ended up being appealing to Python developers so Go does ultimately have the same story, except around Python (and Ruby) instead of C++.
> Another argument offered for Rust is that it's high-level enough that you can also use it for the web
It was able to do that ten years ago just as well. That doesn't really explain things either.
I think that is because when you start learning Haskell, you are not typically told about state monads, `IORefs`, and the like that enable safe mutability.
It might be because monads involve a fair bit of advanced type machinery. `IORefs`, on the other hand, are straightforward, yet one typically doesn't come across them until a bit too late in their Haskell journey.
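For the curious, a minimal sketch of what mutability through an `IORef` looks like (only `Data.IORef` from base; the mutation stays inside `IO`, so the types still track where it can happen):

    import Data.IORef

    main :: IO ()
    main = do
      -- newIORef allocates a mutable cell with an initial value
      counter <- newIORef (0 :: Int)
      -- modifyIORef' applies a pure function to the stored value
      modifyIORef' counter (+ 1)
      modifyIORef' counter (+ 1)
      -- readIORef fetches the current contents
      n <- readIORef counter
      print n -- prints 2

Nothing exotic, which is rather the point.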
I would never expect a Western European country not to accept Visa and Mastercard. I say this as an Eastern European. But I do remember that in Germany (and Austria) paying by card is not that widely accepted.
I wasn't sure what "encipher" is in Romanian (it's not a common word): it's "a cifra". The infinitive in Romanian puts "a" in front of the verb, so it's very close to Spanish.
People see "LLMs" and "tons of tests" written in the same sentence and think that shows how models love writing pointless tests, rather than realizing that the tests are standard, human-written ones, there to show that the model wrote code that is validated by a currently trusted source.
It shows how the practice of always writing comments that humans are going to read with the right context is _very_ similar to how we need to interact with LLMs. And if we fail to communicate with humans, clearly we're going to fail with models.
> No one claims that good type systems prevent buggy software.
That's exactly what languages with advanced type systems claim. To be more precise, they claim to eliminate entire classes of bugs. So they reduce bugs; they don't eliminate them completely.
I hate this meme. Null indicates something. If you disallow null, that same state gets encoded in some other way, and if you don't properly check for that state you get the exact same class of bug. The desirable type-system feature here is the ability to statically verify that such a check has occurred every time a variable is accessed.
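A toy Haskell sketch of both halves of that (`lookupUser` is made up): the absent state doesn't go away, it just becomes `Nothing`, and an unchecked escape hatch like `fromJust` brings back the exact same crash:

    import Data.Maybe (fromJust)

    lookupUser :: Int -> Maybe String
    lookupUser 1 = Just "alice"
    lookupUser _ = Nothing -- the "absent" state, encoded as Nothing instead of null

    main :: IO ()
    main = do
      -- fromJust skips the check; it happens to work here...
      putStrLn (fromJust (lookupUser 1))
      -- ...but fromJust (lookupUser 2) would crash at runtime, which is
      -- the same class of bug as a null dereference.
      -- Pattern matching is the check the compiler can verify:
      case lookupUser 2 of
        Just name -> putStrLn name
        Nothing   -> putStrLn "no such user"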
Another example is bounds checking. Languages that stash the array length somewhere and verify against it on access eliminate yet another class of bug without introducing any programmer overhead (although there generally is some runtime overhead).
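A small Haskell example of that (`Data.Array`, from the array package bundled with GHC, keeps its bounds next to the elements):

    import Data.Array

    main :: IO ()
    main = do
      -- listArray records the bounds (0, 2) alongside the data
      let arr = listArray (0, 2) [10, 20, 30] :: Array Int Int
      -- every (!) access is checked against those stored bounds
      print (arr ! 1) -- 20
      -- (arr ! 5) would throw an "index out of range" error instead of
      -- silently reading past the end of the buffer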
The whole point of "no nullability bombs" is to make it obvious in the type system when the value might not be present, and to force that to be handled.
Javascript:
let x = foo();
if (x.bar) { ... } // might blow up
Typescript:
let x = foo(); // type of x is Foo | undefined
if (x === undefined) { ...; return; } // I am forced to handle this
if (x.bar) { ... } // this is now safe, as Typescript knows x can only be a Foo now
(Of course, languages like Rust do that cleaner, since they don't have to be backwards-compatible with old Javascript. But I'm using Typescript in hopes of a larger audience.)
If you eliminate the odd integers from consideration, you've eliminated an entire class of integers. Yet the set of remaining integers is of the same size as the original (n ↦ 2n maps the integers one-to-one onto the evens).
Programs are not limited; the number of Turing machines is countably infinite.
When you say things like "eliminate a class of bugs", that plays out in the abstraction: an infinite subset of that infinity of machines is eliminated, leaving an infinity.
How you then sample from that infinity in order to have something which fits on your actual machine is a separate question.
How do you count how many bugs a program has? If I replace the Clang code base with a program that always outputs a binary that prints hello world, how many bugs is that? Or if I replace it with a program that exits immediately?
Maybe another example is compiler optimisations: if we say that an optimising compiler is correct only if it outputs the most efficient (in number of executed CPU instructions) output program for every input program, then every optimising compiler is buggy. You can always make it less buggy by making more of the outputs optimal, but you can never satisfy the specification on ALL inputs, because of undecidability.
Because the number of states a program can be in is so huge (when you consider everything that can influence how a program runs, and the context where and when it runs), it is practically infinite for current computational power. But yes, it is theoretically finite and can even be calculated.