Years ago, I introduced Flow gradual typing (JS) to a team. It has explicit annotations for type variance, which came up when building bindings to JS libraries, especially in the early days.
I had a loose grasp on variance then, didn't teach it well, and the team didn't understand it either. Among other things, it made even very early and unsound TypeScript pretty attractive just because we didn't have to annotate type variance!
I'm happy with Rust's solution here! Lifetimes and Fn types (especially together) seem to be the main place where variance comes up as a concept that you have to explicitly think about.
Note that this works because Rust doesn't have inheritance, so variance only comes up with respect to lifetimes, which don't directly affect behavior/codegen. In an object-oriented language with inheritance, the only type-safe way to do generics is with variance annotations.
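To make that concrete, here's a minimal sketch of where Rust's lifetime variance quietly does its job. The function name `shorter_of` is just illustrative; the point is that `&'a T` is covariant in `'a`, so a `'static` reference coerces to a shorter lifetime with no annotation from you:

```rust
// Covariance in practice: &'a T is covariant in 'a, so a reference that
// lives for 'static can be used anywhere a shorter lifetime is expected.
fn shorter_of<'a>(a: &'a str, b: &'a str) -> &'a str {
    if a.len() <= b.len() { a } else { b }
}

fn main() {
    let local = String::from("hi");
    // "a static string" is &'static str; it silently coerces to the
    // shorter lifetime tied to `local` because of covariance.
    let s = shorter_of("a static string", &local);
    assert_eq!(s, "hi");
    println!("{}", s);
}
```

The places you do have to think about it are the exceptions: `&mut T` is invariant in `T`, and `fn(T)` is contravariant in its argument, which is exactly the lifetimes-plus-Fn-types combination mentioned above.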
This is very disingenuous: we don't know how much spare time Sascha spent, and much of that time was likely spent learning, experimenting, and reporting issues to Slang.
It's not disingenuous: we have his and my git commit logs. I also had to deal with issues in rust-gpu. He is an expert in both the space and his project; I had never seen his project nor written any graphics shaders before. You can never get an apples-to-apples comparison, but this is the closest I have personally experienced.
Once upon a time, Flash, Java, Silverlight, ActiveX, etc. ruled the web.
I think the world is _much_ better off today, with a common language and platform. I don't think those big third party runtimes could survive in the browser in today's threat environment.
Unfortunately "common" means being what Google wants, and they abuse their market position to push whatever through (advertising support in an HTML client... what?)
You might be mixing up “sampled textures in a single call” with “total textures loaded”. Sampled-texture limits affect the complexity of your shader and have nothing to do with loading content from elsewhere.
The CPU-specific intrinsics stabilized a little while ago, for cases where you need SIMD on a specific platform at least. A lot of crates like glam seem to only support SIMD on x86/x86-64 for that reason.
I know that `wide`, `faster`, and `simdeez` are options for crates to do cross-platform SIMD in safe Rust. Eventually we'll have `std::simd`, but I don't know what the timeline is or what the blockers are.
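A minimal sketch of the stabilized `std::arch` path mentioned above (the function name `add4` is mine): an SSE add of four f32 lanes on x86-64, with a scalar fallback on other architectures — which is roughly the shape of the platform-gating crates like glam end up doing:

```rust
// CPU-specific path: stable SSE intrinsics from std::arch, x86_64 only.
// SSE is part of the x86_64 baseline, so no runtime feature check needed.
#[cfg(target_arch = "x86_64")]
fn add4(a: [f32; 4], b: [f32; 4]) -> [f32; 4] {
    unsafe {
        use std::arch::x86_64::*;
        let sum = _mm_add_ps(_mm_loadu_ps(a.as_ptr()), _mm_loadu_ps(b.as_ptr()));
        let mut out = [0.0f32; 4];
        _mm_storeu_ps(out.as_mut_ptr(), sum);
        out
    }
}

// Scalar fallback for every other architecture.
#[cfg(not(target_arch = "x86_64"))]
fn add4(a: [f32; 4], b: [f32; 4]) -> [f32; 4] {
    let mut out = [0.0; 4];
    for i in 0..4 {
        out[i] = a[i] + b[i];
    }
    out
}

fn main() {
    let r = add4([1.0, 2.0, 3.0, 4.0], [0.5, 0.5, 0.5, 0.5]);
    assert_eq!(r, [1.5, 2.5, 3.5, 4.5]);
    println!("{:?}", r);
}
```

`std::simd` would replace all of this with one portable type, which is why people are waiting on it.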
Algebraic effects use delimited continuations (and this appears to match toss/return). Call/cc captures an undelimited continuation. Totally different.
These three comments as a chain are hilariously close to the paper ‘On the Expressive Power of User-Defined Effects: Effect Handlers, Monadic Reflection, Delimited Control’ by Forster, Kammar, Lindley, and Pretnar. It’s relatively new and evaluates the expressive power of the three concepts, which, setting aside preservation of typability during translation, are equivalent.
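To illustrate the "delimited" part: here's a toy encoding (all names mine, and this is one of many possible encodings, not anyone's library API). A computation that performs an effect hands the *rest of itself up to the handler* to that handler as a closure; the handler, acting as the delimiter, decides how to resume. Nothing outside the handler is ever captured, which is exactly what call/cc would do differently:

```rust
// A computation either finishes, or performs an "ask" effect, handing
// its delimited continuation (the rest of the computation, up to the
// handler) to whoever runs it.
enum Step<T> {
    Done(T),
    Ask(Box<dyn FnOnce(i32) -> Step<T>>),
}

// A computation that performs `ask` twice and sums the answers.
fn program() -> Step<i32> {
    Step::Ask(Box::new(|x| Step::Ask(Box::new(move |y| Step::Done(x + y)))))
}

// The handler is the delimiter: it interprets each `Ask` by resuming
// the captured continuation with a value.
fn run(mut step: Step<i32>, answer: i32) -> i32 {
    loop {
        match step {
            Step::Done(v) => return v,
            Step::Ask(k) => step = k(answer),
        }
    }
}

fn main() {
    let result = run(program(), 21);
    assert_eq!(result, 42);
    println!("{}", result);
}
```

With an undelimited continuation, `k` would capture the whole rest of the program, including `run`'s own loop, which is why the two are not interchangeable.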
There's nothing on this site using calc as part of scrolling the content. It looks like the background image is fixed in place with some JS, which is probably the source of the performance issues.
The questionable stuff is definitely that everything is a table and that the links on the bottom of the page are divs with onclick handlers... and that selection is turned off?
This has nothing to do with scrolling. They're fixed CSS rules. They remind me of the tricks we used to have to use for vertically centering things.
It seems like these rules are from the site's page load animation. You can see it if you refresh the page part way down. It's not a great way to achieve that effect.