I’m curious why the author chose to model this as an assertion stack. The developer must still remember to consume the assertion within the loop. Could the original example not be rewritten more simply as:
```
const result: ast.Expression[] = [];
p.expect("(");
while (!p.eof() && !p.at(")")) {
  const subexpr = expression(p);
  assert(p !== undefined); // << here
  result.push(subexpr);
  if (!p.at(")")) p.expect(",");
}
p.expect(")");
return result;
```
I assume you meant to write `assert(subexpression != undefined)`?
This is resilient parsing --- we are parsing source code with syntax errors, but still want to produce a best-effort syntax tree. Although an expression is required by the grammar, the `expression` function might still return nothing if the user typed some garbage there instead of a valid expression.
However, if we do return nothing due to garbage, there are two possible behaviors (a code sketch of both follows the examples below):
* We can consume no tokens, guessing that what looks like "garbage" from the perspective of the expression parser is actually the start of the next, larger syntax construct:
```
function f() {
let x = foo(1,
let not_garbage = 92;
}
```
In this example, it would be smart to _not_ consume `let` when parsing `foo(`'s arglist.
* Alternatively, we can consume some tokens, guessing that the user _meant_ to write an expression there:
```
function f() {
let x = foo(1, /);
}
```
In the above example, it would be smart to skip over `/`.
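To make the two recovery behaviors concrete, here is a minimal TypeScript sketch of a resilient `expression` entry point choosing between them. The parser interface (`at`, `peek`, `bump`, `error`) and the `EXPR_FIRST`/`RECOVERY_SET` names are my own illustrative assumptions, not the original author's API:
```
// Hypothetical resilient-parsing sketch; the Parser methods, EXPR_FIRST, and
// RECOVERY_SET are illustrative names, not the original author's API.
type Expression = { kind: "literal" | "error"; text: string };

interface Parser {
  eof(): boolean;
  at(token: string): boolean;     // does the current token equal `token`?
  peek(): string;                 // look at the current token
  bump(): string;                 // consume and return the current token
  error(message: string): void;   // record a diagnostic, keep parsing
}

// Tokens that plausibly begin an expression (grossly simplified).
const EXPR_FIRST = /^[A-Za-z0-9_"(]/;
// Tokens that likely start the *next, larger* construct instead.
const RECOVERY_SET = new Set(["let", "fn", "}", ")"]);

function expression(p: Parser): Expression | undefined {
  if (!p.eof() && EXPR_FIRST.test(p.peek())) {
    return { kind: "literal", text: p.bump() };  // happy path (stubbed out)
  }
  p.error("expected an expression");
  if (!p.eof() && !RECOVERY_SET.has(p.peek())) {
    // Behavior 2: the user probably *meant* an expression here (`foo(1, /)`),
    // so skip the stray token and let the caller continue with `,` / `)`.
    p.bump();
  }
  // Behavior 1: for tokens like `let` we consume nothing, so an outer rule
  // can still pick up `let not_garbage = 92;` as the next statement.
  return undefined;
}
```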
The golf swing is extremely non-intuitive for several reasons, not the least of which is the physics of trying to swing a hunk of metal at the end of a 3-foot rod around your body at 100mph. Fixing one thing will often send something else out of whack. Improving the golf swing requires system-level analysis, trying new things to see what else is affected, and then fixing the regressions.
"Reverse every natural instinct and do the opposite of what you are inclined to do, and you will probably come very close to having a perfect golf swing.”
Given the Safari team has been the major driving force behind support for wide gamut in web browsers (for the very obvious reason that all Apple devices ship with wide-gamut displays), I am extremely suspicious of the author’s assertion that Safari is ignoring an embedded ICC profile while Chrome and Firefox are doing the right thing.
I think it’s far more likely that whatever chain of open-source image modification tools the author is using has written out pixel values in a different colorspace than the one named in the embedded ICC profile.
But if the author is absolutely confident in their analysis, they are welcome to file a bug report: https://bugs.webkit.org/
I seem to recall Safari doing the right thing for JPEGs when I was experimenting with publishing red-cyan stereograms [1], where an sRGB (255,0,0) becomes something like (187, 16, 17) on a wide-gamut display because the sRGB red is less saturated than the P3 red. It looks right for a normal photo, but for the anaglyph that little bit of blue and green leaks through and makes ghosts. Embedding a P3 profile in the JPEG solves the problem. You run into the same problem with print, and I ended up addressing that by applying the color profile to the R and L images, doing the anaglyph blend, then attaching the native color profile of the printer -- it probably isn't quite right, but the colors are always going to be off for an anaglyph anyway, and if I go back to that project I'll design a color grade that pushes the scene away from pure-red and pure-cyan-spectrum colors.
What gets me is that the image he's publishing is not really a PNG kind of image.
Another reader is claiming the gamma correction value is inverted and Firefox is ignoring it. Which seems plausible. I know when I implemented PNG I had some issues wrapping my head around the gamma correction function.
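For anyone else who has trouble keeping the direction straight: the gAMA chunk stores the encoding exponent times 100000, and a decoder has to apply the reciprocal when mapping samples back to display intensity. Mixing up which direction you are in is the classic mistake. A tiny sketch (my own, not from the article) of how the two interpretations diverge:
```
// PNG's gAMA chunk follows the power-function model from the spec:
//   stored_sample = display_intensity ^ gamma, with gamma = gAMA / 100000.
// The easy mistake is raising to `gamma` instead of `1 / gamma` when decoding.
const GAMA_FIELD = 45455;            // a typical value for ~2.2-gamma encoded data
const gamma = GAMA_FIELD / 100000;   // 0.45455

function decodeSample(stored: number): number {
  // `stored` and the return value are both normalized to 0..1
  return Math.pow(stored, 1 / gamma);   // correct: undo the encoding exponent
}

function decodeSampleInverted(stored: number): number {
  return Math.pow(stored, gamma);       // the inverted (wrong) direction
}

// A mid-grey sample shows how far apart the two interpretations land:
console.log(decodeSample(0.5));         // ≈ 0.218
console.log(decodeSampleInverted(0.5)); // ≈ 0.730
```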
This might be true, but I would hope the web standard is defined well enough that browsers can at least fail in the same way, regardless of which browser is the most "correct" here.
"correct" in fail, in the same way across browsers? that's hilarious. I forget that throughout the history of the internet the one thing we've been able to depend on is different browsers behaving the same way
Experimental is just the unreleased versions of the browsers. Like a beta or alpha for the next release. So ofc they're usually gonna be ahead.
Every year the major browsers get together and agree on a set of "focus areas" that are pain points for browser interoperability. They've been doing it since 2021. I posted the 2025 results. All browsers reached about 99% support for the selected features.
While not disputing in any way the truth of what you're saying: I'm a product manager who leans toward Safari, while the devs I work with use Chrome almost exclusively, and we have an unwritten agreement that when it comes to display/layout issues, I'll double check in Chrome before filing the bug. It's only been Safari-exclusive a handful of times, but that's enough that it's annoying for everyone when it's just me.
I guess the question here has to be: is this actually a bug in Safari? Or is it a bug in Chrome, that people (whether that's your people or third parties) have just been working around in a way that doesn't work in Safari?
And this is a big part of the problem with having Chrome become such a dominant force on the web: people assume that it's correct when Safari displays something differently. And people give instructions and documentation for how to do various things "in HTML/CSS/JS" when they've never tested them in anything other than Chrome, so if Chrome's behavior deviates from the spec there, someone implementing those instructions on Safari will see them fail, and assume incorrectly that it's Safari that's wrong.
Note that I am not saying this is what is happening in any specific case—but because Chrome is so dominant, enough people treat it as the de-facto standard that over time, it becomes a near-inevitability that this will happen in some cases.
I've run into Safari exclusive issues before as well around color transparency, but tbh I'm surprised it comes up that often. Modern IDEs support linters that warn you whenever you are using a CSS feature that isn't supported by all modern browsers. You can even set the year you wanna support (e.g. all major browser versions since 2023). Between that awesome tooling and rapidly improving browser support for web standards, these kinds of issues feel extremely rare.
Except for printing. Printing has and seemingly always will f'n suck. Unfortunately WPT doesn't have a good way of testing for print-related features
If I’m not mistaken, they only select a couple of features to work on every year—the ones they already agree on to begin with—and the high interoperability shown in the link only concerns those few select features.
For example, JPEG XL has been proposed for Interop a few times before, but never selected. Therefore, Safari remains the only major browser to support it so far.
And yes, JPEG XL support is often the most requested feature, and the major browsers have responded. Google and Firefox are both willing to take it on, but Firefox's biggest concern is with the reference decoder, which has some major security flaws. They basically want to wait until libjxl/jxl-rs is performant enough.
It took Mozilla two years to decide they were 'neutral', citing a range of vague considerations. It took them another year to say that actually their 'primary concern has long been the increased attack surface of the reference decoder', without pointing out any specific major security flaws, and that they're okay with an implementation in Rust… which had already been proposed by the JXL team many months ago. And let's not even talk about Chrome.
Hopefully they've both finally settled on a reasoning and will stick with it until the end.
Yeah, I guess I wasn't following that closely, but I know their previous position was that there weren't enough benefits to .jxl to justify supporting another raster image standard.
There have been a lot of major changes since then though, like Apple fully adopting JPEG XL and PDF announcing support as well.
I know many in the industry also think that AV2 might be a huge game changer and wanna wait and see how that ends up before choosing what standards to adopt.
You can try. I'm on Safari Version 18.6 (20621.3.11.11.3) [Sequoia 15.6.1] on my Mac, unsure of the version on my iPhone and iPad, but all of them ignore the ICC profile.
Again, my suspicion is that you are actually seeing the ICC profile being applied correctly, and it is the pixel values in your image that are incorrect.
A good test would be to run a single 100% sRGB red pixel through your image processing pipeline, and then inspecting the resulting PNG file in a hex editor to see what value is encoded.
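If it helps, here is a rough Node/TypeScript sketch of that check (my own; the file name is hypothetical). It walks the chunks of a small PNG, notes whether color metadata (iCCP/sRGB/gAMA) is present, and prints the raw bytes of the first pixel so they can be compared with the 255,0,0 that was fed in. It assumes an 8-bit RGB/RGBA image whose first scanline uses filter type 0; if the printed filter type isn't 0, the pixel bytes would need unfiltering first.
```
import { readFileSync } from "node:fs";
import { inflateSync } from "node:zlib";

const path = process.argv[2] ?? "red-1x1.png";   // hypothetical test file
const buf = readFileSync(path);
let off = 8;                                     // skip the 8-byte PNG signature
const idat: Buffer[] = [];

while (off + 8 <= buf.length) {
  const len = buf.readUInt32BE(off);
  const type = buf.toString("ascii", off + 4, off + 8);
  const data = buf.subarray(off + 8, off + 8 + len);
  if (type === "iCCP" || type === "sRGB" || type === "gAMA") {
    console.log(`color metadata present: ${type} chunk, ${len} bytes`);
  }
  if (type === "IDAT") idat.push(data);
  if (type === "IEND") break;
  off += 8 + len + 4;                            // length + type + data + CRC
}

// IDAT holds the zlib-compressed scanlines: one filter byte, then the samples.
const raw = inflateSync(Buffer.concat(idat));
// For an unfiltered scanline (filter type 0) the next bytes are the literal
// R, G, B values the pipeline actually wrote.
console.log("filter type:", raw[0]);
console.log("first pixel RGB:", [...raw.subarray(1, 4)]);
```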
Interesting: for me, the image quadrants display correctly in Safari, but there is a horizontal white line between the top and bottom left quadrants. You're not seeing that?
I see the white line on mobile, but not on desktop, though my OS versions are wildly different too, so hard to narrow down exactly what it might be there.
Apple essentially invented color matching on personal computers back in the classic Mac OS days; it's hard to believe that after all this time they're not dealing with color correctly.
The WebKit blog from 2016:
> WebKit color-matches all images on both iOS and macOS. This means that if the image has a color profile, we will make sure the colors in the image are accurately represented on the display, whether it is normal or wide gamut. This is useful since many digital cameras don’t use sRGB in their raw format, so simply interpreting the red, green and blue values as such is unlikely to produce the correct color. Typically, you won’t have to do anything to get this color-matching. Nearly all image processing software allows you to tag an image with a color profile, and many do it by default.
It’s absolutely debunked, unless you think there’s somehow a chance that the guy who was classmates with the murdered M.I.T. professor, attended Brown University for postgrad, and was seen on CCTV at Brown is somehow not the Brown university shooter.
It was posited baselessly and then debunked by evidence.
My understanding from the episode is that the plan was never for the public lands to grow in value. The private lands were given away or sold as incentives, and the owners could choose to capitalize off them immediately (e.g. as soon as the railroad reached nearby) or hold onto them for profit.
But this is verbatim what the article said; you should have read it:
"This checkerboard pattern allowed the government to keep all the undeveloped sections in between and wait for them to go up in value before turning around and selling them to developers."
99pi articles are transcripts of the podcast episodes. I said what I remembered hearing when I listened to the podcast episode last week, which I apparently misremembered.
I believe the removal of the "experimental" nomenclature is just an indication that Rust is "here to stay" in the kernel (which essentially means that developers can have confidence investing in writing Rust based drivers).
The overall rules haven't changed.
Strictly speaking they've always been obligated to not break the Rust code, but the R4L developers have agreed to fix it on some subsystems' behalf, IIRC, so Rust can be broken in the individual subsystem trees. But I think it's been the case all along that you can't send it to Linus if it breaks the Rust build, and you probably shouldn't send it to linux-next either.
That was way way blown out of proportion. https://lwn.net/Articles/991062/ has a much less somber quote from just three months later:
> Ted Ts'o said that the Rust developers have been trying to avoid scaring kernel maintainers, and have been saying that "all you need is to learn a little Rust". But a little Rust is not enough to understand filesystem abstractions, which have to deal with that subsystem's complex locking rules. There is a need for documentation and tutorials on how to write filesystem code in idiomatic Rust. He said that he has a lot to learn; he is willing to do that, but needs help on what to learn
Ted was being an asshole; nobody was asking him to learn Rust. He completely misinterpreted the point being made and then proceeded to go on an angry rant for like 5 minutes in the middle of a presentation, which is just kind of disrespectful.
Linus Torvalds greenlit Rust in the kernel, and as a BDFL, he is the one to decide. He has no reason to be upset with any decision because ultimately, all decisions are his own.
If he didn't want Rust in the kernel, he would have said it, and there would have been no Rust in the kernel. It is also the reason why there is no C++ in the kernel, Linus doesn't like C++. It is that simple.
And I respect that. Linux is hugely successful under Linus's direction, so I trust his decisions, whatever opinion I have about them.
I only typed Linus Torvalds, not realizing the person was asking for a "movie character" quite literally, lol. I thought "you mean the guy famous for (among many things..) his behavior?"
This is the first time I've had a comment hit -3 which, I mean, I get it!!
I can't find the movie, but there was an angry guy who stated that he should not have to learn Rust, and nobody should. And Rust should not be in the kernel.
It's in a talk about filesystems in Rust for Linux. Basically, the Rust maintainer who I think stepped down was talking about how the C code base for VFS has a lot of documented but complex orderings, where you have to take a lock or pin something before accessing an inode (or something) one way but not the other. They made a bunch of Rust types so you basically could not produce an illegal ordering, and got heckled pretty hard by the "bearded guy". They basically ran out the presentation time with heckling, and I think the Rust maintainer quit a few months later (over many similar instances of this; don't quote me on the timeline here).
IIRC, the point was actually that they were undocumented in many cases, and that the Rust developers were willing to take on a lot of work but would need help understanding all of the hidden and implicit "rules", yet they had received pushback for simply asking questions or asking for the documentation to be made comprehensive.
Okay, I felt that they were undocumented, but I was trying to be charitable to the bearded man :D. I didn't have time to watch the video again haha. But yeah, the pushback at the suggestion was very surprising.
From the kernel side, I meant; I wasn't clear. Now I understand the meaning of "don't break Rust code". Happy that Rust's journey in the kernel has been successful so far. The pace seems strong.
What they mean is that the Linux kernel has a long-standing policy to keep the whole kernel compilable on every commit, so any commit that changes an internal API must also fix up _all_ the places where that internal API is used.
While Rust in the kernel was experimental, this rule was relaxed somewhat to avoid introducing a barrier for programmers who didn't know Rust, so their work could proceed unimpeded while the experiment ran. In other words, the Rust code was allowed to be temporarily broken while the Rust maintainers fixed up uses of APIs that were changed in C code.
I guess in practice you'd want to have Rust installed as part of your local build and test environment. But I don't think you have to learn Rust any more (or any less) than you have to learn Perl or how the config script works.
As long as you can detect if/when you break it, you can then either quickly pick up enough to get by (if it's trivial), or you ask around.
The proof of the pudding will be in the eating; the Rust community had better step up in terms of long-term commitment to the code they produce, because that is the thing that will keep this code in the kernel. This is just first base.
No matter how hard you try to paint it as such, Rust is not a tribe. This is such a weird characterization.
Rust contributions to the Linux kernel were made by individuals, and are very obviously subject to the exact same expectations as other kernel contributions. Maintainers have responsibilities, not “communities”.
Not only that, those individuals were already Linux kernel contributors. This is not an amorphous external group forcing their will on Linux, it's Linux itself choosing to use Rust.
Learn Rust to a level where all cross-language implications are understood, which includes all `unsafe` behaviour (...because you're interfacing with C).
Rust's borrowing rules might force you to make different architecture choices than you would with C. But that's not what I was thinking about.
For a given Rust function, where you might expect a C programmer to need to interact due to a change in the C code, most of the lifetime rules will have already been hammered out before the needed updates to the Rust code. It's possible, but unlikely, that the C programmer is going to need to significantly change what is being allocated and how.
There is, I understand, an expectation that if you do make breaking changes to kernel APIs, you fix the callers of such APIs. Which has been a point of contention: if a maintainer doesn't know Rust, how would they fix Rust users of an API?
The Rust for Linux folks have offered that they would fix up such changes, at least during the experimental period. I guess what this arrangement looks like long term will be discussed ~now.
Without a very hard commitment that is going to be a huge hurdle to continued adoption, and kernel work is really the one area where Rust has an actual place. Everywhere else you are most likely better off using either Go or Java.
Rust is being used a lot in video and audio processing where C and C++ had been the main players. Fixed-latency streaming is not really the best place for Go, Java, or Python.
I'd say no: access to a larger pool of programmers is an important ingredient in the decision of what you want to write something in. Netscape pre-dated Java, which is why it was written in C/C++, and that is why we have Rust in the first place. But today we do have Java which has all of the rust safety guarantees and then some, is insanely performant for network code and has a massive amount of mindshare and available programmers. For Go the situation is a bit less good, but if you really need that extra bit of performance (and you almost never really do) then it might be a good choice.
> But today we do have Java which has all of the rust safety guarantees and then some, is insanely performant for network code and has a massive amount of mindshare and available programmers.
I'm not entirely convinced that Java has much mindshare among system programmers. During my 15-year career in the field I haven't heard "Boy, I wish we wrote this <network driver | user-space offload networking application | NIC firmware> in Java" once.
I've seen plenty of networking code in Java, including very large scale video delivery platforms, near-real-time packet analysis, and all kinds of stuff that I would have bet wasn't even possible in Java. If there is one thing that I'm really impressed by, it is how far Java has come performance-wise, from absolutely dog slow to being an insignificant fraction away from low-level languages. And I'm not a fan (to put it mildly), so you can take that as 'grudging respect'.
The 'factory factory' era of Java spoiled the language thoroughly for me.
I never said it is due to performance considerations (although developers in projects like the ones you described tend to always become experts in things like tuning the Java GC). It is more like "If we wanted to write this in a shitty verbose OOP language we would just use C++".
While it might be possible to get the performance required for a web rendering engine out of Java, I think you'd be miserable writing Java in the style you'd need to get that kind of performance. And you'd likely still end up with issues around memory usage, app bundle size (remember that browsers ship to client devices), GC pauses, and integrating with the JavaScript engine.
Absolutely. Again, not a fan of the language. But you can transcode tens of video streams on a single machine into a whole pile of different output resolutions and saturate two NICs while you're at it. Java has been 'fast enough' for almost all purposes for the last 10 years at least, if not longer.
The weak point will always be the startup time of the JVM which is why you won't see it be used for short lived processes. But for a long lived process like a browser I see no obstacles.
It’s only fast enough if it is as fast or faster than the alternatives. Otherwise it is wasting battery power, making older/cheaper computers feel sluggish and increasingly inefficient.
I'm using it for frontend web development and it's perfect. Much better than Go or Java would be. It's pretty wild that the language I use in the browser is also in the kernel.