This is amazing! I've seen a lot of work around Rust and JS/JSX/TS/TSX (notably SWC[0], which is moving to be a Rust-based version of Babel), but not so much around CSS.
My genuine hope is that this will replace PostCSS sooner rather than later. PostCSS isn't the most performant thing I've worked with, and because many PostCSS plugins rely on making multiple passes over the AST, builds can slow down significantly as a result.
For that, however, we will need some kind of plugin system. I know the SWC[0] project has had some struggles with this, as its current JS API relies on serializing the AST back and forth. I wonder if this forgoes a JS API entirely, or if that is something planned for, say, 1.5?
I have been thinking about implementing the CSS OM spec as the JS API. This is the same API that browsers expose for manipulating stylesheets. The advantage of this is that we don't need to invent something custom. Still thinking through options. https://github.com/parcel-bundler/parcel-css/issues/5
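As a rough sketch of what that could look like for a plugin author - using the standard CSSStyleSheet/CSSStyleRule interfaces from the CSS OM spec, not any actual Parcel CSS API (that part is exactly what's still undecided):

    // A plugin would receive an object implementing the browser's CSS OM
    // interfaces instead of a custom AST. `sheet` here is hypothetical.
    function inlineBrandColor(sheet: CSSStyleSheet): void {
      for (const rule of Array.from(sheet.cssRules)) {
        if (rule instanceof CSSStyleRule) {
          // Read and mutate declarations exactly as you would in a browser.
          if (rule.style.getPropertyValue("color") === "var(--brand)") {
            rule.style.setProperty("color", "#0066ff");
          }
        }
      }
    }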
That said, I think I'd want to keep the use cases for JS plugins limited, because it will affect performance significantly. So, if it's something that exists in an official CSS spec somewhere (even if draft), or a really common transformation, it should be implemented in Rust. An example of this is the support for the CSS nesting spec. Ideally, we'd try to keep the amount of custom syntax being invented around CSS to a minimum and stick to standard CSS syntax wherever possible though.
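For reference, the nesting spec's syntax and the flattened CSS it lowers to look roughly like this (hand-written illustration, not actual tool output):

    // Input using the CSS Nesting draft's `&` syntax...
    const nested = `
      .card {
        color: black;
        & .title { font-weight: bold; }
      }
    `;

    // ...and a flattened equivalent that today's browsers understand.
    const flattened = `
      .card { color: black; }
      .card .title { font-weight: bold; }
    `;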
Separate question that may seem unrelated, but I'm curious.
How did you kinda "learn" how to read these specs effectively? I can read them, and I think I can reasonably understand them, but I feel like I have little confidence in saying, "Yes, I get this."
Is there anything you point to that helps? Tips or recommendations?
I'm trying to learn how to parse these standards docs better myself.
For a spec about a browser feature, "getting it" can mean a few different things.
1. Understanding the purpose of the feature ("why/when would I use this?")
2. Understanding how to implement the feature
3. Understanding how to use the feature
4. Understanding the feature's "corner cases" (surprising implications, cases where it doesn't do what you'd expect, etc.)
5. Understanding why the feature works the way it does (instead of some other way)
Most web specs really only explain how to implement a feature, and even then they're not great at it, because they do such a poor job of explaining the purpose of the feature.
Assuming that you, like most of us, aren't working on implementing a browser, that means web specs are mostly unhelpful to you. It's almost completely beyond the purpose of a spec to teach you how to use a feature, what its corner cases are (which are often unknown at the time a spec is written), or why the specification says what it says.
This is an area where the web spec community has made some improvements in recent years. Nowadays, it's understood that a new proposal shouldn't just provide a specification, but also a separate "explainer" document that explains the purpose of the feature and helps persuade the other browser vendors to implement it. ("This will be really cool, and here's why…")
At a minimum, specs nowadays often include a non-normative "Motivation" section, as the CSS Nesting spec does. https://www.w3.org/TR/css-nesting-1/ I think you'll find that you can "get" that spec much better than you can the CSS OM spec https://www.w3.org/TR/cssom-1/ which is old enough to buy alcohol and doesn't include a "Motivation" section.
In addition to dfabulich's excellent comment, I'd also suggest: read specs of tech you're most familiar with first, rather than specs of new cutting edge stuff you're curious about. Doing that will give you huge amounts of extra context because you're just reading something you already know expressed in a different format.
When a site's CSS is well written, by a capable web developer who knows what they are doing, such a tool would not be necessary. People pump out megabytes of JavaScript, but then they worry about a few kilobytes of CSS being saved by compressing it, at the same time making it less readable by minifying it. (We are not yet shipping hundreds of kilobytes of CSS, are we?!)
When there is a need for a tool that minifies CSS, then people seriously need to ask themselves how it can be that they have that much CSS. How much redundant CSS can you accumulate? Was there no coherent styling idea or strategy, so that everything on the page needs separate styling?
What many forget is that data is often sent gzip-compressed anyway, which naturally takes advantage of repeated parts. Text usually compresses pretty well, especially something like a description language with many repeating parts. It is great that Parcel CSS is faster than some other tool. However, for me, these kinds of tools are merely treating the symptoms of not doing styling properly. I'm glad that I got to know the web when you could simply look at all the source of a page and learn from it, instead of facing a multi-megabyte "minified" script and "minified" CSS.
Oh my ... Does one need all of that? Isn't there a process of reducing it to only what one needs? I think there was something like that in the past. Not sure how well that works.
I am more talking about "Only ever include the CSS rules we need." instead of "Let's take the whole of Bootstrap! Oh damn, we have to somehow reduce it!".
If memory serves me right, you could compile the Bootstrap library and choose the parts you needed. Not sure if that is still true for Bootstrap 5. If it is not possible, it stands to reason that perhaps one should not use Bootstrap 5 and should instead write good styling oneself, without all the stuff that will never be used on the site anyway.
CSS can be tricky, but in the end, reasonable styling is usually not that hard to do. Even responsiveness can mostly be done without media queries these days, unless you want content to change when the layout changes. It used to be much harder, when browser standards were not as far along. Nowadays we have flex and grid and more standardized interpretations of CSS rules in browsers. We can make navigations, menus, tiles and whatever else responsive without that much work put in.
From the Parcel CSS page: it does 'tree shaking', which automatically eliminates unused code. No details on what that implies, though. Tools like PurgeCSS (used by Tailwind pre-v3) comb through the HTML for matching selectors, but I'm not sure if Parcel does the same thing.
Anyway, as other comments said, these kinds of tools nowadays are more 'transformers' than mere 'minifiers'.
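For the curious, the PurgeCSS-style approach mentioned above boils down to something like this (a deliberately crude sketch of the principle; real tools parse both the HTML and the CSS properly, and whether Parcel CSS works this way I don't know):

    // Collect every class token in the markup, then drop CSS rules whose
    // class selectors never appear there.
    function purge(css: string, html: string): string {
      const usedClasses = new Set(
        Array.from(html.matchAll(/class="([^"]*)"/g)).flatMap((m) => m[1].split(/\s+/))
      );
      return Array.from(css.matchAll(/([^{}]+)\{[^}]*\}/g))
        .filter((m) => {
          const classes = m[1].match(/\.[\w-]+/g) ?? [];
          // Keep rules with no class selectors, or whose classes are all used.
          return classes.every((c) => usedClasses.has(c.slice(1)));
        })
        .map((m) => m[0])
        .join("");
    }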
That's really nice, but to me it still sounds like the wrong way to go. Instead of reducing something that is too much, that which is too much should not have been added in the first place. We are doing work here, removing what we have previously added.
I wonder how much ineffective styling still remains after "tree shaking". I guess the fancy tree shaking is out of luck if we add unnecessary styling to elements that are indeed used. It seems a really hard problem to solve automatically: transforming CSS in a way that takes care of redundancies and unnecessary styling while still maintaining the readability that well-crafted CSS can have.
> that which is too much should not have been added in the first place.
The problem is, with CSS you don't know if you've added too much. As the project evolves and code and styles change, there's literally no way of knowing if a certain CSS rule is still in use. Well, except regexp'ing the whole project looking for matches.
You can switch to WindiCSS or Tailwind and use the optimized output. That is usually a lot shorter.
Tailwind actually has its own JIT compiler for development, since the old, full CSS was extremely long. Windi always creates small CSS, as far as I remember.
As someone who regularly adds custom CSS to sites to fix various UI issues/annoyances, I can say that I'm definitely not a fan of minified CSS classes.
> In addition to minification, Parcel CSS handles compiling CSS modules, tree shaking, automatically adding and removing vendor prefixes for your browser targets, and transpiling modern CSS features like nesting, logical properties, level 4 color syntax, and much more.
Elitism about the term "compiler" aside, I've been reading through The History of the FORTRAN Programming Language, and funnily enough, "compile" in those historic CS contexts just meant what "to compile" means in English: merging things together. It's an interesting read whatever the case.
This is not about elitism about terminology, but about sloppiness in using well established words in a specific context to mean something new. It happens rather frequently in the frontend world - which I’m sometimes a part of - and it’s incredibly confusing to all of us.
I write frontend code and backend code and do compiler stuff for fun, and I don't find it confusing. "Compiler" means completely separate things in so many contexts that I get the general picture and always need to look deeper to truly understand. Just because it's a generic term doesn't mean people shouldn't be allowed to use it, IMO.
You are right and I phrased my comment the wrong way.
The confusing part is not calling this process compilation; the confusing part is now calling something a compiler for which the consensus previously was calling it a pre-processor. Now I've started to wonder if there is something new that I wasn't aware of, like a browser-supported, optimized binary style format that we can compile into.
> Parcel CSS handles compiling CSS modules, tree shaking, automatically adding and removing vendor prefixes for your browser targets, and transpiling modern CSS features like nesting, logical properties, level 4 color syntax, and much more.
Had you taken the time to open the link, the 2nd paragraph states their capabilities succinctly:
> Parcel CSS has significantly better performance than existing tools, while also improving minification quality. In addition to minification, Parcel CSS handles compiling CSS modules, tree shaking, automatically adding and removing vendor prefixes for your browser targets, and transpiling modern CSS features like nesting, logical properties, level 4 color syntax, and much more.
I've obviously read that. Most of those features are what we used to call CSS pre-processors, but I'll happily call that compilation if that's a thing now.
Looking forward to an SCSS compiler written in Rust, compiled to WASM and exported via a node module that no longer requires Node-version- or platform-specific binaries to be either downloaded or compiled.
Use Vite instead; it has a very nice developer experience. Webpack is a "legacy" bundler - while it is still supported, it shouldn't really be used in new projects if you value your sanity.
Webpack is under active development though, nothing legacy about it as far as I'm aware. Are you referring to abandoned third-party plugins maybe?
Vite and all the other bundlers are really awesome, but Webpack's strength - and I think this may also be the root of the issues you seem to be having with the tool - is that there's almost no magic and barely any handholding. Freedom and reliability at the cost of having to build your own configuration (it's just dumping plugin config objects in an array, really not difficult).
I have done manual configs from scratch in Rollup, and even wrote a bunch of plugins for stuff like custom CSS class minifying and web worker support, so no, I don't need handholding :) Setting it all up was even fun - everything was well documented and clear. With Webpack, on the other hand, you only have fun if you like pain.
That depends entirely on your definition of “magic”. Needing a specific incantation of seven different plugins and rules that are often conflicting and incomprehensible, just to get a basic project to build, is the same as magic to me.
I assume you mean magic == conventions or implicit behavior. I’ll take that over the Webpack mess any day.
Magic as in runtime hacks and precompiled semi-proprietary rust binaries.
Neither of which are necessarily negatives of course!
Only thing is that for every minute gained from those speedy rust builds, I lost five more on debugging stuff that used to work fine with Babel + having to learn this new language to write replacements for now unsupported plugins :)
I seriously don't get the hatred Webpack receives. Calling it legacy when it introduced stuff like module federation is unfair to say the least. Just because a lot of people use CRA or some "zero config" bundler, or because esbuild etc are picking up momentum, doesn't mean it's bad. The documentation is actually really good IMO.
The problem with using something like webpack is that you can't use it for something quick.
You can't dip into the documentation for the one thing you're trying to do; you have to spend time reading a large chunk of the documentation, then go to forums and such to try to understand the history of how that one thing you're trying to do has changed over webpack's history.
Then 5 months later when you're doing it for another project, you have to re-read all that documentation again because one other thing is different.
Compare that waste of time and effort to:
A developer is now unavailable and you are taking over the project. The project uses a modern build-system/module-loader/etc... You need to change one thing about the way the project is built.
You don't have to learn an uber-nested spaghetti config file that looks like a domain-specific language. You just read the minimal configuration the previous developer left you, and now you know what to change, or how to add the one thing you need to do. You've now saved days of work.
I agree that the hatred is probably somewhat unwarranted, and the flexibility and support Webpack has are immense - but, based on my experience, I understand some of the people and where it's coming from.
I just replaced Webpack with Vite in a 2-year-old React project with a relatively standard webpack config (SCSS, fonts, asset copying, minify etc). I was able to remove 24 devDependencies packages in total, including the whole of Babel and its related packages, and a bunch of webpack plugins that broke our build on every package upgrade (e.g. to Webpack 4, then again for Webpack 5), etc.
We now have nothing to do with Babel, nothing to do with hard-to-understand plugins that someone added because they solved a build error at the time, nothing to do with webpack - just one almost non-existent config file, instant HMR, and 3x faster builds...
One thing that always put me off about webpack is the default way it compiles your code. One of the modes is (was?) compiling code as _eval_(!!) statements, with the code as a string. It is absolutely impossible to debug such code.
You were meant to rely on sourcemaps to get something on your debugger, but despite using the latest Chrome and developer tools at the time, I could never get it to work to actually debug sites.
I know Rollup, Vite, etc. had a much easier time providing a pleasant developer experience because they rely on the browser's native ESM support, but I never could understand why webpack decided to mangle the code so badly.
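(For what it's worth, that eval behaviour is webpack's default `devtool` in development mode; switching to real source maps is a one-line change - a sketch assuming webpack 5 below.)

    // webpack.config.ts - minimal sketch, not a complete config.
    import type { Configuration } from "webpack";

    const config: Configuration = {
      mode: "development",
      // The development default is "eval", which wraps each module in an
      // eval("...") string - the behaviour described above.
      // "source-map" (or "eval-source-map") produces debuggable output instead.
      devtool: "source-map",
    };

    export default config;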
It won't go away - it will still be supported and used for many years - but the majority of new projects today won't use it.
Brief history: Rollup dramatically improved the bundler plugin and ES modules situation, then came new super-fast bundlers like esbuild, and finally solutions combining both, like Vite. Webpack is still playing catch-up with that. There are literally zero reasons to choose clunky, slow Webpack configs while alternatives exist, unless you have a legacy codebase and migrating is not an option.
I'd argue that Vite's still pretty immature. It's great if you can stick within what works with it, but webpack is still the gold standard in terms of compatibility and features, and I suspect that will be the case for another couple of years while the native code bundlers (esbuild, swc, etc) catch up with their JS counterparts.
Parcel has been around for a while, and the project currently has a lot of activity - I've made the call to migrate an old create-react-app project to Parcel 2, and in general it is going well.
The nice thing about Parcel is that it is intentionally low-config with relatively intuitive defaults, which tells me that it wouldn't be terribly difficult to move to a different bundler in the future. Webpack has gotten better, but you can quickly end up with a _lot_ of Webpack-specific config that makes future migrations hard.
That said, optimizing for future migrations shouldn't generally be your #1 priority - I also like Parcel's speed, and new things like ParcelCSS just validate that further.
Snowpack, ESBuild, ..., all promised us nice frontend compiles, yet I still turn to Webpack every time I actually want everything to work - you always hit roadblocks you can't solve in these other tools.
Parcel is becoming my go-to choice most of the time now. I’ve migrated some apps out of CRA, pure Webpack and Snowpack to it, and I have to say it was incredible.
There are some minor issues with HMR on one of those projects that I still can’t figure out. Anyway, having something that works, with JSX, with TS, really fast, in just two or three really simple steps is amazing! You don’t even need to turn your brain on.
Just a stab in the dark, but I was having HMR issues as well and it turned out to be an obscure transitive dependency conflict - details and solutions here on the outside chance it's the same thing affecting your project https://github.com/parcel-bundler/parcel/issues/6685
Sadly in my use case it is not a dependency conflict, but I'm failing to replicate it (and actually not even trying very hard). I guess it will automagically fix itself on some minor update.
The point being that, IMHO, extensions over ESBuild tend to not improve the developer experience because they have a tendency to bloat and steer towards being "webpack but esbuild", which defeats the point of esbuild, being the "anti-webpack".
Tools such as Vite aren’t designed to improve the developer experience of esbuild. They exist separately and leverage esbuild’s strong qualities – JS, TS and CSS bundling and minification.
I used Parcel for about a week a month ago. Couldn't get the HTTPS server to use my certificate and key and would have needed to write a plugin so that it wouldn't rename favicons and delete files not mentioned in the HTML file.
So: esbuild for 'no configuration', as it is significantly faster than Parcel, and Webpack for more complex stuff.
> Parcel CSS is based on the cssparser[0] Rust crate, a browser-grade CSS tokenizer created by Mozilla and used in Firefox. This provides a solid foundation, including tokenization and basic parsing. However, it does not interpret any CSS properties or at rules. That's where Parcel CSS comes in. It handles parsing each individual rule and property value, as well as minification, compilation, and printing back to CSS.
Nice too that it's a compiled language, so you get the end tool in a nice static binary. As a non-Node dev, I hate the experience of hacking on some project and having to install a giant pool of NPM stuff just to run some minifier or linter. Hound is an example of this— the guts of the project are golang, but it has a frontend that uses webpack, jest, etc: https://github.com/hound-search/hound
Which is fine, I guess; definitely use the right tool for the job. And maybe Node developers hate finding my Python projects and needing to set up a virtualenv to run them in. But all the same, I approve a direction where more of this kind of tooling is available without a build-time Node dependency.
The bigger win here is projects that increasingly do more and do it faster. You can essentially replace Babel’s hydra with TypeScript (a single dependency) and do it faster too. TS also essentially supports “preset-env” via the target property (however, you must pick the ES version, not a browser list).
The important part now is ensuring that these projects don’t just die and disappear like, say, Rich Harris’ “buble” tool (a very old “fast Babel alternative”)
Agreed. If cssparser blows chunks on your CSS, then at least you know it will probably break some of your user agents too, rather than the game of “is my tool broken or am I?”
To do the job of minifying, this minifier first reads the CSS file into an internal data structure. That process is called parsing.
All the minifying operations are done on that internal data structure.
Then it writes the result out again into a text file, ready for the browser to parse it again - hopefully the minifying step made it a bit easier for the browser to parse this minified CSS.
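A toy version of that pipeline, just to make the shape concrete (real minifiers build a full AST and handle far more syntax than this naive sketch):

    interface Declaration { property: string; value: string }
    interface Rule { selector: string; declarations: Declaration[] }

    // Parse: extremely naive, good enough only for flat `a { b: c; }` input.
    function parse(css: string): Rule[] {
      const rules: Rule[] = [];
      for (const match of css.matchAll(/([^{}]+)\{([^}]*)\}/g)) {
        const declarations = match[2]
          .split(";")
          .map((d) => d.trim())
          .filter(Boolean)
          .map((d) => {
            const [property, ...rest] = d.split(":");
            return { property: property.trim(), value: rest.join(":").trim() };
          });
        rules.push({ selector: match[1].trim(), declarations });
      }
      return rules;
    }

    // Print: "minification" here is just serializing with no optional whitespace.
    function print(rules: Rule[]): string {
      return rules
        .map((r) => `${r.selector}{${r.declarations.map((d) => `${d.property}:${d.value}`).join(";")}}`)
        .join("");
    }

    console.log(print(parse("a {\n  color: green;\n}\n"))); // => a{color:green}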
It would be really great for me if any of these CSS preprocessors offered a standalone command line version that didn't need npm to install. My developer blog (https://ajxs.me) is built by a homegrown static-site-generator built in Python, which reads data from a simple SQLite database. I've been looking for a suitable CSS preprocessor for a while, however the ones I can find are all totally overcooked for my needs, and nearly exclusively designed to be integrated with Node.js.
For some reason I trust minifier / compiler benchmarks way more than I trust DB benchmarks. I wonder why that is?
Maybe I expect DB workloads to be much more varied so benchmarks are less representative. Whereas with minifying you can run it against some large project and expect it'll reflect your real world experience
Is it just me, or is Parcel starting to lose focus and performance?
Back in the v1 days, things just worked without config. With v2, I need some config, even a basic glob pattern to include all files in a directory is now a plugin, and for some reason a Parcel process takes 3GB of memory to compile a smallish project - I don't know how to fix it or pinpoint what the cause is.
Unrelated, but regarding SCSS/Sass: I am curious why we invented a new syntax for SCSS instead of writing a library in Python or Go that maps data objects to CSS classes. You have full access to a proper programming language instead of this new DSL we need to learn. I tried searching for it but had no luck - any reason why we don't do this?
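Something along these lines, I mean (a rough sketch of the idea, in JS/TS rather than Python or Go; I'm not aware of a specific library that works exactly like this):

    // Map plain data objects to CSS text - using the host language instead of a DSL.
    type Declarations = Record<string, string | number>;
    type Stylesheet = Record<string, Declarations>;

    function toCss(sheet: Stylesheet): string {
      return Object.entries(sheet)
        .map(([selector, decls]) => {
          const body = Object.entries(decls)
            .map(([prop, value]) => `  ${prop}: ${value};`)
            .join("\n");
          return `${selector} {\n${body}\n}`;
        })
        .join("\n\n");
    }

    // Variables, loops and functions come for free from the host language.
    const spacing = 8;
    console.log(toCss({
      ".card": { padding: `${spacing * 2}px`, color: "#333" },
      ".card--compact": { padding: `${spacing}px` },
    }));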
I can't speak for all the features in SCSS/Sass. I'd prefer JS (for reusability) over Python, Go and DSLs, but why open the Turing-complete Pandora's box in the first place when declarative works so incredibly well?
Regular CSS didn't have variables for a long time, which was the killer use case imo. I haven't been in the loop for a while, but I just found out that there is a CSS Custom Properties[1] standard which would solve most of my use cases. Heck, they're even scoped! Front-end folks have an unhealthily low threshold for taking on dependencies for things that are already supported natively, imo.
I'm outside and haven't read it yet. Does minification include something like summarizing? For example, if a child has a property that it doesn't actually need, because it would inherit the same value from the parent anyway, we could remove that property and save a line. Does that make sense?
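To illustrate the kind of redundancy I mean (a made-up example; I don't know whether this tool does anything like it):

    // The child re-declares a value it would inherit from the parent anyway.
    const before = `
      .parent { color: #333; }
      .parent .child { color: #333; }
    `;

    // In principle the second rule could be dropped - but only if no other
    // rule overrides `color` in between, which is what makes this kind of
    // optimization hard to do safely.
    const after = `
      .parent { color: #333; }
    `;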
The syntax lowering and modules handling might be really nice. Maybe even the tree shaking. Minification, though, especially with this level of parsing, just seems like an elaborate waste of effort if the web server is set to gzip the final CSS file anyway.
After I saw the title, I thought "oh, it seems to kind of occupy the same space as esbuild, but for CSS. I wonder if the devs gave any thought to performance?" Then I clicked the link and saw that there is a direct comparison with esbuild, with Parcel being 3x faster on a large real-world benchmark (Bootstrap 4).
This is really impressive. Although Rust tooling is rather suboptimal, Rust programs seem to have quite the performance edge. I'll take the RESF any day as long as it means getting away from ultra-heavy webtech everywhere.
The Rust compiler is both dog-slow and massive (both from a source and binaries perspective), and doesn't have a working incremental compilation mode yet, or support in-process hot-patching. There's no Rust REPL (hacks like papyrus don't count). Poor structural editing support. Integration with various editors/IDEs is lacking (e.g. there's no support for reporting possible performance issues with code constructs in any editor that I'm aware of, nor is there in-editor per-function disassembly or LLVM IR inspection, or borrow-checker lifetime annotation, no extract-function refactoring).
Cargo being good is necessary, but not sufficient.
I hope this can support SugarSS or other indented syntaxes easily. As far as I can tell, you'd still need to slap in PostCSS to transform to CSS first, which defeats the purpose. Semicolons and brackets, like with JSON, make for a condensed format that compresses well, but they have an awful writing experience and contribute nothing to readability.
Hmm I wonder which approach is actually better overall when it comes to content-encoding like their own site uses (brotli compression) or client side parsing performance. It's all probably a bit off into the weeds over something like 0.05% performance though.
That's an... interesting optimization, and one that might make sense if you only care about byte size, but intuition (which might be wrong!) tells me that this will be more expensive (especially if it saves only one or two bytes).
I'd bet that browsers can more quickly parse a string like '#00ff00' into its internal color representation than they can parse the string 'green'. It's probably faster to check for a '#' prefix and convert hex-encoded ASCII to u8 values than it is to normalize the string and do a lookup in a dictionary of 147 CSS3 color names, even when taking into account the extra two bytes that need to be transferred.
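Roughly, the two code paths being compared look like this (an illustration only - not how any real browser engine implements color parsing):

    const NAMED_COLORS: Record<string, [number, number, number]> = {
      green: [0, 128, 0],
      lime: [0, 255, 0],
      // ...the rest of the ~147 CSS3 color keywords
    };

    function parseColor(value: string): [number, number, number] | undefined {
      if (value.startsWith("#") && value.length === 7) {
        // Hex path: three fixed-width byte conversions, no table needed.
        return [
          parseInt(value.slice(1, 3), 16),
          parseInt(value.slice(3, 5), 16),
          parseInt(value.slice(5, 7), 16),
        ];
      }
      // Keyword path: normalize, then a dictionary lookup.
      return NAMED_COLORS[value.toLowerCase()];
    }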
More importantly, I would expect #00ff00 to compress better than green, because it would usually make the CSS file more predictably repetitive. Reducing network bytes is usually very important for speeding up loading (at least it is in the boondocks of the internet - out at the rim of the world).
Compress these with gzip, and the first is smaller than the second (56 and 58 bytes): lowercase doctype because you’re using very few uppercase letters in your document (on slightly larger samples it tends to save a byte or two), and omit the quotes as unnecessary. On larger documents there will be some places where you need quotes around attribute values, but it’s still worth omitting them when you can.
LZMA, similar: 60 and 63 bytes.
But then compress these with Brotli, and it’s the other way around by a larger margin, 29 and 19 bytes, because Brotli ships a dictionary primed on arbitrary web content. And so it becomes a popularity contest, and an inferior but vastly more popular technique compresses better.
In the case of #008000/green/#00ff00/#0f0/lime/#ffff00/#ff0/yellow, the dictionary doesn’t look to be a factor, so traditional length and repetition wisdom still applies.
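If anyone wants to rerun this kind of comparison themselves, Node's built-in zlib covers both gzip and Brotli (the inputs below are just stand-ins, so the byte counts above won't necessarily reproduce):

    import { gzipSync, brotliCompressSync } from "node:zlib";

    // Compare compressed sizes of two variants of the same snippet.
    const variants: Record<string, string> = {
      "uppercase doctype, quoted": `<!DOCTYPE html><html lang="en"><title>x</title>`,
      "lowercase doctype, unquoted": `<!doctype html><html lang=en><title>x</title>`,
    };

    for (const [name, text] of Object.entries(variants)) {
      const gz = gzipSync(text).length;
      const br = brotliCompressSync(text).length;
      console.log(`${name}: raw=${text.length} gzip=${gz} brotli=${br}`);
    }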
I don't get the sense that runtime CSS parsing is really much of a concern - it's more about build speed and asset size. Since 'green' might be used hundreds of times in a large CSS bundle, the optimization might make sense even with an imperceptible speed cost in the browser.
> one that might make sense if you only care about byte size
A minifier should care a lot about size. I'm not sure which would be faster, but the difference must be minimal anyway, so it's safe to focus on your default concern.
>That's an... interesting optimization, and one that might make sense if you only care about byte size, but intuition (which might be wrong!) tells me that this will be more expensive (especially if it saves only one or two bytes).
This is an interesting premise - do you know of any tools that optimize web pages for parsing and rendering speed, as opposed to size? I wrote a tool once that bundles and archives webpages for offline viewing, and in those cases network throughput is not an issue since the files are stored locally. It could be interesting to see what performance benefits I might be able to get from optimizing for parsing speed in this case, although I suspect the differences will be so negligible that it won't make much of a difference. Still, it would be an interesting experiment.
Don’t know, but I wouldn’t be surprised if the browser first tried a dict lookup (which is super fast) and only then tried to actually parse the string as hex.
[0]: https://swc.rs/