Orphis's comments

Which app are you using?


It's not that relevant for video conferencing: most apps are still doing H264, VP8, or VP9 and jumping to AV1 directly.

And for video streaming, AV1 is increasingly used, on YouTube and Netflix for example ( https://aomedia.org/av1-adoption-showcase/netflix-story/ ).

It is used a lot more by people who don't have to worry too much about licensing at scale, such as pirate content or local streaming (quite often backed by an OS-wide license). Doing a quick search on various pirate content search engines, I can see a lot of AV1 content now, so it'll eventually get more popular!


But it doesn't really apply when big entities with a lot of money are building the video conferencing services that would be using paid codecs. Then the licensing consortiums have clear targets from which to demand license payments.


They were still using H264 last time I checked, so it's irrelevant to them.


HEVC is far from being the most popular codec on the planet in the context of video conferencing. Most implementations use WebRTC, where HEVC support is uneven; since AV1 support is becoming more prominent and stable, most implementations go from H264/VP8 -> VP9 -> AV1 and skip HEVC entirely.

Each new codec adds a lot of complexity to the stack (negotiation issues, SFU implementation, quality tuning, dealing with non-conformant implementations...), so it's never quite as easy as flipping a switch to enable one.
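Just to give a sense of the browser-side slice of that alone, a rough sketch (not any particular app's code; it assumes a WebRTC-capable browser, and a real app would also keep the rtx/red/ulpfec entries when reordering):

    // Rough sketch: prefer AV1, fall back to VP9 then H264, and leave H265
    // out of the offer entirely. Assumes RTCRtpTransceiver support.
    const pc = new RTCPeerConnection();
    const transceiver = pc.addTransceiver("video", { direction: "sendrecv" });

    const codecs = RTCRtpReceiver.getCapabilities("video")?.codecs ?? [];
    const byMime = (mime: string) =>
      codecs.filter(c => c.mimeType.toLowerCase() === mime);

    const preferred = [
      ...byMime("video/av1"),
      ...byMime("video/vp9"),
      ...byMime("video/h264"),
    ];

    if (preferred.length > 0) {
      // This only reorders what this endpoint offers; the SFU and the
      // remote side still get a say during negotiation.
      transceiver.setCodecPreferences(preferred);
    }

And that's before the SFU, simulcast/SVC configuration, and per-codec quality tuning even enter the picture.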


But it is still more performant to do so in general. There's more high-quality image correction happening than just background removal nowadays, like lighting improvements or sometimes upscaling, and you wouldn't want to do all of that on the CPU.

But also, HW encoding of some codecs is not always of great quality and doesn't support the advanced features required for RTC, so the CPU encoding code path is sometimes even forced! While it doesn't necessarily apply to HEVC, as you'd need a license for it (and almost all apps rely on the system having one), it happens fairly frequently with VP9 or AV1.


pkg-config works great in limited scenarios. If you try to do anything more complex, you'll probably run into issues that require modifying the .pc files supplied by your vendor.

There is a new standard called CPS being developed by some industry experts that aims to address this. You can read the documentation on the website: https://cps-org.github.io/cps/ . There's a section with some examples of what they are trying to fix and how.


`pkg-config` works great in just about any standard scenario: it puts flags on a compile and link line that have been understood by every C compiler and linker since the 1970s.

Here's Bazel consuming it with zero problems, and if you have a nastier problem than a low-latency network system calling `liburing` on specific versions of the kernel built with Bazel? Stop playing.

The last thing we need is another failed standard further balkanizing an ecosystem that has worked fine, if used correctly, for 40+ years. I don't know what "industry expert" means, but I've done polyglot distributed builds at FAANG scale for a living, so my appeal to authority is as good as anyone's, and I say `pkg-config` as a base for the vast majority of use cases, with some special path for, like, compiling `nginx` with its zany extension mechanism, is just fine.

https://gist.github.com/b7r6/316d18949ad508e15243ed4aa98c80d...


Have you read the rationale for CPS? It gives clear examples of why it doesn't work. You need to parse the files and then parse all the compiler and linker arguments in order to understand what to do with them to properly consume them.

What do you do if you use a compiler or linker that doesn't take the same command line parameters as the ones written in the .pc file? What do you do when different packages you depend on have conflicting options, for example ones depending on different C or C++ language versions?
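To make that concrete, here's a rough Node/TypeScript sketch of what a consumer ends up doing with pkg-config's output before it can even detect such a conflict ("liba" and "libb" are made-up package names, and it assumes pkg-config is on the PATH):

    // What a build tool gets back from pkg-config: opaque flag strings it
    // must re-parse to recover any meaning. "liba"/"libb" are hypothetical.
    import { execSync } from "node:child_process";

    function flagsFor(pkg: string): string[] {
      return execSync(`pkg-config --cflags --libs ${pkg}`)
        .toString()
        .trim()
        .split(/\s+/);
    }

    const flags = [...flagsFor("liba"), ...flagsFor("libb")];

    // pkg-config can't say "this package needs C++14"; it can only hand out
    // flags, so detecting a conflict means string-matching compiler syntax.
    const stdFlags = [...new Set(flags.filter(f => f.startsWith("-std=")))];
    if (stdFlags.length > 1) {
      console.warn("conflicting language standards:", stdFlags.join(" "));
    }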

It's fine in a limited and closed environment, but it does not work for proper distribution, and your Bazel rules prove it, as they clearly don't work in all environments. They don't work with MSVC-style flags, or handle include files well (hh, hxx...). I'm not saying it can't be fixed, but that's just a very limited integration, which proves the point of having a better format for tool consumption.

And you're not the only one around who has worked at a FAANG company and dealt with large and complex build graphs. But for the most part, FAANGs don't care about consuming pkg-config files; most will just rewrite the build files for Blaze / Bazel (or Buck2 from what I've heard). Very few people want to consume binary archives, as you can't rebuild with the flavor-of-the-week toolchain and use new compiler optimizations, or proper LTO, etc.


Yeah, I read this:

"Although pkg-config was a huge step forward in comparison to the chaos that had reigned previously, it retains a number of limitations. For one, it targets UNIX-like platforms and is somewhat reliant on the Filesystem Hierarchy Standard. Also, it was created at a time when autotools reigned supreme and, more particularly, when it could reasonably be assumed that everyone was using the same compiler and linker. It handles everything by direct specification of compile flags, which breaks down when multiple compilers with incompatible front-ends come into play and/or in the face of “superseded” features. (For instance, given a project consuming packages “A” and “B”, requiring C++14 and C++11, respectively, pkg-config requires the build tool to translate compile flags back into features in order to know that the consumer should not be built with -std=c++14 ... -std=c++11.)

Specification of link libraries via a combination of -L and -l flags is a problem, as it fails to ensure that consumers find the intended libraries. Not providing a full path to the library also places more work on the build tool (which must attempt to deduce full paths from the link flags) to compute appropriate dependencies in order to re-link targets when their link libraries have changed.

Last, pkg-config is not an ideal solution for large projects consisting of multiple components, as each component needs its own .pc file."

So going down the list:

- FHS assumptions: false, I'm doing this on NixOS and you won't find a more FHS-hostile environment

- autotools era: awesome, software was better then

- breaks with multiple independent compiler frontends that don't treat e.g. `-isystem` in a reasonable way? you can have more than one `.pc` file, people do it all the time, also, what compilers are we talking about here? mingw gcc from 20 years ago?

- `-std=c++11` vs. `-std=c++14`? just about every project big enough to have a GitHub repository has dramatically bigger problems than what amounts to a backwards-compatible point release from a decade ago. we had a `cc` monoculture for a long time, then we had diversity for a while, and it's back to just a couple of compilers that try really hard to understand one another's flags. speaking for myself? in 2025 I think it's good that `gcc` and `clang` are fairly interchangeable.

So yeah, if this was billed as `pkg-config` extensions for embedded, or `pkg-config` extensions for MSVC, sure. But people doing non-gcc, non-clang compatible builds already know they're doing something different, price you pay.

This is the impossible perfect being the enemy of the realistic great, with a healthy dose of "industry expertise". Just settle some conventions on `pkg-config`.

The alternative to sensible builds with the working tools we have isn't this catching on; it won't. The alternative is CMake jank in 2035, just like 2015, just like now.

edit: brought to us by KitWare, yeah fuck that. KitWare is why we're in this fucking mess.


If someone needs a wrapper for a technology that modifies the output it provides (like Meson and Bazel do), maybe there is an issue with said technology.

If pkg-config was never meant to be consumed directly, and was always meant to be post-processed, then we are missing this post-processing tool. Reinventing it in every build technology again and again is suboptimal, and at least Make and CMake do not have this post-processing support.
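As a rough sketch of what that shared post-processing step might look like (flag coverage deliberately incomplete, just to show the shape of it):

    // Minimal sketch of the missing post-processing step: turn pkg-config's
    // flat flag list into structured data a build system can reason about.
    // Coverage is deliberately incomplete (no -framework, -pthread, ...).
    interface PkgInfo {
      includeDirs: string[];
      libDirs: string[];
      libs: string[];
      defines: string[];
      other: string[];
    }

    function classify(flags: string[]): PkgInfo {
      const info: PkgInfo = { includeDirs: [], libDirs: [], libs: [], defines: [], other: [] };
      for (const f of flags) {
        if (f.startsWith("-I")) info.includeDirs.push(f.slice(2));
        else if (f.startsWith("-L")) info.libDirs.push(f.slice(2));
        else if (f.startsWith("-l")) info.libs.push(f.slice(2));
        else if (f.startsWith("-D")) info.defines.push(f.slice(2));
        else info.other.push(f); // -std=, -isystem, ...: the special cases
                                 // every wrapper ends up growing on its own.
      }
      return info;
    }

It's small, which is exactly why it's absurd that Meson, Bazel, CMake and everyone else regrow their own slightly different version of it.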


This was the point of posting the trivial little `pkg.bzl` above: Bazel doesn't need to do all this crazy stuff in `rules_cc` and `rules_foreign_cc`: those are giant piles of internal turf wars within the Blaze team that have spilled onto GitHub.

The reason we can't have nice things is that nice things are simple and cheap and there's no money or prestige in them. `zig cc` demonstrates the same thing.

That setup:

1. mega-force / sed / patch / violate any build that won't produce compatible / standard archives: https://gist.github.com/b7r6/16f2618e11a6060efcfbb1dbc591e96...

2. build sane pkg-config from CMake vomit: https://gist.github.com/b7r6/267b4401e613de6e1dc479d01e795c7...

3. profit

delivers portable (trivially packaged up as `.deb` or anything you want), secure (no heartbleed 0x1e in `libressl`), fast (no GOT games or other performance seppuku) builds. These are zero point zero waste: fully artifact cached at the library level, fully action cached at the source level, fully composable, supporting cross-compilation and any standard compiler.

I do this in real life. It's a few hundred lines of nix and bash. I'm able to do this because I worked on Buck and shit, and I've dealt with Bazel and CMake for years, and so I know that the stuff is broken by design, there is no good reason and no plan beyond selling consulting.

This complexity theatre sells shit. It sure as hell doesn't stop security problems or Docker brainrot or keep cloud bills down.


It's just a way to ensure you open the desired context on a local Discord instance, not on any instance that might be logged in to your account. For example, I have a few personal computers logged in to Discord on the same account that could be active at the same time.


I have this laptop and I've charged it with USB-C PD at 90W on multiple occasions. What chargers have you tried?


> Additionally, how do you avoid doing pointless builds when new features are pushed? I can only imagine what the `.github` folder in a monorepo looks like.

It's simple: with proper tooling, you know the dependencies exactly, so you know which tests depend on the affected files and can run just those; the rest shouldn't be impacted. And that tooling exists. It may not be the one you're using, but it exists, and not just at FAANG.
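A rough sketch of the idea, assuming a Bazel workspace and a POSIX shell (the file-to-label conversion is naive; real tooling is more careful about package boundaries):

    // Run only the tests that (transitively) depend on the changed files.
    import { execSync } from "node:child_process";

    const changed = execSync("git diff --name-only origin/main...HEAD")
      .toString().trim().split("\n").filter(Boolean);

    // Naively turn "foo/bar/baz.cc" into the source label "//foo/bar:baz.cc".
    const labels = changed.map(f => {
      const i = f.lastIndexOf("/");
      return i === -1 ? `//:${f}` : `//${f.slice(0, i)}:${f.slice(i + 1)}`;
    });

    // Every test rule that transitively depends on one of the changed files.
    const query = `kind(".*_test", rdeps(//..., set(${labels.join(" ")})))`;
    const tests = execSync(`bazel query '${query}'`).toString().trim();

    if (tests) {
      execSync(`bazel test ${tests.split("\n").join(" ")}`, { stdio: "inherit" });
    }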

