
Are these AI filters, or just heavy compression/recompression with new algorithms (which can look like smoothed-out detail)?

edit: here's the effect I'm talking about with lossy compression and adaptive quantization: https://cloudinary.com/blog/what_to_focus_on_in_image_compre...

The result is smoothing of skin, and applied heavily to video (as YouTube does; just look at any old video that was HD years ago) it would look this way.


It's filters, I posted an example of it below. Here is a link: https://www.instagram.com/reel/DO9MwTHCoR_/?igsh=MTZybml2NDB...

It's very hard to tell from that Instagram video; it would be a lot clearer if someone overlaid the original unaltered video with the one viewers on YouTube are seeing.

That would presumably be an easy smoking gun for some content creator to produce.

There are heavy alterations in that link, but not having seen the original, and in this format, it's not clear to me how they compare.


You can literally see the filters turn on and off, making his eyes and lips bigger as he moves his face. It's clearly a face filter.

To be extra clear for others, keep watching until about the middle of the video where he shows clips from the YouTube videos

I would, but his right "eyebrow" is too distracting.

What would "unaltered video" even mean?

The time of giving these corps the benefit of the doubt is over.

Wouldn't using AI for this just be unnecessary compute? Compression or plain filtering seems far more likely. It just seems like increasing the power bill for no reason.

The examples shown in the links are not filters for aesthetics. These are clearly experiments in data compression.

These people are having a moral crusade against an unannounced Google data compression test, thinking Google is using AI to "enhance their videos". (Did they ever stop to ask themselves why, or to what end?)

This level of AI paranoia is getting annoying. This is clearly just Google trying to save money. Not undermine reality or whatever vague Orwellian thing they're being accused of.


Agreed. It looks like over-aggressive adaptive noise filtering, a smoothing filter, and some flavor of unsharp masking. You're correct that this is targeted at making video content compress better, which can cut streaming bandwidth costs for YT.

The people fixated on "...but it made eyes bigger" are missing the point. YouTube has zero motivation to automatically apply "photo flattery filters" to all videos. Even if a "flattery filter" looked better on one type of face, it would look worse on another type of face. Plus applying ANY kind of filter to a million videos an hour costs serious money.

I'm not saying YouTube is an angel. They absolutely deploy dark patterns and user manipulation at massive scale - but they always do it to make money. Automatically applying "flattery filters" to videos wouldn't significantly improve views, advertising revenue, or cut costs. Improving compression would do all three: less bandwidth reduces costs, smaller files mean faster start times as viewers jump quickly from short to short, and that increases revenue because more shorts per viewer-minute = more ad avails to sell.
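For anyone curious what that kind of preprocessing looks like, here's a rough illustrative sketch with ffmpeg (the filter choices and strengths are mine, not anything YouTube has confirmed it runs):

    # denoise first so the encoder doesn't waste bits on grain,
    # then a light unsharp pass to keep edges looking crisp
    ffmpeg -i input.mp4 \
      -vf "hqdn3d=4:3:6:4.5,unsharp=5:5:0.8" \
      -c:v libx264 -crf 26 -preset slow \
      -c:a copy output.mp4

The denoised frames compress dramatically better at the same perceived quality, which is the whole point; the "smoothed skin" look is a side effect.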


Activism fatigue is a thing today.

Why would data compression make his eyes bigger?

Because it's a neural technique, not one based on pixels or frames.

https://blog.metaphysic.ai/what-is-neural-compression/

Instead of artifacts in pixels, you'll see artifacts in larger features.

https://arxiv.org/abs/2412.11379

Look at figure 5 and beyond.


Like a visual version of psychoacoustic compression. Neat. Thanks for sharing.

"My, what big eyes you have, Grandmother." "All the better to compress you with, my dear."

Whatever the purpose, it's clearly surreptitious.

> This level of AI paranoia is getting annoying.

Let's be straight here: AI paranoia is near the top of the most propagated subjects across all media right now, probably for the worse. If it's not "Will you ever have a job again!?" it's "Will your grandparents be robbed of their net worth!?" or even just "When will the bubble pop!? Should you be afraid!? YES!!!", and in places like Canada, where the economy is predictably crashing because of decades of failures, it's both the cause of and the answer to macroeconomic decline. Ironically/suspiciously, it's all the same rehashed, redundant takes by everyone from Hank Green to CNBC to every podcast ever, late night shows, radio, everything.

So to me the target of one's annoyance should be the propaganda machine, not the targets of the machine. What are people supposed to feel, totally chill because they have tons of control?


It's compression artifacts. They might be heavily compressing video and trying to recover detail on the client side.

Good performance is a strong proxy for making other good software decisions. You generally don't get good performance if you haven't thought things through or planned for features in the long term.

I had a teacher who said, "a good programmer looks both ways before crossing a one-way street."

Nitpick: "Is the next Game of Thrones book out yet?"

This is always "No", because the latest book can never be the next book.


I find the Qwen3 models spend a ton of thinking tokens, which could hamstring them under the runtime limits. gpt-oss-120b is much more focused and steerable there.

The token-use chart on the release page in the OP demonstrates the Qwen issue well.

Token churn does help smaller models on math tasks, but for general purpose stuff it seems to hurt.


Increasingly, where the desks and servers are located is critical.

The CLOUD Act, and the current US administration doing things like sanctioning the ICC, demonstrate why the locations of those desks are important.


With GPT-5, did you try adjusting the reasoning level to "minimal"?

I tried using it for a very small, quick summarization task that needed low latency, and any level above minimal took several seconds to get a response. Using minimal brought that down significantly.

Weirdly, GPT-5's reasoning levels don't map onto the OpenAI API's reasoning effort levels.
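For reference, this is roughly the shape of the call I used (Responses API style; if you're on chat completions it's a reasoning_effort string instead, and the exact parameter names are worth double-checking against the current docs):

    # low-latency summarization call with minimal reasoning effort
    curl https://api.openai.com/v1/responses \
      -H "Authorization: Bearer $OPENAI_API_KEY" \
      -H "Content-Type: application/json" \
      -d '{
            "model": "gpt-5",
            "reasoning": {"effort": "minimal"},
            "input": "Summarize the following in two sentences: ..."
          }'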


Reasoning was set to minimal and low (and I think I tried medium at some point). I do not believe the timeouts were due to the reasoning taking too long, although I never streamed the results. I think the model just fails often: it stops producing tokens and eventually the request times out.

I'm newish to Linux, so take my opinions with a grain of salt and having a lot of unknown unknowns.

I think the update/OS upgrade situation is better, security is better, and frankly my least favorite thing about Linux is going in and making sure the system state is healthy.

When I started using Linux this summer I had to wipe my system twice because I put it in broken states or couldn't figure out how to undo some change. I went through all sorts of issues, like managing GRUB and GNOME not working with my Studio Display or Thunderbolt peripherals. Almost all of the fixes required editing arcane files and then calling commands that fed them into some subsystem I had no idea about. All that blind-faith, online-sourced stuff felt like a security nightmare too.

Since migrating to atomic Fedora, and then to Bazzite this weekend, that has not happened once. There was initial friction with dev tool setup and toolbox, but things have been completely on the rails since then.


I think it’s an organization accountability issue.

Why would a company pay for anti-cheat infrastructure when they can outsource it to some other company and blame them if there are cheaters or upset users? Windows is the status quo too, so it's very easy to point to everyone else when justifying your choice to the execs.

It would be great if the Steam Deck and Steam boxes started costing studios quantifiable amounts of money that could be used to justify fixing this instead of outsourcing and hand-waving.


The nice thing about atomic distros is that switching operating systems is as easy as typing 'ostree rebase' and registering a Secure Boot key.

So if Bazzite did go that way, you could have Fedora running in under an hour, and with Flatpak most things will just work.
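Roughly like this (illustrative: adjust the release number to the current Fedora, and it assumes the stock fedora ostree remote is still configured on the Bazzite install):

    # switch the running Bazzite system to vanilla Fedora Silverblue (GNOME)
    sudo rpm-ostree rebase fedora:fedora/42/x86_64/silverblue
    systemctl reboot

The old deployment stays around as a boot entry, so if something goes wrong you can roll back from the GRUB menu.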


I want to bear witness that I did exactly that when I demoted one of my computers from 'gaming machine' to 'closet server'.

One `rpm-ostree rebase` from Bazzite to a server-oriented flavour of Fedora Silverblue and it's been running and updating flawlessly since then.


As I understand it, 'ostree rebase' between KDE and GNOME will lead to a broken system.

Kinda. The two desktops' config files break things for each other, and those are kept by default.

You can still fix them manually, although that's probably not worth the effort in most cases.

