Canon R5 Mk II Drops Pixel Shift High Res – Is Canon Missing the AI Big Picture? (kguttag.com)
85 points by LorenDB on Aug 23, 2024 | 73 comments


My Lumix camera has a similar mode (it uses sensor shift and 4 exposures to fake a composite 100MP image from my 25MP sensor), and I will say that even with a rock-solid tripod, you're not _actually_ getting a 4x (or whatever) increase in quality alongside the 4x resolution. If you zoom in in Lightroom, the resulting image is at best a bit sharper in really high-contrast areas, and at worst full of blatant overlapping and stitching artifacts.

I've never figured out a good use-case for that mode, and I've tried to use it quite a few times to shoot landscapes or static scenes with a tripod.

Maybe Canon does it better, but perhaps they dropped it because it's just not producing worthwhile results.


If it's the Lumix S5, I have the same camera, and as far as I know you are getting genuinely more resolution, but it relies on there being absolutely no movement in the scene. So: great for buildings and architecture, terrible in forests or city streets.


It's a GH6, so probably the same system roughly...

And yes that's what I said, sorry if I wasn't clear.

It does get more "pixel" resolution, in that the image file is 100MP, but it's certainly not 4x the image resolution in terms of image quality. Not that I was expecting that, but it's not even a middle ground that I consider usable, at least not good enough to use over a normal 25MP picture.

I've done a bunch of landscape shots (with a very solid tripod, camera in silent mode, shutter release delay, no wind, etc) and generally I've found the results mixed-to-bad..

Like I said, the algorithm often seems to have a hard time stitching the images together, so if you zoom into the details (and really, if you're using this mode, it's ostensibly because you want more detail), things get fuzzy or muddy in most of the pictures I've taken.

To be fair I took maybe 20-30 test shots in the first 6 months I owned that camera, and I haven't gone back to the mode since.. Maybe some of it is user error, but I really did try to make it work because it seemed like a cool feature.


Do you have a remote shutter trigger? I think pressing the shutter button can count as too much movement for these systems.


What I struggle with: isn't even, say, walking around or small air movements producing enough tremor to influence this? We're talking about moving the sensor/scene by half a pixel, which is a minuscule amount (2 micrometers or so).


Yes, hence tripod and photographing buildings, as mentioned in an ancestor comment[1]. On a windy day (enough to shake the tripod) or on shaky ground (cars, trams) I guess this might still not be enough.

[1]: https://news.ycombinator.com/item?id=41333736


I do not, but as I mentioned in my previous post, the camera supports a shutter delay (to account for the movement of pressing the button), which I used for this mode.

I feel like I really did everything I could to create the ideal conditions for this mode to produce good results, and it simply didn't.

That said, all this talk has made me want to go try again.. :-)


My S5IIx allegedly can do a pixel-shifted 96MP high resolution shot up to 1 second _hand-held._ I haven’t tried it and probably never will.


Hah, yes, the GH6 also claims that the IBIS (in-body image stabilization) can allow you to use the pixel-shift mode handheld, but given the mixed results I've had on a tripod, I also have not tried this.


The GFX 100 has a similar mode. It's not surprising that stitching multiple images of a moving subject, or thousands of moving subjects in the case of leaves and blades of grass, fails to produce something similar to a pixel-shifted image of something static.


So, they just literally determine every fourth pixel in the final image from the fourth image? That's a solution to a niche problem. I hope they didn't sell that as general-purpose...


Nah the manual and specs are pretty clear on the limitations of the mode and how it works.. I personally find it pretty useless, but it's easy to just ignore it entirely since it's a dedicated mode on the dial.


Canon doesn't do it better. I tried the R5 mode a few times when it came out and it's complete garbage. More like something an intern did as a proof of concept than an actual well-implemented feature. The stitching could be done a lot better, but it isn't. Purely a marketing gimmick.

The new R5 Mk II AI upscale is the same, a gimmick that got way too much attention by the press solely for using the word AI.

Canon does use AI features in their R5 AF tracking, and it's really impressive. But Canon calls those features deep learning or other more specific terms.


I’m pretty sure all modern fast AF systems have computer vision in their control loop.


The A7III has a dedicated DSP for focus and tracking, plus a computer vision model which can detect eyes and focus on them (even when you're wearing sunglasses it can spot eyes perfectly). Moreover, you can give it 5 faces with focus priority, and if it finds any of these faces in the frame, it focuses on that one (the lower the number, the higher the priority). The system works at 30 calculations per second, correcting focus 30 times per second as required.

This is Sony's AF system from 2 generations ago. I'm sure the big three can all do these tricks, at least at 30FPS. The latest, highest-end Sony cameras do it at 120FPS, plus they can track the whole body after losing the face, so they can keep tracking the same person in the frame after latching onto them.


I wish Sony made a retro-style camera. Maybe not the full ISO/shutter/exposure dial trio on top; a PASM dial is fine.


Interesting. I have a Pen-F and the high-res mode absolutely increases the resolution. Everything is very sharp when viewed at 100%. If the scene is stationary (no water or leaves blowing in the wind), I can't detect any stitching artifacts.

I also think that the lenses have to be able to resolve this resolution. My 12mm prime looks somewhat sharper than my 8-25 zoom in high-res mode, even though there is no difference in regular mode.

I'm not sure how to quantify the increase in quality, but the details are sharp. However, I'd say that the real benefit of this mode is that the image is smoother than in the normal mode. Tonal transitions, in particular.

I can't find any specifics, but just taking a picture in this mode and looking at my watch, I think this particular camera takes 8 exposures.


I use that mode on a Panasonic S1R to scan 5x4" film negatives; it does actually work.

However, it requires a solid reproduction column mount, excellent lenses, perfect focus, a vibration-free environment, etc.


Huh, this is super interesting, thanks for sharing!

I might try something like this just to see the results..


Yeah, in the States we say 4x5, but in either case it's the most common large format film size. You buy them in boxes and use them one sheet at a time.


I don’t know how much you’re actually missing. Pixel shift is nice for architecture but if the purpose is to increase resolution, it’ll just kind of work for landscape and certainly not street. Leaves move in the air, and water moves. People definitely move. Birds, it takes practice to get the right shutter speed.

There is this kind of weird and intense competition for features and specs in this space of photography, and it doesn’t always make sense. I think if Canon dropped pixel shift, they probably figured that people wouldn’t terribly miss it, but for a little while, it was the feature of the moment. I did appreciate that they added it in firmware for the R5, but I never found a use for it.


As someone experienced with cameras, I'd say we are living in an age where upper-class smartphones look superior to many cinema production cameras on paper.

Yet the smartphone picture still sucks: it is too compressed, the colors are off, and weird post-processing is applied that ruins the picture. The sensor may have impressive resolution, but all the other specs are not as impressive, and of course the lens assembly doesn't even compare due to size constraints. In the end this leads to a mix where the unprocessed, gamma-adjusted picture from the cinema camera will look like the real thing, while the smartphone picture will have flaws that come through or not depending on the source material.

As for features, making more use of existing hardware is good. I wouldn't need pixel shift in most cases, but when I do, it's there.


Interesting - I have an OM-1 Mk1 and similarly I've never found a decent use case for that mode.


As an EM-1.3 owner...

I can't find it right now, but there's a YouTube video that demonstrates using this mode for astrophotography. It seems that a side effect of the computational process is that noise gets averaged out, resulting in a cleaner (and sharper, as you'd expect anyway) image.
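
For intuition: averaging N independent exposures cuts random noise by roughly sqrt(N). A minimal numpy sketch with synthetic data (not anything camera-specific):

    import numpy as np

    rng = np.random.default_rng(0)
    signal = 100.0  # "true" pixel value
    sigma = 10.0    # per-frame noise standard deviation
    frames = signal + rng.normal(0.0, sigma, size=(8, 100_000))

    print(frames[0].std())            # single frame: ~10
    print(frames.mean(axis=0).std())  # 8-frame stack: ~10/sqrt(8) ≈ 3.5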


In a sense, it's in-camera image stacking, where the sensor noise can be removed while retaining actual image data.

Nikon bodies were known to eat stars with their in-camera noise removal worse than other makers; it was one of the reasons the astro community recommends not using that feature on any body.


This concept was used in microscopy before high-resolution sensors were available. I don't know why they are using it on a camera that shoots moving subjects most of the time.


> I've never figured out a good use-case for that mode

In studio product photography, where you control 100% of the light and have zero movement.


If you have those kinds of requirements, budget, and setup, why wouldn't you just invest in a camera with a native MP count closer to what you need?

Right tool for the job and all that..

And I'll tell you, I did test this. I set up ideal conditions in my home studio with a tripod, good lights and - short of ground tremors - zero movement, and the results still weren't that usable.

Comparing the 100MP image to a 25MP version, you really couldn't see that much more detail, and when you zoomed in on the 100MP image, imperfections were quickly visible.


If you need to take a picture of something else, like detail in a product photo, move the camera.


Yeah, it's a real problem. It's not 4x better since the assumptions don't hold up. Check out Figure 5 from [0] to see what the software is up against. I have no idea how well first party software handles this motion.

[0]: http://aggregate.org/DIT/PARSEK/paper.pdf


Doubling the megapixel count doesn't even add 50% in terms of linear resolution.

That's why 4x the megapixel count isn't "4x better".
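
Back-of-the-envelope, since linear resolution scales with the square root of the pixel count:

    import math

    for factor in (2, 4):
        linear = math.sqrt(factor)
        print(f"{factor}x megapixels -> {linear:.2f}x linear resolution "
              f"(+{(linear - 1) * 100:.0f}%)")
    # 2x megapixels -> 1.41x linear resolution (+41%)
    # 4x megapixels -> 2.00x linear resolution (+100%)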


It depends on how you want to count "better", and megapixels. Yeah, you need 4 images perfectly aligned to sample each color at each subpixel location (assuming your lens + aperture + shutter shake + movement allow you to have that much resolution), and I agree that you won't see a 4x improvement in resolution, since the thing we compare to isn't a binning of each Bayer subpixel into a single final pixel; it's the result of a complex interpolation scheme. Then there's the issue of what we mean by resolution. Are we talking about MTF curves? And in that case, is it black-and-white alternating lines, or colored ones?

I would much prefer for these pixel shift modes to produce an image where each pixel has an RGB value associated with it, rather than yet another Bayer image that's just bigger.


> I would much prefer for these pixel shift modes to produce an image where each pixel has an RGB value associated with it, rather than yet another Bayer image that's just bigger.

It depends on the implementation, but that's what pixel shift is for: you get full color information (essentially negating the Bayer filter by moving the sensor 1px up/down/right/left): https://www.pentaxforums.com/articles/photo-articles/how-pen...
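
For illustration, a toy numpy sketch of that idea, assuming a simple RGGB model and an idealized frame layout (not any vendor's actual pipeline):

    import numpy as np

    BAYER = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 2}  # RGGB -> R/G/B channel
    SHIFTS = [(0, 0), (0, 1), (1, 0), (1, 1)]             # the four 1px offsets

    def combine_pixel_shift(frames):
        """Merge four 1px-shifted Bayer mosaics into one full-RGB image.

        frames[k][y, x] is what scene pixel (y, x) measured through the
        filter at Bayer cell ((y + dy) % 2, (x + dx) % 2), (dy, dx) = SHIFTS[k].
        Across the four shifts, each pixel sees R once, G twice, and B once.
        """
        h, w = frames[0].shape
        ys, xs = np.mgrid[0:h, 0:w]
        rgb = np.zeros((h, w, 3))
        count = np.zeros((h, w, 3))
        for (dy, dx), frame in zip(SHIFTS, frames):
            for (ry, rx), ch in BAYER.items():
                mask = ((ys + dy) % 2 == ry) & ((xs + dx) % 2 == rx)
                rgb[..., ch] += np.where(mask, frame, 0.0)
                count[..., ch] += mask
        return rgb / count  # the two green samples get averaged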


I know that's one way to do it; I was under the impression that for some reason it was being used by some cameras to increase resolution only to ultimately produce a bigger Bayer image. I'm probably wrong, since I haven't been able to find cameras that do that.


> While my specific needs are a little special

I think that about sums it up. I've never before heard of anyone seriously using pixel shift. The consensus seems to be that it's more of a gimmick that rarely delivers anything close to what you'd expect.


Maybe I didn't make the point well in the article. It is limited to very special cases when they restrict the IBIS pixel shift to in-camera processing with only JPEG output.

The camera should take pictures with the IBIS fractional picture shift and save the RAW files. They should give the option of how many "cycles" (go through all the shift orientations more than once) of pictures to take. With that level of information, smart software will be able to figure out and deal with at least small hand motions and considerable motion in the subject.

Smartphones are already using computational photography, combining multiple photos for things like panning and taking pictures in the dark. For a dedicated camera like the R5 mk ii, I would want the camera to save RAW images that can be put together later by "smart" software on a computer with greater processing and memory resources.
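
To sketch what such "smart" software could look like, here is a rough pass using scikit-image's phase correlation for sub-pixel registration. This is my own toy example, assuming already-demosaiced grayscale frames; handling real subject motion would need motion masking on top of this:

    import numpy as np
    from scipy.ndimage import shift as nd_shift
    from skimage.registration import phase_cross_correlation

    def merge_shifted_frames(frames, upsample=20):
        """Register a burst of pixel-shifted frames to the first and average.

        Sub-pixel registration absorbs small hand motion. A serious merger
        would also detect and mask local subject motion before blending.
        """
        ref = frames[0].astype(float)
        aligned = [ref]
        for frame in frames[1:]:
            # estimated (dy, dx) needed to register `frame` against `ref`
            offset, _, _ = phase_cross_correlation(
                ref, frame, upsample_factor=upsample)
            aligned.append(nd_shift(frame.astype(float), offset))
        return np.mean(aligned, axis=0)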


I’ve heard it’s actually quite useful for the slightly unusual use case which is film scanning, particularly for formats larger than 35mm. Then you’ve actually got the raw resolution and the controlled environment for it to matter. Otherwise, yeah, pretty much a gimmick.


Film scanning, document scanning, poster digitization, architecture, scientific work. It's genuinely useful, but for technical applications. That's why it gets flak from portrait photogs, casual users, and YouTubers.


It works well for art reproduction and still macro on a tripod. The 80MP one on my Panasonic G9 works very well for that. Everything else, no.


A gimmick for the very loud - and amateurish in their footage - influencers who cry a river if their use case is not supported in a camera.


I have an R5.1 and I have used the Pixel Shift mode maybe twice, just to see what it did. Like most cameras with this capability the scene has to be completely still, which is very rare for the type of photography I do (even architecture often has moving clouds or reflections). My Olympus OM-1 also has a similar mode with a similar experience.

I also do not want any AI generation in my cameras. I want my cameras to take the photo I told it to make, and not to fake a photo it thinks I wanted to take.


I suppose you have to divide the shutter speed by 9 to get the same level of sharpness for something that is moving.


It's not really sharpness - you get weird artifacts since it's combining multiple (slightly different) captures to create one big one. Some other cameras reportedly have better merging algorithms that make them more capable of handling motion in the scene but I haven't tried those.

It really is kind of a gimmick feature unless you have very specific locked down scenes, like a still life.


Pixel shift would be amazing for specific use cases, but Canon only offers it in JPEG. Serious shooters who would use this feature would be shooting in raw.


Canon has a history of perplexing blunders that indicate they don't know the needs of some important target audiences very well.

Another example is their issuance of one video-centric (or at least "hybrid") camera after another that's crippled by an insulting and nearly-useless micro-HDMI port, while everyone else offers full-sized HDMI.

I have a decent investment in Canon lenses; but after waiting and waiting for Canon to pull its head out of its ass and compete (especially in terms of video capability), I think I have to jump ship.


They know exactly what is needed, but will only provide big updates once the competition catches up or they think their market share is otherwise endangered. Like when they released the 5D Mark II with that awesome sensor.


Catches up? Canon is perceived as a laggard at this point. Sony, Panasonic, and even Nikon are more vital than Canon today.


They have been market leaders for over 20 years...


If you're going with that laurel, you can say way more than 20 years.


Well, they put a full-size HDMI port on the R5 mk. II if that is all you are worried about.


That's not all. But it's long overdue.


Canon is great at market segmentation, and would rather get you to buy a great video camera for video needs than make a great camera that would barely meet your needs for video.

It's like Apple wanting you to buy a phone, tablet, and watch.


Sony makes cameras that do both quite well. With Nikon's acquisition of Red, they may also up their video game.

Canon is clinging to laurels that are long gone. And this is coming from a longtime Canon shooter who likes their still-image quality. They need an FX3 killer, pronto.


Sony is great at two things: marketing and sensor development. Their marketing is so great that everyone assumes they have 90% market share, whereas the reality is that Canon has approximately double Sony's market share.

Sony makes top notch sensors and fairly mediocre cameras, and sensor development is something that has been leapfrogged multiple times in history.

Nikon is the one that has a decent chance of dethroning Canon, unless Canon ups their RF lens game.


I have certainly heard knocks against Sony cameras' reliability.

I don't assume anything about their market share; just the specs they currently offer. I wish Canon were more competitive for video shooters.

We'll see what happens with Nikon & Red.


> that's crippled by an insulting and nearly-useless micro-HDMI port

:P Feels like a page out of the Apple playbook…

“There’s a dongle for that”

https://a.co/d/ez3xF2i


Ha! But the problem isn't physical incompatibility. It's that micro-HDMI is flimsy garbage that will be loose and unusable by the end of its fifth use. Yep, fifth. That's my estimate (from experience) and I think it would hold up empirically.


Canon loves feature-crippling, but don't worry, they'll incrementally add those features back on the next 4-year cycle. I've had the R5 II for a few days and would definitely use Topaz Gigapixel if enlarging were needed.


Presumably it would take multiple raw photos and then you’d have to do the post-processing to put them together in your editing software.


High res pixel shift isn't terribly useful for MOST types of photography. And the R5 II is meant to be an action camera with a stacked sensor, so the MAJORITY of its users probably won't find much use for pixel shift. It's useful for still life and art reproduction.


It's a gimmick. Sensor resolution hasn't advanced much so they are trying to come up with "something". We could produce larger sensors, but those need a new system (bodies and lenses) and those get expensive fast (see prices of Fuji GFX100, Hasselblad, or Phase One cameras and lenses).


Pixel shift isn't that useful for a simple physical reason: shifted pixels overlap. With microlenses on top, light from the whole frame is collected. For simplicity, imagine a 1D array of segments. The first pixel covers segments 0 to 7, the second 8-15, and so on. Pixel shift will make them cover 4-11, 12-19, 20-27, ... So you get 2x the pixels, but you don't actually get 2x the resolution, because they overlap. To get 2x you would need pixels covering 0-3, 4-7, 8-11. That can only be approximated, with some accuracy, by an algorithm or AI.

The conclusion is that pixel shift is better than interpolation, but not a lossless 2x magnifier. However, it should reproduce colors better, because for each pixel we can get 2 or 3 RAW colors instead of the one in a normal shot.
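
A quick way to see the overlap, using the same numbers:

    pitch = 8   # photosite width in arbitrary units
    shift = 4   # half-pixel shift

    for start in range(0, 3 * pitch, pitch):
        print(f"pass 1 pixel: {start}-{start + pitch - 1}")
    for start in range(shift, shift + 3 * pitch, pitch):
        print(f"pass 2 pixel: {start}-{start + pitch - 1}")
    # pass 1: 0-7, 8-15, 16-23
    # pass 2: 4-11, 12-19, 20-27 (each straddles two pass-1 pixels)
    # A native 2x sensor would instead tile at width 4: 0-3, 4-7, 8-11, ...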


I mean, the R5 IBIS mode was always a bit of a toy. No compensation for motion blur, no compensation for hand held, and even when it works it's still not super-resolution without artifacts.

But it's mostly "shoot with a tripod, nobody move", and if I'm in that situation, most of the time you're better off going with a smaller FOV and taking shots for panoramic stitching.

Is there a real pixel shift use case outside of "I want a picture of my 4K monitor"?


I don't understand. IBIS is very useful for handheld photography and video. Works very well. Motion blur of the subject, obviously not.


Parent is referring to IBIS High Resolution Shift, not normal IBIS.


> Is there a real pixel shift use case outside of "I want a picture of my 4K monitor"?

Resolution matters for more than just filling the pixels on a big monitor. Many uses are more technical / less artsy, like digitizing film/prints/paintings/whatever, shooting through a microscope or telescope or, as the author mentions, avoiding artifacts by downscaling.


> Is there a real pixel shift use case outside of "I want a picture of my 4K monitor"?

Anything that requires good color reproduction. Archiving, for example (museums &c.).


Huh. I'm surprised you'd use an R5 in that situation, but I assume since it's a good chunk cheaper than an actual archival camera, it might make sense especially for smaller institutions.

TIL, thanks for sharing!


Hmm, I'm not into photography enough to follow the whole article, but what does this pixel shift mode have to do with "AI"?

Looks to me like nothing except a little clickbait in the title.


It says in the article what the problem is.


Yea, and I saw nothing about “AI”.


They brought Luminar-like ML in-body post-processing upscaling, supposedly to replace the high-res IBIS mode.


I have the R5 and tried pixel shift high resolution when it came out with a firmware update.

I was excited, yet after I tried it, it was nothing but total disappointment.

No RAW output, weird artifacts, and extremely long processing time (though this is acceptable given what the camera is doing), to name a few. I can barely say anything from that session was usable, and it's not much better than AI upscale + sharpen on the computer. Yes, better, but only a little. Much worse than I expected.

Therefore I think Canon did the right thing by cutting a half-baked gimmick from a pro body.

However, I'd be all in if they came up with a proper implementation that actually works well.



