While pixel density is good, there is no substitute for pixel size, i.e., larger sensors. Hence full-frame 35mm and larger sensors will always produce better images. I have a Sony RX100 and its images are superb for a pocket camera -- a combination of the 1-inch sensor and fine Zeiss glass.
I studied photography before digital and the same was true then: larger formats equalled higher quality particularly for landscape and scenes with specular reflections. The reason was explained to me like this: there is a tree with shiny leaves and these leaves have specular reflections (reflecting the sun directly) so there are extremely small but very bright points of light. These specular reflections hit the film/sensor and spread around a bit. If the grains/pixels are small the highlights will spill into adjacent grains/pixels and reduce sharpness.
By 'better images' people today often mean less noise and higher dynamic range.
While this is true, the returns are diminishing.
Ten years ago a 35mm sensor was 1 stop better than APS-C; today a 35mm sensor is still 1 stop better than APS-C [1]. The difference is that ten years ago an APS-C sensor was usable at ISO 800 and 35mm at ISO 1600, while today an APS-C sensor is usable at ISO 6400 and 35mm at ISO 12800. The usable range today for the smaller sensor is 7 stops (starting at ISO 100) and for 35mm it is 8 stops, whereas it was 4 stops and 5 stops ten years ago. 35mm will always be 1 stop better within the same generation [2], but that 1-stop advantage relative to the usable range (1/5 vs. 1/8) is going down. Same for dynamic range.
[1] Due to the 35mm sensor being roughly double the area of an APS-C sensor, and a stop being a factor of two (log2) in light gathered.
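For anyone checking the counting above, a quick sketch (the helper is mine; it just reproduces the figures quoted in the comment, counting each usable full-stop ISO setting from base upwards):

    from math import log2

    def usable_range(base_iso, max_usable_iso):
        # Count full-stop ISO settings from base to max inclusive,
        # as the comment does (100, 200, 400, 800 counts as 4).
        return int(log2(max_usable_iso / base_iso)) + 1

    print(usable_range(100, 800), usable_range(100, 1600))     # ten years ago: 4 vs. 5
    print(usable_range(100, 6400), usable_range(100, 12800))   # today: 7 vs. 8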
That's where you're talking about sensitivity-related quality - noise, colour rendition, dynamic range.
There are other quality issues in an image though - edge definition and depth of field, for example. All things being equal, a larger sensor with equivalent glass will always provide better edge definition, and having both 35mm and m43 chip bodies I would say the difference is noticeable. And for depth of field it depends what you want. I can isolate a subject much better with a large-chip camera, but that's no use for a macro image, where the small-chip camera is better.
This depends on the quality of the lens more than on the sensor.
Bad lens, bad image, a larger sensor doesn't help in any way. Good lens, good image, a smaller sensor isn't worse.
If you want shallow DoF, you can get plenty of m43 and APSC (16/1.4, 56/1.2, 90/2) lenses for shallow DoF. If you want razor thin DoF - nose sharp, eyes blurred - you need <=1.4 35mm glass.
If you want full body portrait with blurred near background you need medium format lenses (or far away background).
The lens drives these concerns, yes, but the pixel size still has significant impact.
If I build a lens that will cover a hypothetical 16MP sensor with good quality over the whole sensor, it will also cover a hypothetical 16MP sensor of half the dimensions. However the smaller sensor demands a higher optical resolution (line pairs / mm) than does the larger to resolve those 16MP with the same clarity, and so the smaller sensor will deliver lower effective output resolution (again, in terms of resolvable line pairs) than will the larger sensor given the same glass.
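To put rough numbers on that (a sketch with illustrative sensor dimensions, not figures from the comment; "required" here means the Nyquist line-pair frequency at the sensor):

    from math import sqrt

    def required_lp_per_mm(sensor_width_mm, sensor_height_mm, megapixels):
        # Nyquist line-pair frequency a lens must deliver to feed every pixel
        pixels_wide = sqrt(megapixels * 1e6 * sensor_width_mm / sensor_height_mm)
        pixel_pitch_mm = sensor_width_mm / pixels_wide
        return 1 / (2 * pixel_pitch_mm)   # one line pair spans two pixels

    # The same 16 MP on a 36x24 mm sensor vs. one with half the linear dimensions
    print(required_lp_per_mm(36, 24, 16))   # ~68 lp/mm
    print(required_lp_per_mm(18, 12, 16))   # ~136 lp/mm - the lens must resolve twice as finely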
And if I'm searching for shallow depth of field, again the larger sensor has inherent advantages. Physics isn't your friend once you start trying to build arbitrarily fast small sensor lenses to replicate what's easy with a larger sensor lens.
I'd be delighted if these concerns weren't true; my m43 camera is much lighter to cart around than my FX one :) And yes, we can now get DR and noise performance from a smaller chip that we used to not be able to with a larger chip - but there are definite advantages still to the larger chip that won't go away.
Larger sensors allow for shallower depth of field and higher resolution + dynamic range but generally slower readout.
Faster lenses help up to a point, but since f-stops under 1 aren't really a thing, you're not going to replicate a large sensor by putting a fast lens on a smaller sensor.
Perhaps if you have the 'same' image shot with different cameras side by side you could point out some minor differences. Sure. But if I showed you (or anyone) a shot with no context and background information you wouldn't have a clue. And that's a good thing. Equipment shouldn't matter, and now the playing field is more level than ever.
I shoot pics of my kid with a $3500 35mm DSLR and constantly save pictures to my phone. Whenever I show people the pics on my phone, they almost immediately say how amazing the photos are and ask what kind of phone this is.
I'm not some distinguished photographer, but with the right camera and lens, the color, contrast, bokeh, light, some minimal composition (not in that order) all come together very nicely.
Same holds true for video as well.
My point is, this comes up again and again, without any context that the pics weren't taken with the phone.
It's not the $3500 35mm DSLR but the better lens. Cheap lenses on 35mm look worse than an iPhone X. People see shallow DoF, creamy bokeh or flat faces (taken with 85mm++ lenses) and wonder why they didn't see this on other phones. So they ask about the phone.
"Same holds true for video as well."
Which is the reason everyone seems to shoot with a GH5S and no longer with 5DMXs.
You're absolutely right... I was just trying to keep my comment short, and saying $3500 implied a similar level lens as well.
Also, and I think we agree on this, "better lens" doesn't need to mean "expensive": I can get very similar results from a cheap 40mm f/2.8 vs. a 10x-the-cost 50mm f/1.2 when the lighting is difficult and I can use a very high ISO.
As for the video part of your comment, let's agree to disagree.
I was trying to say that I get people commenting on how good my videos are because they're used to seeing video shot on mobile phones.
However, I'll take Canon color and 1.74x crop 4K over a 2x crop any day... shit 500mbps Canon codec be damned.
I think the thing to remember is that modern DSLRs are going to continue to produce decent images across challenging lighting conditions. A DSLR buys you more usable shots to select from later.
You are correct, On a sunny day both types of device will produce similar looking 1000 pixel wide web-ready exterior landscape shots. Now take them into a dark cathedral with high stained-glass windows, try for a quick portrait of a priest walking by holding a candle, and the DSLR will still get the shot, and it will look great. Smartphones cannot do this...yet.
Depends on the style of photography. I tend to go for hyperfocal-distance landscapes using a 36 megapixel FF sensor. If you want to call that an edge case then I'd agree.
I'm not too hung up on shallow DoF - it looks nice for a portrait or picture of your meal but I think there's always room for higher resolution sensors capturing a world of visual information (provided they can produce great dynamic range which a lot of recent/new high resolution FF sensors do).
APS-C, similar to M4/3's 16-20 megapixel sweet spot, seems to be stuck at 24 megapixels for good dynamic range results.
Is it really true that larger pixels are better, or is it really only true that sensing more photons is better?
Specifically, what is the reason multiple sensors with smaller pixels could not perform as well or better if their combined total sensor area was larger?
The L16 is an example. It had issues, but they mostly seemed like implementation problems rather than inherent limitations.
After all their challenges, the fact that they still apparently raised $100M in capital would seem to suggest something better is possible. https://spot.light.co
Also, they have bigger hardware than today's smartphones would want, but in principle I've never understood why smaller pixels are inherently worse, as opposed to total photons collected.
I forgot the specific term that describes this, but basically each pixel on a sensor is going to have a threshold for whether it activates (much like a neuron's action potential). Say it's 20 photons. If it gets less than that, it is unable to see anything at all. Meanwhile a larger pixel will see something, because it will be able to capture more than 20 photons in a given unit of time. On the other hand, several smaller pixels spread across many sensors will each fail to reach that threshold and thus see nothing.
The L16 has some low light and dynamic range problems, I believe owing to this very situation.
I believe the term you're looking for is reciprocity failure[1] (or the Schwarzschild effect), though apparently it doesn't happen with digital sensors - only film. Noise becomes an issue in similar situations, but it's not affected by the number of photons hitting the sensor.
Correction: Shot noise[2] is related to the number of photons hitting the sensor.
You might be referring to the dark current, which is the noise floor for any signal coming out of the CCD. There doesn't need to be an activation threshold of a particular number of photons (maybe there is, though?) for the dark current to reduce your signal to noise ratio.
There are fundamental physical limits at play. Smaller sensors, with smaller pixel size are diffraction limited more quickly. This new sensor has a pixel pitch about one eighth of what a modern APS-C DSLR has. That sensor would be diffraction limited at around f1.4. Most camera systems are more limited by lens resolution than diffraction. Fast lenses like those found on smart phones often have issues with optical aberrations. The lens quality issue is both a financial and technological one. It is possible to create a lens that is diffraction limited, but even for a small sensor it is very expensive in practice.
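As a rough sanity check on those f-numbers (this uses the common "Airy disk within about two pixels" rule of thumb; the exact criterion varies, so treat the outputs as ballpark):

    def diffraction_limited_fnumber(pixel_pitch_um, wavelength_um=0.55):
        # Airy disk diameter ~= 2.44 * wavelength * N; keep it within ~2 pixels
        return 2 * pixel_pitch_um / (2.44 * wavelength_um)

    print(diffraction_limited_fnumber(0.8))   # ~f/1.2 for a 0.8 um pitch
    print(diffraction_limited_fnumber(3.9))   # ~f/5.8 for a typical 24 MP APS-C pitch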
The difficulty is actually more with making it flat, so the phone can be thin. If you have length to spare and only a chip to record onto, you can get awesome f/2 optics (diffraction limited) with only a Schmidt corrector and otherwise two spherical surfaces and 3 flat ones (not counting the flat, outside-facing front of the Schmidt plate) in a field-flattened Schmidt camera. The only strong downside, apart from the limit on FOV at around 4 degrees for this simple design (slightly better designs should allow for up to about 30 degrees), is that it's twice as long as the focal length. It should still allow ultra-tele shots by resting the tube on the shoulder, without anything to actively compensate for rotation shake.
The issue is that not all noise scales linearly with area. Specifically, a larger pixel can hold more electrons, which means you have more dynamic range. Compare e.g. the CMOSIS CMV12000 with the CMV50000. They are pretty much the same, except that the latter has slightly smaller pixels to fit them all inside the chip. The well depth is lower for the latter, without the readout noise compensating fully. The 12 bits of ADC resolution are not sufficient to handle the full well depth of the CMV12000 though, which means that at the base ISO of the sensor, i.e. using the maximum photon capacity of the pixels, you have nearly no noise in the image with the CMV12000. Once you activate the (iirc) 3x analogue gain, you only have a third of the maximum photon capacity left to use, as the rest just reads out as full brightness, but you have about 1 LSB per electron (2^12 == 4096, 15k electrons / 3 == 5000 electrons ~ 4096). So this means that you have a little noise in the last few bits, though not much.
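Roughly, the arithmetic behind that, spelled out (the ~15k electron well and 12-bit ADC are the figures from the comment above; treat them as ballpark rather than datasheet values):

    from math import log2

    def electrons_per_dn(full_well_e, adc_bits, analog_gain=1.0):
        # How many collected electrons one ADC step (DN) represents
        usable_well = full_well_e / analog_gain   # above this the ADC clips
        return usable_well / (2 ** adc_bits)

    print(electrons_per_dn(15000, 12))        # ~3.7 e- per DN: quantisation swamps read noise
    print(electrons_per_dn(15000, 12, 3.0))   # ~1.2 e- per DN at 3x analogue gain
    print(log2(15000))                        # ~13.9 stops ceiling set by the well itself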
These sensors are awesome btw. The smaller one can actually handle an about equally expensive (if you count a device to handle the data rate) catadioptric design, i.e. using both lens(es) and mirror(s) - in this case a spherical mirror, a spherical lens, a non-curved circle of glass, and a slightly wavy ground pane of glass to correct the distortions caused by the spherical surfaces - with a 400mm focal length (about 600mm 35mm-equivalent) and f/2 as far as DOF goes, with a little bit of light loss and some slight mirror-optics bokeh due to the sensor being in the middle of the 900 mm long tube, 200mm inner diameter. This is a design that does not work well without being able to mount the sensor right in the middle of the tube, and any alternatives are really hard to handle. Oh, and yes, this design is _sharp_. The geometric error due to the not perfectly compensated spherical surfaces is about a 4 micrometer circle, and the diffraction circle of confusion is about the same, so combined the sharpness would be about perfect if you were to use a Bayer filter to get color images. The thing would weigh about 2 to 3 kg, not including the power supply, but including all that is needed to provide the raw pixel data 12MP@10bit@300fps via QSFP.
Larger sensors are better in respects other than megapixels: diffraction limits hit at much higher f/stops, there is higher dynamic range and less noise, much better microcontrast as a result.
With smaller sensors and ND filters you don't need higher f/stops, because the larger DoF from the sensor size means more is sharp at smaller f/stops. One reason I wish more cameras had built-in ND filters, though.
Yup. The RX100 is an amazing little thing - Fits in my pocket but packs a huge punch. People are always surprised (almost put off) when they see me shooting with a tiny point and shoot.
It really is awesome and when friends ask for camera recommendations I always recommend something small because it's the only thing they'll carry around (and thus use) consistently. I know so many people that bought a DSLR as their starter camera that now permanently sits as a house decoration because "it's too heavy."
There's another essential factor that people just don't seem to be talking about - the decline of print. You don't need 40 megapixels of useful resolution if your photos will be viewed on a 2mp iPhone screen. You don't need a sensor with 14 stops of dynamic range if your images will be displayed on a screen with 6 stops of dynamic range.
Sensor quality matters much less than it did at the beginning of the DSLR revolution, because the bottleneck in the image chain lies elsewhere. The vast majority of photographers rarely if ever print their images, so most of the advantages of a large sensor are rendered moot.
A 4x6 print didn't have crazy high resolution. They usually use 1800x1200px as input resolution (~2.2MP). Your iPhone X is actually slightly more detailed (2436x1125=2.7MP). And if you're talking about stops of dynamic range, print has always been less than a screen.
But if you want a photo to look crystal-clear on your iMac's slide show screen saver? That's 5120x2800px=14.3MP. It allows for stunning detail which consumers just never had access to before. (Even the iPad Pro is 5.6MP, more than double a traditional photo.)
But the real advantage, of course, in both megapixels and dynamic range is in post-processing: you can crop to the quarter of the photo that's interesting and bring out the detail in the shadows and it's still high-quality enough to use as an eye-catching retina-resolution header for your Medium post. Remember, phones don't have deep optical zoom lenses, so megapixels have to do that job instead.
Which is why most consumers used fairly crummy fixed-lens cameras with fast film and were perfectly happy with digital cameras with ~4MP effective resolution. You do need a whole bunch of resolution if you're printing to double-page magazine spreads, large-format fine art prints or billboards, which is why professional photographers (other than photojournalists) were relatively late adopters of digital cameras.
>And if you're talking about stops of dynamic range, print has always been less than a screen.
At your local photomat, yes. A good giclée print has about 11 stops of dynamic range. A normal 8-bit LCD has about six stops of dynamic range and a really good OLED has about ten stops under ideal viewing conditions.
My broader point is that pretty much all modern cameras are more than good enough for 99% of users. It's a tiny minority who will actually see the benefit of better cameras in the images they produce. Most people will only ever notice an improvement in resolution if they're pixel-peeping in Lightroom and will only notice an extra stop of dynamic range on the histogram. If you go out and buy an entry-level DSLR with a kit lens, you probably won't get noticeably better results than just using your iPhone. That's a good thing for consumers, but a bad thing for camera manufacturers.
This is not entirely accurate. While many people do view their photos at small sizes on pretty poor screens, the dynamic range of the entire analog processing chain was typically much less than what even entry-level digital cameras offer today.
Also, for any print that is intended to fit in an album or a frame, there is little benefit over 3 MP. For really, really big prints? Maybe 8, to err on the safe side. Like most analog reproduction media, prints tend to be very forgiving.
Someone viewing their 12 MP photos on a 500 ppi HDR mobile screen is pretty much getting the best comparable experience since the dawn of image capture.
Which of course doesn't mean that the tiny, faded black and white print from a hundred years ago you find in your grandpa's old wallet doesn't convey much more emotion.
I got a Samsung S9 from work. Holy shit those images are sharp.
Yeah at night the software cheats like hell but still.
And while my mobile phone has tons of great features (super slow-mo etc.), my DSLR is heavy and its autofocus is okay-ish.
I really expect something good from Canon's upcoming 90D, otherwise I'm not sure how long hobby photographers will keep using DSLRs.
Of course there are differences in bokeh, low light performance, macro, tele etc., but I was very impressed by what a Samsung S9 can do already. I can't imagine that they will stop, and you can do a lot with software.
I have an S8, and I actually stopped using my aging Canon 450D as the pictures it takes are so good (and I'm only taking family pics these days). As long as you don't need to zoom, the quality of the images modern phones can take really is incredible.
I looked at the S8 recently; how do you find the curved glass at the edges in use? I found the distortion distracting, and thought the colour was shifting.
In my costly experience - the curves make it guaranteed to crack the glass if dropped from above 40 cm if not using a bulky case. Mine vibrated itself off a low coffee table and onto hardwood floor, cracking three corners. Because of the curves, the cracked glass would fall off over the span of a few weeks, leaving razor sharp shards in my pockets.
And don't get me started with the non-existent palm/grip rejection.
It takes "you're holding it wrong" to a new level.
>While pixel density is good, there is no substitute for pixel size, i.e., larger sensors.
Actually, there is: improved sensor technology. A sensor is not just a passive collector of photons (which would make size the only thing that matters).
All other things being equal, newer sensors usually improve on thermal noise, dynamic range, etc. over older sensors. When things are not equal, a larger sensor that is 5-6 generations old might still have worse low-light performance than a newer, smaller one.
There is not much room to improve sensor technology. Photon shot noise is already the dominant factor. There is a bit of improvement available by modifying the bayer filter to filter less. But other than that, you just need more photons, which means a bigger sensor or a bigger aperture, both of which imply more glass. Or a longer exposure. Higher framerates and computational photography help make longer exposures an acceptable option.
Specifically, something like 200+ fps pipelined global shutter. That means you lose less than 1% of the time between exposures at these high frame rates. But maybe rolling shutter at similar speeds might be able to handle it too... It seems that is mostly trouble if shooting at fast shutter speeds with a non-stationary scene/camera, due to the wobble that e.g. periodic roll shake causes.
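The shot-noise part of this in one line: photon arrival is Poisson, so SNR grows with the square root of the photons collected, whether they come from a bigger pixel, a bigger aperture, or more stacked frames. A toy sketch:

    from math import sqrt

    def shot_noise_snr(photons):
        # Poisson statistics: noise = sqrt(signal), so SNR = sqrt(photons)
        return photons / sqrt(photons)

    print(shot_noise_snr(1000))        # one short exposure
    print(shot_noise_snr(1000 * 16))   # 16 stacked frames (or 16x the area): 4x the SNR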
One other problem with high-MP cameras with small pixels vs. lower-MP cameras with larger pixels is that they have to process all those extra pixels, and they need to do a lot more work just to get to the same quality. That translates into needing more processing power and getting less battery life.
Can't we have a combined (computational) approach, where we use a slower shutter-speed to get rid of noise, while at the same time we use a fast shutter-speed to avoid motion blur? Or use several pictures taken with a fast shutter-speed to reduce noise?
You can have smaller sensor sites (pixels) and downsample to a lower resolution in order to average away most of the noise. That's one approach used by, for example, cameras like RED's new 8K film camera.
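A tiny numpy sketch of that averaging effect, using purely synthetic Gaussian noise (real sensors also have shot noise, which this ignores):

    import numpy as np

    rng = np.random.default_rng(0)
    # Flat grey frame with additive Gaussian noise, roughly 8K-sized
    frame = 0.5 + rng.normal(0, 0.05, size=(4320, 7680))

    # 2x2 binning: average each block of four photosites into one output pixel
    binned = frame.reshape(2160, 2, 3840, 2).mean(axis=(1, 3))

    print(frame.std())    # ~0.050
    print(binned.std())   # ~0.025 - averaging four samples halves the noise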
"One reason for the saccadic movement of the human eye is that the central part of the retina—known as the fovea—which provides the high-resolution portion of vision is very small in humans, only about 1–2 degrees of vision, but it plays a critical role in resolving objects.[citation needed] By moving the eye so that small parts of a scene can be sensed with greater resolution, body resources can be used more efficiently. " from https://en.wikipedia.org/wiki/Saccade
The human eye is enormous compared to a smartphone camera, and the retina is still larger than the 1 inch squared sensor he mentioned. What exactly are you talking about?
There’s no shortage of large sensors, they just don’t make sense in a phone unless you want it to be huge. In fact there’s a migration upwards in camera sensor sizes owing to the decreasing size of the support electronics, e.g. you can now get a Hasselblad medium format camera that’s smaller than a DSLR. Of course the lenses are huge.
I for one would love for Apple or Samsung or whoever to release a photographer's smartphone with a micro 4/3 mount (it's the only properly documented multi-vendor lens mount system, and it also seems to be the sweet spot for trading off image quality, sensor readout speed, and size), BUT the phone would be at minimum 20mm thick.
No shortage? Are you kidding? Only Sony and Canon make full-frame sensors right now at mass-market scale (Nikon is supplied by Sony), and the number of cameras available with not-quite-medium-format sensors you can count on one hand. This is a far cry from the situation with film cameras, where you had at least several dozen medium format cameras available for purchase at once.
Are you talking about sensors or film? Kodak and Fuji dominated film, and today Sony and Canon dominate sensors (Samsung had a pretty solid go of it and simply gave up). Lots of other companies make sensors but they aren’t competitive with Sony.
There are plenty of camera companies, mostly using Sony sensors or Sony fabs.
The lack of manufacturing diversity isn’t a shortage. It’s a different problem. Is there a shortage of cpus?
What about CMOSIS? Well, to be fair, the CMV50000 is still just sampling without Bayer, but it's a really nice sensor, and you can't really say that its brother the CMV12000 is bad.
I wouldn't quite agree with your statement about there being no substitute for pixel size. The trade-offs between pixel size, dynamic range, and noise, when making fair comparisons, are complex. Not to mention that today it seems that every phone is taking advantage of computational image processing in order to give a more "DSLR like" image, while ILC cameras aren't implementing this at all. I'm sure in many cases a cellphone camera will shoot a better image than something like your RX100 due to this.
> Not to mention that today it seems that every phone is taking advantage of computational image processing in order to give a more "DSLR like" image, while ILC cameras aren't implementing this at all.
When it comes to ILC cameras, the role of processing is fulfilled by the photographer's editing software (e.g. Lightroom, Capture One Pro, Photoshop, RawTherapee, Darktable etc.). Especially these days, with the camera market being decimated by smartphones, the kind of person who uses a dedicated camera usually wants control over the processing.
> I'm sure in many cases a cellphone camera will shoot a better image than something like your RX100 due to this.
The out of camera JPEG on a smartphone might look nicer than an out of camera JPEG on an RX100M6 but after a bit of processing with the right software, the RX100M6 will blow it away. The resolution, dynamic range and noise characteristics are all much better. The RX100M6 can even shoot 24fps RAW for quick HDR brackets if you need it to.
Smartphones will definitely produce a nice image easier than a real camera but a real camera in the end, with the right work, still produces a better image.
It's sad that these DSLRs can't yet use the real fast video-mode non-binned sensor readout to just buffer 3 wide bracketed exposures to RAM and then write them out slowly. One of those exposures will be rather long anyway...
Shooting HDR quickly has the benefit that, while you want something to reduce rotational shake, you don't need to hold position much longer than the exposure time itself dictates. You can use the time you need to move to a new angle/direction to compress and store the RAWs, as barely anyone has hardware capable of yanking the camera to a new angle in a few milliseconds, if only due to the power density needed to make something that can do this compact enough for casual handling. Imagine it like a much, much faster clockwork, whose purpose is to deliver dozens of kilowatts for a few milliseconds to accelerate the camera and slow it right back down. Almost no one has such a thing that would keep the device steady during the exposure, yet permit very short spacing between exposures that change the direction it is pointing by more than 20% of the FOV. If you were to rotate the camera that fast at over 50fps, good luck with the light you need (hint: your lens can't provide the DOF at the aperture you'd need to do this even in full sunlight (sunny 16 rule)).
> It's sad that these DSLRs can't yet use the real fast video-mode non-binned sensor readout to just buffer 3 wide bracketed exposures to RAM and then write them out slowly.
They essentially can. The A7RIII and RX100M6 can both shoot RAW to their buffers at 24fps then write it out. A full bracket usually takes less than a second and works just fine handheld.
> Shooting HDR quickly has the benefit that, while you want something to reduce rotational shake, you don't need to hold position much longer than the exposure time itself dictates. You can use the time you need to move to a new angle/direction to compress and store the RAWs, as barely anyone has hardware capable of yanking the camera to a new angle in a few milliseconds, if only due to the power density needed to make something that can do this compact enough for casual handling. Imagine it like a much, much faster clockwork, whose purpose is to deliver dozens of kilowatts for a few milliseconds to accelerate the camera and slow it right back down. Almost no one has such a thing that would keep the device steady during the exposure, yet permit very short spacing between exposures that change the direction it is pointing by more than 20% of the FOV. If you were to rotate the camera that fast at over 50fps, good luck with the light you need (hint: your lens can't provide the DOF at the aperture you'd need to do this even in full sunlight (sunny 16 rule)).
I'm not sure what you're getting at here. It sounds like maybe you're after some kind of automated panorama bracketing or something?
Yes, a device for such fast rotating would allow for fast panorama capture, allowing things that are not possible without really fast capture of a high resolution panorama. E.g., things like high resolution 360 degree timelapses, etc.
No, when you have a bigger pixel size you also have better dynamic range and S/N ratio.
There is no substitute for pixel size in sensors with the same technology.
If you compare different sensor technologies, like front-illuminated vs. back-illuminated, then at the same pixel size the back-illuminated one has clear advantages.
The computational image processing can be applied also to bigger sensors, so it doesn’t make any sense to compare the raw capacity of a big sensor camera with the processed output of a phone camera.
I don’t really think that you have ever seen the output of an RX-100 if you judge it inferior to cellphone cameras.
The reason I say it is complex is that depending on the sensor technology, resolution, image pipeline, etc. there are many different ways you could compare image quality. If you compare extremely different sensors, like a 1/3" and full frame, then of course the larger sensor will be better, but comparing say a 1/2" sensor with a 1" sensor is less clear.
The smaller-pixel sensor will have better resolution in lp/mm, for example, and if downscaled to the same resolution as a lower-resolution sensor, in many cases you will end up with an image with better detail and less noise (due to averaging out the noise). This is why cameras like the Sony A7iii take a 6K readout and downsample it to 4K for video output.
My comment about computational processing still stands though. While a RAW file will give you much more ability to process, complex computational processing just isn't used to nearly the same level in any raw processor I'm aware of (DxO PhotoLab is probably the most advanced at this, but it is still relatively basic), which does make it a valid comparison at the moment. The simple fact is that small sensors these days have closed the gap with larger sensors significantly; while there are areas where they are inadequate, there are other shooting conditions where the differences are minimal - a statement I would have scoffed at when I started working with cameras 6 years ago.
The sensor size has nothing to do with the image quality.
Bigger sensors, at the same pixel count, have better quality only because their pixels are bigger.
If you have a 100 Mpx full frame it will have worse quality than a 1/2.3” at 8 Mpx if they have the same technology.
The possibility of 4x more dynamic range (2 stops) is very exciting. DPReview's writeup [1] mentions the possibility that each of the subpixels could be set to different sensitivities and that each one's information could contribute to the larger pixel's output, increasing its overall dynamic range at the expense of resolution. This simple approach isn't practical with the Bayer pattern because all adjacent pixels have different color filters.
EDIT: I actually found a little explainer on quad bayer on Sony's site [2] for an industrial sensor from 2017. Instead of changing each subpixel's sensitivity, they change how long the subpixel collects light.
The fact that they have to explicitly state "effective" every time, along with a footnote that vaguely explains "Based on image sensor effective pixel specification method", strongly suggests that this isn't actually a 48MP sensor.
...and indeed, the link in the other comment here https://news.ycombinator.com/item?id=17597939 explains it best: "This requires processing to convert the 'Quad Bayer' data into an approximation of what a 48MP Bayer sensor would have captured."
Each of the R-G-B sensitive photoreceptors in single-chip sensors are arranged using a Bayer filter pattern which looks like a mosaic. The filter pattern is 50% green, 25% red and 25% blue.
The "pixels" of the final image needs to be calculated from this pattern (called de-mosaicing or de-bayering). There is not a 1-to-1 correlation between RGB pixels in the sensor and RBG pixels in the final image, and the final image size may depend on different algorithms to remove artefacts caused by the filter.
There are exceptions like the Foveon sensor where the red, green and blue sensors are stacked behind each other or monochrome sensors with interchangeable filters used in scientific applications, but those are not found in common consumer level cameras.
Sigma does in fact sell Foveon sensor based cameras. The latest is the "SD quattro" (APS-C 1.5x crop from 35mm full frame) and "SD quattro H" (slightly larger, about 1.3x crop).
I used an earlier Foveon the "quattro dp2" (APS-C, 45mm focal length equivalent), they have different characteristics from Bayer cameras. They get high marks from landscape photographers where the shooting uses the lowest ISO setting and fast operation is not required.
They have less dynamic range, however given the amount of investment into Bayer tech vs. Sigma's spending on Foveon, that might just be due to more advanced engineering of Bayer and not something inherent in the Foveon design.
Quattro is unfortunately no longer "true Foveon", as they do something akin to 4:1:1 downsampling, making it more Bayer-like and, IMO, obliterating the main advantage so apparent on Merrill sensors. When I take a picture of a model's face on a top-end Nikon, I can see "mushy" areas on the skin all the time when zoomed in, and moire on clothes, whereas with Merrill I get unbelievable sharpness (like individual hairs); I can't go back to Bayer as it can't be unseen, and working with Bayer-like cameras just leaves a deep sense of artistic dissatisfaction. Also, when shooting clouds on Merrill, the dynamic range is way better than any camera I've seen; some call it the "cloud camera" for this reason.
This is a hotly debated issue on the Sigma forum I linked to.
The "Merrill" sensors (named in honor of Dick Merrill, an engineer and photographer that did a lot of work on that generation of sensor) are 1:1:1 as you mention - each of the photosites pick up all 3 colors.
The "quattro" sensors have the top, blue-sensitive, layer at the full resolution, with this layer also capturing the luminance of the image (which makes for stunning black and white, even at ISO 3200). The lower 2 layers are at a lower resolution than the blue layer.
I did get very high quality 13x19 inch prints from the quattro, but I have never owned a camera that used the previous Merrill sensor, so I can't give a direct comparison from my own experience.
Does the output from a Bayer pattern bother you like that when you shrink it to 50%? Because "100%" means two very different things between those kinds of sensor.
Yes, I can't get the same quality from downscaling Bayer / pixel binning :( Maybe the 100MPx Hasselblad is getting it done, though, from what I've seen. I guess the averaging interpolation is missing out on some signals picked up by my eyes/brain, and once you finally see them on Foveon and can't see them on Bayer, you can't unsee it any longer. Of course, Foveon has its own problems, some of which might be related to immature manufacturing tech, like banding or purple-green patches in shadows, but when paired with top-end optics in Sigma Art lenses, I simply can't avoid feeling unhappy with Bayer sensors. I have perfect color vision as well, so I can spot issues where most people can't.
The existence of de-mosaicing doesn't stop the question from being answered, it just means different use cases get different amounts of quality.
Since they listed the image element size as .8 micrometers, we can do simple math to see that they're quoting the biggest possible number. There are 48M small color cells, 12M big color cells, and 3M RGGB cells. So your luma data is amazing, and your chroma data is garbage.
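Spelling that math out (assuming the usual 4:3 layout, i.e. an 8000x6000 grid, which is an assumption rather than something stated in the announcement):

    photosites = 8000 * 6000       # ~48M cells at 0.8 um, assuming 4:3
    pitch_mm = 0.8 / 1000

    print(photosites)                          # 48,000,000 luma samples
    print(photosites // 4)                     # 12,000,000 same-colour 2x2 blocks
    print(photosites // 16)                    # 3,000,000 full RGGB 4x4 groups
    print(8000 * pitch_mm, 6000 * pitch_mm)    # ~6.4 x 4.8 mm of active area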
The Foveon sensors [1] have stacked diodes at each pixel instead of a Bayer (or similar [2]) sensor. The Foveon sensors don't require demosaicing. Some more recent cameras move the Bayer filter one pixel two additional times to capture and record each color for a single image [3] with some obvious limitations for moving subjects.
Trying to capture some nice sunset photos while on a bike trip yesterday got me thinking: with these high-MP sensors, could one use an augmented Bayer filter to get higher dynamic range?
Instead of just the same RGB pixel filters all over, mix in pixel filters which are darker (passes less light). Then use this info during de-Bayering to construct a HDR image directly.
Maybe a very silly idea, but would have been fun to play around with (maybe I'll write a simulator).
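For anyone who wants to play with the idea, a very rough starting point for such a simulator. Everything in it is made up for illustration: an ideal noise-free sensor, a 2-stop ND over half the photosites, and no de-Bayering step:

    import numpy as np

    def merge_split_sensitivity(scene, nd_stops=2, full_well=1.0):
        # Expose each site pair once: one clear photosite, one behind an ND filter
        bright = np.clip(scene, 0, full_well)                  # clear sites clip in highlights
        dark = np.clip(scene / 2 ** nd_stops, 0, full_well)    # ND sites keep highlight detail
        # Trust the clear site until it clips, then switch to the rescaled ND site
        return np.where(bright < full_well, bright, dark * 2 ** nd_stops)

    scene = np.array([0.1, 0.9, 2.5, 3.8])    # linear radiance, 1.0 = clip point
    print(merge_split_sensitivity(scene))     # highlights above 1.0 recovered up to 4.0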
They do this on certain cameras, look up "Dual ISO". Effectively they make every other line more sensitive, increasing dynamic range when they merge the lines together. It's not a perfect solution, moire and other artifacts increase, and you lose resolution. The trade-offs are often worth it, especially for video.
HDR is as simple as taking multiple exposures at different EV values. It was often done with sheet film cameras, the two negatives being used to create the final print.
This sensor is different. R, G, and B each have their own 2x2 blocks which are arranged into the Bayer pattern over a 4x4 block. Conversion to the full 48 MP is a step beyond Bayer demosaicing.
- The Nikon D1x had about 6 million rectangular photon wells on the sensor. But by doubling them up in processing (to make square final pixels), Nikon claimed about 10 million effective pixels. Needless to say, when actual 10 MP sensors debuted (sensors with 10 million photon wells), everyone stopped using their D1x's.
- Fuji used a diagonal array of photon wells on the sensor in the Finepix S2 camera, and claimed that through interpolation, this provided 12 megapixels of effective resolution. Again, once actual 12 MP sensors debuted, the difference was obvious.
- Sigma used a Foveon sensor with about 3.5 million photon wells in their SD9, but because each photon well independently sensed red, green, and blue wavelengths (at different depths in the well), Sigma claimed a "3x factor" that allowed their software to produce about 10 million effective pixels in the final image. How'd that go... you guessed it, the vast majority of people preferred sensors with 10 million photon wells.
In short, there are plenty of examples of using computer processing to produce a high number of "effective" pixels, but that doesn't mean the final image will meet user expectations. I predict a pretty obvious difference between an "effective" 48 MP image from this stacked sensor array, vs. an image from, say, a Nikon D850.
That said, even though it's a compromise... compromises sometimes make sense. A D850 is very heavy and very expensive.
EDIT to add: The D1x was good enough to convince National Geographic to use that camera for their first all-digital photo story. I went to the live event at Nat Geo headquarters where they had some 20x30 prints on display, and they did look amazing. I believe they took the D1x 10MP image and sized it up using Genuine Fractals.
I have already sold my Nikon camera and currently selling all my lenses to permanently switch to Sony. As far as I know, Sony is the only company currently doing real innovation in the field of image sensors.
Are you talking about going from DSLR to mirrorless? If so, I'm also toying with the same idea, but I'm unsure about when to switch.
DSLR tech has stagnated meaning they keep their value, so presently I can sell with a small loss, while mirrorless tech is rapidly improving so today's models will lose their value quickly - not a good investment. However, a lot of people seem to be making the switch, which will drive down the value of DSLRs over time, so switching too late means making a larger loss on my DSLR.
Not to mention the cost effectiveness of DSLRs vs. mirrorless cameras. From what I've read, mirrorless are still behind.
I suppose I should wait for the release of a mirrorless model that rivals similarly priced DSLRs at a time when the rate of improvement has lessened somewhat to maintain its value.
> while mirrorless tech is rapidly improving so today's models will lose their value quickly
It is improving very slowly, actually, by any standard of improvement. I have an almost 6-year-old midrange MFT camera, which I wanted to replace with the latest and greatest MFT, but things haven't changed in a significant way.
I've actually spent a holiday carrying around both the old camera and a "latest and greatest" APS-C, and taking the same pictures with both. While there was a difference, it wasn't as large as one would expect from a ~5 year timeframe. Of course, I must mention that I don't generally take pictures in extremely challenging conditions (very low light, very fast focus speed, etc.).
On top of that, the MFT market has radically changed in the last few years. Nowadays the best sensors are in the top-of-the-line cameras, which are very expensive (on par with APS-C), and it seems they're not permeating down to the midrange segment, as they used to do.
In other words, buying a midrange MFT today gives almost no noticeable improvement compared to a 5-year-old midrange MFT. Buying a top-of-the-line MFT gives an advantage that is not as large as expected, while also being very expensive.
I don't imply that using MFT doesn't make sense (it's my favourite format), however, it's nowadays far from being revolutionary.
>It is improving very slowly, actually, by any standard of improvement. I have an almost 6-year-old midrange MFT camera, which I wanted to replace with the latest and greatest MFT, but things haven't changed in a significant way.
Yeah, just 4K at 200mbps (GH5 and co) and 10bit color, 6MP snapshots from video, 5 axis image stabilisation that's rock solid, sensor-shift 40+ MP images, much better low light, and so on.
>While there was a difference, it wasn't as large as one would expect from a ~5 year timeframe.
Analog cameras image quality didn't have ANY difference in a 5 or even 20 years timeframe, so there's that...
>Nowadays the best sensors are in the top-of-the-line cameras, which are very expensive (on par with APS-C), and it seems they're not permeating down to the midrange segment, as they used to do.
Err, Panasonic for one has several cheap models with essentially the same sensor as GH5.
>I don't imply that using MFT doesn't make sense (it's my favourite format), however, it's nowadays far from being revolutionary.
MFT does not exhaust mirroless (which is what the parent spoke about).
This is an extremely approximate answer, so I won't follow up.
> Yeah, just 4K at 200mbps (GH5 and co) and 10bit color, 6MP snapshots from video, 5 axis image stabilisation that's rock solid, sensor-shift 40+ MP images, much better low light, and so on.
None of them has anything to do with sensor quality in a strict sense (dynamic range / color resolution / low light sensitivity).
I've personally done real-world tests with old and new sensors (difference of more or less 5 years), and the generational advantage has been significantly lower than the previous 5 years.
In particular, the "low light" is far from being "significantly better". On paper, in best cases, the best sensors from each mirrorless, including Fuji, have gained more or less one stop.
In real world, less, and I've tried it.
> Analog cameras image quality didn't have ANY difference in a 5 or even 20 years timeframe, so there's that...
Irrelevant.
> Err, Panasonic for one has several cheap models with essentially the same sensor as GH5.
They're not the same. With the vague term "essentially" one can make any argument.
> MFT does not exhaust mirroless (which is what the parent spoke about).
This is a fair point, but the slowdown applied to all the other mirrorless as well. Probably the most interesting/significant improvements have been the X-Trans sensor, and the A7S; the former probably made, in the respective segment, the largest difference.
> Yeah, just 4K at 200mbps (GH5 and co) and 10bit color, 6MP snapshots from video
This is a good thing to point out, because video really has progressed substantially in the last five years. But most people still can't make videos like they can photos, and 10-bit colour and high-bitrate 4K are both totally useless to the average person, or even an enthusiast photographer.
Not really. I know pros who exclusively use Olympus gear in their studio. Olympus Pro lenses are of the very highest quality (and priced accordingly) and the image quality is right up there.
In certain situations, like when light is low, larger sensors have the edge. But, compare a modern APS-C or MFT sensor to one of ten years ago and the performance gulf in low light is huge.
For the record, I'm a Nikon user that's very tempted by Fuji.
At the pro-level it sounds like they're comparable, but at least in my price range DSLRs seem to give better value for money. I'm currently using a Nikon D7100 and was looking into switching to mirrorless, primarily to save on weight as I mainly do travel photography. I couldn't see anything out there that rivaled it for a similar amount to what I paid for the D7100 4 years ago. I put the money into the awesome Nikkor AF-S 14-24mm f/2.8G ED instead. It did nothing but exacerbate my weight problems though! Hah
I have the Olympus M10 mark III for the same reasons you want a new camera. It's pretty good. The only thing I really miss is a persistent Bluetooth connection to my smartphone instead of the ugly WiFi solution this camera has.
This is a strange (and off-topic) argument. Sony is mainly prevalent because they sell sensors to other companies, while Nikon doesn't do that (AFAIK). Other companies advancing camera sensor design include Aptina and TowerJazz.
... yes, I know... ...
You might also want to know that Sony has had a change of heart recently and doesn't sell its top-notch sensors to 3rd party companies anymore, meaning that a Sony camera will always have a better sensor than a Nikon camera.
Hmm, I don't think 48 MP is very suitable for smartphones. That sounds very large and unwieldy given the often limited storage space (not only on phones, but even more so for cloud backups). Who is asking for larger photos these days anyway?
The improved DR is interesting though! Smartphones are currently far behind APS-C and full-frame sensors there, since it's traditionally been more or less tied to sensor size.
Presumably the phone's image processing algorithms can make productive use of extra sensor data without ultimately needing to persist a larger file to the filesystem.
You can use pixel binning to downsample an image to a more reasonable size with less noise, and you can use the extra resolution to crop the image and effectively have a usable digital zoom.
There was a point in the 90s when every bump in clock speed translated directly into a payoff for software and users - that is, moving to a new CPU generation bought you faster response times or quicker databases.
Somewhere around a decade ago this kinda stopped for anything mere humans could recognise - you could keep running your laptop or tower past the 18-month upgrade cycle and not notice or care too much (pace gaming).
Cameras seem to be in the same position now - I click the button on my camera phone and zoom in and am astounded - just astounded - at the clarity.
I suspect the next driver will be like how big data sucked up all those new cores - something like 3D cameras will be where the new frontier for camera improvements comes from, not a demand for pictures that are better than most humans can already tell apart.
If you read the description of the sensor, it actually sounds a lot more interesting than just "moar pixels". It can use the adjacent same-colored pixels at different sensitivities to either increase the dynamic range (basically single-shot HDR), or join them together to increase low-light sensitivity at a 1/4th the resolution.
One really useful application is better digital zoom. I've definitely found I wanted to crop or zoom an image on my smartphone to frame a distant subject, but the resolution of my camera becomes apparent when I do it.
There is the world of hype, where reviewers and people who want to buy the newest thing live. As soon as the newest thing comes out it makes everything before it look like trash, at least until the next thing comes out.
OTOH the Nikon D3S/D700 (2008, 12MP) are still selling used for $1,000+ so I guess that 12MP is plenty for whoever is buying them.
I'm curious to see how the "Quad Bayer" mosaic works out. Other manufacturers have tried novel filter patterns before, but nothing so far has really been able to compete.
Essentially, almost all digital cameras today use a planar CMOS sensor with alternating RGB-sensitive pixels, arrayed like so:
RGRGRGRG
GBGBGBGB
RGRGRGRG
GBGBGBGB
RGRGRGRG
GBGBGBGB
RGRGRGRG
GBGBGBGB
This pattern is not perfect, but is highly effective. Luma (color-independent) resolution is essentially equivalent to the actual number of pixels, while chroma (color-dependent) resolution is only slightly less - we essentially get one "real" point of color information at each intersection of four color pixels, because at each of those intersections we have one red, one blue, and two green pixels.
In other words, "the luma information we gather for a given color on a pixel of that color is immediately relevant to the effective pixel composed by it and its adjacent neighbors at each of its four corners". In this 8x8 Bayer pattern pixel grid with 64 real pixels, we get 49 effective chroma pixels; one for each intersection of 4 physical pixels.
In comparison, here's the pattern for "Quad Bayer":
RRGGRRGG
RRGGRRGG
GGBBGGBB
GGBBGGBB
I'm concerned that chroma resolution and overall color accuracy will be much lower with this pattern. Essentially, with the original Bayer demosaicing, you only need to sample from the four color pixels adjacent to each corner in order to get a bit of the three channels, and the pattern gives equal weight to both red and blue, while providing extra accuracy in the green channel that human vision is most sensitive to.
In comparison, as far as I can tell, a single "effective pixel" (one with information on all three channels) using the Quad Bayer pattern has to be made up of data from nine individual pixels. Additionally, when an effective pixel is centered on an actual pixel with either red or blue filters, that color is relatively dominant in the pixels considered - it'll be equally weighted with the green channel, and the opposing color will only make up 1/9 of the total signal composing that effective pixel. Effective pixels centered on green pixels will give equal weight to red and blue, with slightly over 5/9 of the weight given to the green channel.
Granted, the sensor should still be able to produce a full 48MP of luma resolution, but chroma detail will be much more "smeared" because of the wide area that has to be considered to get a full color pixel, and the more substantial overlap of that full color pixel with other full color pixels. Color accuracy will likely also be lower, because in effective pixels centered on red and blue pixels, only a single pixel of the opposing color will be used, which means that any noise in that channel will have an outsized impact on the overall color.
What this boils down to is that, when used as a 48MP sensor, this sensor will have entirely different imaging characteristics than a traditional Bayer imager, and that those characteristics will be highly dependent on how the output of this sensor is processed - which will be interesting in a world full of software highly optimized to demosaic Bayer-pattern images.
What's slightly more interesting is the high-sensitivity 12MP mode. Essentially, it's an attempt to reduce the impact of random noise in the image by adding together four pixels of each channel to produce a "superpixel" less impacted by noise overall. These superpixels can then be processed in a standard Bayer pattern as a 12MP effective image.
Thinking about it overall, though, I become more and more confused. In both of these modes, this pattern doesn't give us anything, really, that we can't already do using a Bayer filter.
Let B represent a sensor using a standard Bayer filter pattern, and let Q represent a sensor using this "Quad Bayer" pattern, where each of these patterns have a red pixel in the top-left corner.
Let any given effective pixel be represented by a 3-tuple of the form (R, G, B), where R, G, and B are the number of physical pixels sensitive to each of the red, green, and blue channels which compose that effective pixel.
Let f(p, w, h, s, i) be a function returning a two-dimensional matrix of all the effective pixels produced by a matrix of physical RGB pixels, laid out in pattern p, with actual pixel width and height w and h, where an effective pixel measures s actual pixels horizontally and vertically, and where an offset of i actual pixels in either vertical or horizontal directions produces the "next" pixel in that direction.
Thus, our standard Bayer pattern produces the following:
Note that there are fewer effective pixels for the same total number of pixels - that's okay, though, because the number of effective pixels approaches the number of total pixels as the sensor scales in the X and Y dimensions - this very small hypothetical sensor doesn't benefit from that scale yet.
You can also see that each effective pixel is composed of a larger number of physical pixels - there's a tradeoff there, in that this means that overall, noise should have a smaller impact on the value of a given pixel, but there's a loss of resolution because those pixels are spread over a wider area.
This raises the question, "what if we do a 9-pixel effective pixel on a standard Bayer pattern?" Well, we get this:
Interestingly, while the exact arrangements of the different color channels within the effective pixels are different, the total number of pixels of each channel remains completely identical, meaning that any given effective pixel should have identical noise characteristics to the Quad Bayer pattern. In fact, it's arguable that the Bayer pattern is better, because the color physical pixels are more evenly distributed around the effective pixel.
What if we do the high-sensitivity superpixel sampling? For the Quad Bayer pattern, it looks like this:
Again, sampling in a similar pattern gives the same overall result. So, all else being equal, I'm not sure it makes sense.
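For anyone who wants to reproduce those per-window channel counts, here is a rough Python sketch of the f described above, split into two helpers with names of my own; it only counts how many R, G and B photosites land inside each s x s "effective pixel" window at stride i:

    import numpy as np

    def make_pattern(kind, w, h):
        if kind == "bayer":
            tile = np.array([["R", "G"],
                             ["G", "B"]])
        else:  # "quad" Bayer
            tile = np.array([["R", "R", "G", "G"],
                             ["R", "R", "G", "G"],
                             ["G", "G", "B", "B"],
                             ["G", "G", "B", "B"]])
        reps = (-(-h // tile.shape[0]), -(-w // tile.shape[1]))
        return np.tile(tile, reps)[:h, :w]

    def effective_pixels(pattern, s, i):
        h, w = pattern.shape
        return [[tuple(int((pattern[y:y+s, x:x+s] == c).sum()) for c in "RGB")
                 for x in range(0, w - s + 1, i)]
                for y in range(0, h - s + 1, i)]

    bayer = make_pattern("bayer", 8, 8)
    quad = make_pattern("quad", 8, 8)

    # 2x2 windows at stride 1 on plain Bayer: 7x7 = 49 effective chroma pixels,
    # each composed as (1 R, 2 G, 1 B) - the count mentioned further up
    eff = effective_pixels(bayer, 2, 1)
    print(len(eff) * len(eff[0]), eff[0][0])

    # 9-photosite (3x3) windows on both patterns, to compare channel mixes
    print(effective_pixels(bayer, 3, 2)[0])
    print(effective_pixels(quad, 3, 2)[0])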
Of course, there is the possibility that all else is not equal. Having multiple adjacent pixels of the same color could enable consolidating the signals of those pixels together earlier on in the image processing pipeline into an actual lower-resolution standard-Bayer signal. That could actually have real benefits if that early-stage signal combination results in a greater signal amplitude that drowns out noise.
Basically, this has all been kind of stream-of-consciousness and much longer than I originally planned, but here's the Cliff Notes from what I can tell.
In comparison to a Bayer sensor of the same pixel resolution...
Pros:
- (If implemented to take advantage, possibly) Ability to act as unified large pixels, increasing SNR at lower resolution settings
Cons:
- Less-fine maximum chroma resolution when all pixels are active
Why would a 2018 sensor from the low-light king (Sony) have worse "low light performance" than a 2013 one (with an equal number of megapixels, too)? Brand loyalty?
5MP is just very small these days. 3840x2880 is practically the minimum you want to go (to be able to capture 4k shots) and that is already 11+MP.
For 4k video there is the additional challenge that if your sensor is not exactly the right size then how are you going to sub-sample it? I bet it is not a complete coincidence that the quoted 48MP figure is pretty close to 4x above 11MP (=double the horizontal/vertical resolution); I imagine they are planning to sample every other pixel over 8K area, which probably brings in all sorts of benefits.
Who looks at their photos on a 4K display and pixel-peeps that they're 20% below its full resolution? (4K is about 8MP -- but you'd need to go down to 2MP (a quarter of 8MP) to have half the linear resolution.) 5-6MP to 8MP is much closer than it looks on paper, as the change is not linear (pixels cover an area, not a line).
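The area-vs-line point in numbers (a quick check, not tied to any particular display):

    from math import sqrt

    def linear_resolution_ratio(mp_a, mp_b):
        # Perceived detail scales with linear resolution, i.e. the square root of pixel count
        return sqrt(mp_a / mp_b)

    print(linear_resolution_ratio(6, 8))   # ~0.87 - a 6MP image has ~87% of 8MP's linear detail
    print(linear_resolution_ratio(2, 8))   # 0.5 - you'd have to drop to 2MP to halve it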
That is a bit of a borderline claim. My reading would be that the image would have about 180 PPI (or less), which is maybe passable, but certainly not beyond the human ability to notice an improvement to something like 300 PPI. Assuming, of course, a sufficiently good printer etc.
That's what you get with this in low light conditions: a 12MP image. If enough light is available, you get a 48MP image. Best of both worlds, so to speak.
What's really interesting here is the 4x greater dynamic range - that could actually make a visible difference to your pictures! Increasing the pixel count is largely irrelevant for smartphones, and has been for a while.