
All single-sensor cameras are like this.

The R-, G- and B-sensitive photoreceptors in single-chip sensors are arranged in a Bayer filter pattern, which looks like a mosaic. The filter pattern is 50% green, 25% red and 25% blue.

The "pixels" of the final image need to be calculated from this pattern (called de-mosaicing or de-Bayering). There is no 1-to-1 correlation between RGB pixels in the sensor and RGB pixels in the final image, and the final image size may depend on the algorithms used to remove artefacts caused by the filter.

https://en.wikipedia.org/wiki/Bayer_filter
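As a toy illustration of the mosaic and the de-mosaicing step (a minimal sketch assuming an RGGB layout; real camera pipelines use far more sophisticated interpolation than this):

```python
import numpy as np

def bayer_mosaic(rgb):
    """Sample an H x W x 3 image through an RGGB Bayer pattern:
    one color value per photosite, with 50% of sites green."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w))
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R sites (25%)
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G sites
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G sites (50% total)
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B sites (25%)
    return mosaic

def demosaic_nearest(mosaic):
    """Crudest possible de-mosaic: fill each 2x2 cell's missing
    colors from the single R, G and B samples inside it."""
    out = np.zeros(mosaic.shape + (3,))
    grow = lambda a: np.repeat(np.repeat(a, 2, axis=0), 2, axis=1)
    out[..., 0] = grow(mosaic[0::2, 0::2])  # spread each R sample
    out[..., 1] = grow(mosaic[0::2, 1::2])  # spread one of the G samples
    out[..., 2] = grow(mosaic[1::2, 1::2])  # spread each B sample
    return out

img = np.random.rand(4, 6, 3)      # even dimensions assumed
raw = bayer_mosaic(img)            # shape (4, 6): one value per photosite
rgb = demosaic_nearest(raw)        # shape (4, 6, 3): interpolated image
```

Real algorithms (bilinear, AHD, etc.) interpolate across neighboring cells instead of block-filling, which is where the artefact-removal trade-offs come in.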



There are exceptions, like the Foveon sensor, where the red, green and blue sensors are stacked on top of one another, or monochrome sensors with interchangeable filters used in scientific applications, but those are not found in common consumer-level cameras.

https://en.wikipedia.org/wiki/Foveon_X3_sensor


Sigma does in fact sell Foveon-sensor-based cameras. The latest are the "SD Quattro" (APS-C, 1.5x crop from 35mm full frame) and the "SD Quattro H" (slightly larger, about 1.3x crop).

I used an earlier Foveon camera, the "dp2 Quattro" (APS-C, 45mm focal length equivalent). They have different characteristics from Bayer cameras, and they get high marks from landscape photographers, who shoot at the lowest ISO setting and don't need fast operation.

They have less dynamic range; however, given the amount of investment in Bayer tech versus Sigma's spending on Foveon, that might just be due to more mature Bayer engineering and not something inherent in the Foveon design.

The DP Review forum for Sigma cameras: https://www.dpreview.com/forums/1027


The Quattro is unfortunately no longer "true Foveon": it does something akin to 4:1:1 chroma subsampling, making it more Bayer-like and, IMO, obliterating the main advantage so apparent in the Merrill sensors. When I take a picture of a model's face on a top-end Nikon, I see "mushy" areas on the skin whenever I zoom in, and moire on clothes, whereas with the Merrill I get unbelievable sharpness (down to individual hairs). I can't go back to Bayer; it can't be unseen, and working with Bayer-like cameras just leaves a deep sense of artistic dissatisfaction. Also, when shooting clouds on the Merrill, the dynamic range is way better than on any camera I've seen; some call it the "cloud camera" for this reason.


This is a hotly debated issue on the Sigma forum I linked to.

The "Merrill" sensors (named in honor of Dick Merrill, an engineer and photographer who did a lot of work on that generation of sensor) are 1:1:1, as you mention: each photosite picks up all 3 colors.

The "Quattro" sensors have the top, blue-sensitive layer at full resolution, with this layer also capturing the luminance of the image (which makes for stunning black and white, even at ISO 3200). The two lower layers are at a lower resolution than the blue layer.

I did get very high quality 13x19 inch prints from the Quattro, but I have never owned a camera that used the earlier Merrill sensor, so I can't give a direct comparison from my own experience.


Does the output from a Bayer pattern bother you like that when you shrink it to 50%? Because "100%" means two very different things between those kinds of sensor.


Yes, I can't get the same quality from downscaling Bayer / pixel binning :( Maybe the 100 MP Hasselblad gets it done, from what I've seen. I guess the averaging interpolation misses some signal that my eyes/brain pick up, and once you finally see it on Foveon and can't see it on Bayer, you can't unsee it any longer. Of course, Foveon has its own problems, some perhaps related to immature manufacturing tech, like banding or purple-green patches in shadows, but when it's paired with top-end optics like the Sigma Art lenses, I simply can't avoid feeling unhappy with Bayer sensors. I have perfect color vision as well, so I can spot issues where most people can't.


The existence of de-mosaicing doesn't stop the question from being answered; it just means different use cases get different amounts of quality.

Since they listed the image element size as 0.8 micrometers, we can do simple math to see that they're quoting the biggest possible number. There are 48M small color cells, 12M big (2x2-binned) color cells, and 3M full RGGB cells. So your luma data is amazing, and your chroma data is garbage.
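Spelling that simple math out (the counts are from the comment above; the 4:1 Quad Bayer grouping is an assumption about this sensor's layout):

```python
# Back-of-envelope counts for a 48 MP Quad Bayer sensor: each color
# occupies a 2x2 block of small cells, and one R+G+G+B group of such
# blocks spans a 4x4 area of small cells.
small_cells = 48_000_000           # individual 0.8 um photosites
binned_cells = small_cells // 4    # 2x2 same-color blocks -> "big" cells
rggb_cells = binned_cells // 4     # full RGGB groups (4x4 of small cells)
print(binned_cells, rggb_cells)    # 12000000 3000000
```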


The Foveon sensors [1] have stacked photodiodes at each pixel instead of a Bayer (or similar [2]) filter array, so they don't require demosaicing. Some more recent cameras shift the sensor (and thus the Bayer filter) by one pixel between additional exposures so that each color is captured and recorded at every photosite for a single image [3], with some obvious limitations for moving subjects.

[1]: https://en.wikipedia.org/wiki/Foveon_X3_sensor

[2]: e.g. Fuji's X-trans, https://en.wikipedia.org/wiki/Fujifilm_X-Trans_sensor

[3]: https://www.bhphotovideo.com/explora/photography/tips-and-so...


Trying to capture some nice sunset photos while on a bike trip yesterday got me thinking: with these high-MP sensors, could one use an augmented Bayer filter to get higher dynamic range?

Instead of the same RGB pixel filters all over, mix in pixel filters that are darker (pass less light), then use that information during de-Bayering to construct an HDR image directly.

Maybe a very silly idea, but would have been fun to play around with (maybe I'll write a simulator).
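For what it's worth, the idea is easy to mock up in one dimension. Here's a minimal sketch; the 25% filter strength, the alternating layout, and the merge rule are all made-up assumptions:

```python
import numpy as np

# Toy 1-D simulator for the mixed-density-filter idea: every other
# photosite sits behind a darker filter passing 25% of the light, and
# the merge step recovers highlights that clip the normal sites.

rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 4.0, 16)   # true radiance, up to 4x full well

FULL_WELL = 1.0                     # sensor clips at this value
ND = 0.25                           # dark sites see 25% of the light

normal = np.clip(scene[0::2], 0, FULL_WELL)     # even sites: normal
dark = np.clip(scene[1::2] * ND, 0, FULL_WELL)  # odd sites: filtered

# Merge: use the normal reading unless it clipped; otherwise fall back
# to the neighboring dark site scaled back up. Note this pairs each
# even site with its odd neighbor, so clipped regions lose resolution.
recovered = np.where(normal < FULL_WELL, normal, dark / ND)

print(scene[0::2].max(), normal.max(), recovered.max())
```

Even this crude merge shows the trade-off the sibling comments mention: you extend the representable range past the full well, but at the cost of spatial resolution wherever the normal sites saturate.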


They do this on certain cameras; look up "Dual ISO". Effectively they make every other line more sensitive, increasing dynamic range when they merge the lines together. It's not a perfect solution: moire and other artifacts increase, and you lose resolution. The trade-offs are often worth it, especially for video.


Thanks, that search term did indeed show some interesting hits.


Fuji did this with “Super CCD” sensors in the early 2000s.

They used micro lenses, which had some overall sensitivity and angled aliasing challenges, but the results were impressive for the time.

https://en.m.wikipedia.org/wiki/Super_CCD


See this link; that's likely what this sensor is for: https://www.sony-semicon.co.jp/products_en/new_pro/may_2017/...


You’d be decreasing overall image resolution.

HDR is as simple as taking multiple exposures at different EV values. It was often done with sheet-film cameras, the two negatives being used to create the final print.


Taking multiple exposures is not always simple to perform with good results. It requires a fairly static scene and camera.

And I doubt most people really need effective 48 MP mobile cameras. They could very well do with more dynamic range though.


This sensor is different. R, G, and B each have their own 2x2 blocks which are arranged into the Bayer pattern over a 4x4 block. Conversion to the full 48 MP is a step beyond Bayer demosaicing.
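A quick sketch of that layout difference, assuming the standard RGGB ordering:

```python
import numpy as np

# Standard Bayer repeats an RGGB cell every 2x2 pixels; Quad Bayer (as
# described above) gives each color its own 2x2 block, so the RGGB
# arrangement only repeats every 4x4 pixels.
base = np.array([['R', 'G'],
                 ['G', 'B']])

bayer_4x4 = np.tile(base, (2, 2))                               # 4 RGGB cells
quad_bayer_4x4 = np.repeat(np.repeat(base, 2, axis=0), 2, axis=1)  # 1 group

print(bayer_4x4)
print(quad_bayer_4x4)
```

That's why producing a full 48 MP output from a Quad Bayer sensor needs an extra "remosaic" step beyond ordinary Bayer demosaicing: the raw data first has to be rearranged as if it came from a conventional Bayer grid.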


What about using a color wheel and high readout speeds? Has anyone tried how this works for video, with a color wheel equivalent to a DLP projector's?

This obviously assumes the sensor is fabricated without a Bayer filter.



