RGB not parfocal, what hope for luminance




11 hours ago, michael8554 said:

Hi vlaiv

IMO you've over-complicated a simple explanation for someone struggling to understand how L is derived from RGB.

Michael

I think we are talking about two different things here - one is deriving luminance from RGB when we want to make a colour image appear greyscale - then yes, the rough formula you quoted is right.

However, the parfocality of RGB filters, the colour correction of a refractor telescope, and their relation to the luminance data from that telescope are another matter altogether. The formula above can't be applied to that.

 

11 hours ago, Tommohawk said:

The thing is when making a call about how good the image is, "photographic CA" is in the end perceived and judged by the human eye....

.... I'm still thinking about it! 

When you take an X-ray of a patient you judge whether there is a fracture by eye, I agree - but that does not mean you have X-ray vision and can see the same thing without the X-ray machine.

The camera is sensitive enough to show CA that can't be seen with the naked eye at the eyepiece. Whether we look at the image or use another metric, like the FWHM difference between channels, to determine chromatic blur / defocus is irrelevant to the level of that blur.
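The FWHM comparison mentioned above can be sketched numerically. This is a hypothetical illustration, not anyone's actual pipeline: the "star" profiles are synthetic 1-D Gaussians standing in for real channel data, and the width is estimated from second moments rather than a proper fit.

```python
import numpy as np

# Hypothetical sketch of using FWHM to quantify chromatic defocus:
# estimate the width of a star profile in each colour channel.

def fwhm(profile):
    # moment-based width estimate; FWHM = 2*sqrt(2*ln 2) * sigma
    x = np.arange(profile.size)
    total = profile.sum()
    mean = (x * profile).sum() / total
    sigma = np.sqrt(((x - mean) ** 2 * profile).sum() / total)
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma

x = np.arange(101)
sharp = np.exp(-0.5 * ((x - 50) / 2.0) ** 2)  # in-focus channel, sigma = 2 px
soft = np.exp(-0.5 * ((x - 50) / 4.0) ** 2)   # defocused channel, sigma = 4 px

print(fwhm(sharp))  # ~4.7 px
print(fwhm(soft))   # ~9.4 px
```

Comparing these two numbers channel by channel gives an objective measure of the defocus, regardless of whether the eye could see it at the eyepiece.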

53 minutes ago, vlaiv said:

When you take an X-ray of a patient you judge whether there is a fracture by eye, I agree - but that does not mean you have X-ray vision and can see the same thing without the X-ray machine.

The camera is sensitive enough to show CA that can't be seen with the naked eye at the eyepiece. Whether we look at the image or use another metric, like the FWHM difference between channels, to determine chromatic blur / defocus is irrelevant to the level of that blur.

Not sure I can run with your X-ray vision analogy! But there is another issue here - we can look at how photographic sensors work with OSC or mono images using filters, their sensitivity at different frequencies etc., OR we can consider what the eye sees, e.g. at the telescope eyepiece. We would agree they are two different things.

But the third comparison is what the human eye sees when it looks at a sensor/screen-generated image - then we have to consider both aspects, and with astrophotography this is mostly what we are doing. My point here is that if an image produced by a sensor/telescope/screen is wonderfully sharp in red and blue, but blurry in green, and the eye is most sensitive to green, then the screen-viewed image will appear mostly blurry. I think this is the point Michael was making.

In any event, the key point of my initial question was to establish whether a LUM image generated by a telescope system having differing focal points for R, G and B will be defocused, and I think this must be so. The RGB components derived from filters will be better focused, so adding the LUM data may spoil the image. A partial remedy is to limit the bandwidth of the RGB filters, and similarly limit the bandwidth of the LUM filter. The other partial remedy is to use a superluminance generated from the R, G and B data, rather than straight LUM, but this doesn't have the SNR advantage that the LUM data has.

My interpretation of Michael's point is that if the green is focussed in the final image viewed on a screen, the final image may not suffer too badly. My only problem with this is there isn't much data in the green.

If this is so, we are all correct!

8 minutes ago, Tommohawk said:

My interpretation of Michael's point is that if the green is focussed in the final image viewed on a screen, the final image may not suffer too badly. My only problem with this is there isn't much data in the green.

If this is so, we are all correct!

I think we are all correct in what we are saying.

I also have an example of all of this - a rather exaggerated example, but quite revealing.

[attached image: the red, green and blue channels side by side]

This was taken with optics that are not well corrected at F/2 (a Samyang 85mm F/1.4 lens) with small pixels.

Left to right we have the red, green and blue channels. It is obvious that green is actually rather fine and sharp - the sharpest and most sensitive of the three.

Here is the same region - converted into luminance by the above formula, and processed as luminance.

[attached image: the same region converted to luminance]

I don't think the sharpness in green helped much with overall sharpness. This is because of the way sensors work and because of how we process our data.

 

On 26/01/2021 at 00:50, michael8554 said:

Luminance is very roughly 0.2R + 0.7G + 0.1B.

So if G is in focus, L may not look too bad.

Michael

No, that's just the human eye response. And if the object is blue then the lum will have no red in it at all, will it? So it also depends on the colour of the thing you're imaging as well. Most LRGB filter sets try to balance each channel against the sensitivity curve of modern sensors, so white light will be split close to evenly between the three channels.
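The blue-object point is easy to put in numbers. A minimal sketch, using the rough eye-response coefficients quoted earlier in the thread (not any particular standard) against a plain sum, which is closer to what a broadband L filter records when the R, G and B filters partition the 400-700nm band:

```python
# Rough illustration: eye-weighted luminance vs a plain channel sum.

def perceptual_luminance(r, g, b):
    # human-eye-weighted brightness, rough coefficients
    return 0.2 * r + 0.7 * g + 0.1 * b

def filter_luminance(r, g, b):
    # idealised L filter: passes everything R, G and B pass, combined
    return r + g + b

blue = (0.0, 0.0, 1.0)  # a pure blue object at full signal
print(perceptual_luminance(*blue))  # 0.1 -> dim to the eye
print(filter_luminance(*blue))      # 1.0 -> full signal through L
```

The same pure-blue object is 10% of peak white to the eye but comes through at full strength in a filter-partitioned L, which is exactly the disagreement in this thread.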

15 hours ago, Adam J said:

And if the object is blue then the lum will have no red in it at all, will it?

Exactly right! The luminance of a blue image will be 10% of peak white!

15 hours ago, Adam J said:

No, that's just the human eye response.

And for some reason, the response of a DSLR seems to have been tailored to match the human eye...

2 hours ago, michael8554 said:

Exactly right! The luminance of a blue image will be 10% of peak white!

And for some reason, the response of a DSLR seems to have been tailored to match the human eye...

The colour balance might be, but not the pixel-level sensitivity. Hence why raw OSC astro images usually come out green before colour balance is applied. But you can't get rid of bloat by balancing colour - it's baked into the image.

I promise you that the QE of an OSC is not 10% in the blue. If we saw green stars in an image something would be wrong, and that's what you would get if it worked as you suggest.


I'll remind you all again, it was a simple answer to "how is L derived from RGB".

If you have a better answer, by which I don't mean a more complicated one, feel free - I haven't seen one yet.

Michael

1 hour ago, michael8554 said:

I'll remind you all again, it was a simple answer to "how is L derived from RGB".

If you have a better answer, by which I don't mean a more complicated one, feel free - I haven't seen one yet.

Michael

R+G+B, for an interference LRGB filter set where the R, G and B filters partition the 400-700nm range (Baader is very close to this).

For any other combination you can find coefficients that produce the closest value for a particular camera model and filter set.
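The "find coefficients" suggestion can be sketched as an ordinary least-squares fit. Everything below is made up for illustration - the channel weights and the noise level are invented; in practice `rgb` and `lum` would come from calibrated frames of the same target through the actual filters.

```python
import numpy as np

# Sketch of fitting per-channel coefficients so that a weighted sum of
# R, G and B best reproduces a measured L signal. The data is synthetic:
# "true_coeffs" plays the role of the unknown camera/filter response.

rng = np.random.default_rng(0)
true_coeffs = np.array([0.9, 1.1, 0.8])               # hypothetical weights

rgb = rng.uniform(0.0, 1.0, size=(500, 3))            # per-pixel R, G, B signals
lum = rgb @ true_coeffs + rng.normal(0.0, 0.01, 500)  # measured L + noise

coeffs, *_ = np.linalg.lstsq(rgb, lum, rcond=None)
print(coeffs)  # recovers something close to true_coeffs
```

With the fitted coefficients in hand, the weighted RGB sum is the best linear stand-in for L that the particular camera and filter set allows.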


I don't even think anyone actually asked about how L is derived from RGB... the OP's question was "With a refractor if R G and B are not par focal, how can the luminance be in focus?"

The simplest answer is that it can't, although logically, focusing in green first before switching to L will produce the least amount of defocus in R and B. Hence the great desire for, and effort to produce, refractors that can focus all three colours to the same point.

Somehow this thread got onto human eye responses and colour TVs, which is what confused the matter!


Craig has it right - the question had nothing to do with deriving luminance from RGB, or colour telly, or sRGB, but with focus shift between filters. Not every 'scope is perfectly corrected; even triplet apos can show focus shift, some more than others depending on the choice of glasses etc. Which is why, for example, Astronomik offer three L filters of differing bandpass. The effect will be to blur the luminance image slightly.

With my TS 130 apo I measured slight focus offsets between R, G and B. Probably not enough to be significant, but I entered them in the autofocus offsets table anyway. My ODK has no observable image shift.


Guys there is a risk here that we may generate more heat than light, focused or otherwise.😉

The question I asked is how all of the various wavelengths comprising LUM can be focused simultaneously in a system where R, G and B have different foci, and I think the answer is pretty unequivocally that it's not possible. The LUM image will therefore be blurred to some extent, and this can be minimised by restricting the bandwidth of the LUM filter.

The sensitivity of the human eye is still relevant though, because we use a human eye to evaluate the end result, and I think it must therefore be true that if the R or B is defocused, this will be less apparent to the human observer of the image than if the G were defocused. But as I have said previously, for astrophotography this is not actually very useful, because there isn't much detail in the green part of the spectrum.

In any event, leaving humans out of it for a moment, the practical outcome is that LUM is of particular benefit in well corrected refractors and less so in scopes with poor colour correction. My experience is that LUM can actually be detrimental in systems with significant CA, especially with wide-pass filters, and especially with sensors that have (relatively) high QE at the extremes of the band. In these cases a synthetic superluminance layer derived from the combined (focused) RGB elements will work better, although the SNR benefit of true LUM is then lost.
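A superluminance of the kind described above is just the pixel-wise sum of the registered, individually focused R, G and B masters. A toy sketch, with all numbers invented for illustration, that also shows where the SNR penalty comes from:

```python
import numpy as np

# Toy sketch of a synthetic superluminance: sum the registered,
# individually focused R, G and B masters instead of using a broadband
# L frame. All values below are invented for illustration.

rng = np.random.default_rng(1)
shape = (64, 64)
scene = rng.uniform(0.2, 1.0, shape)           # stand-in for the target

noise = 0.02                                   # per-channel noise level
r = scene / 3 + rng.normal(0.0, noise, shape)  # each filter passes ~1/3
g = scene / 3 + rng.normal(0.0, noise, shape)  # of the broadband light
b = scene / 3 + rng.normal(0.0, noise, shape)

superlum = r + g + b  # every channel was focused, so no chromatic blur

# the three noise terms add in quadrature: sqrt(3) x single-channel noise,
# which is why a true L exposure of the same total time keeps its SNR edge
print(superlum.mean())
```

The trade is exactly the one described above: the superluminance carries no chromatic defocus, but its noise is roughly sqrt(3) times that of a single channel, which a genuine broadband L exposure avoids.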

50 minutes ago, CraigT82 said:

I don't even think anyone actually asked about how L is derived from RGB... the OP's question was "With a refractor if R G and B are not par focal, how can the luminance be in focus?"

The simplest answer is that it can't, although logically, focusing in green first before switching to L will produce the least amount of defocus in R and B. Hence the great desire for, and effort to produce, refractors that can focus all three colours to the same point.

Somehow this thread got onto human eye responses and colour TVs, which is what confused the matter!

You're right Craig.

I originally tried to show that if G was in focus, and R and B were a bit off, L might not be too bad.

Of course he should strive to get R, G and B all in focus.

I'll slink away now !

Michael

