
RGB not parfocal, what hope for luminance



A simple question, pretty much as per the title. With a refractor, if R, G and B are not parfocal, how can the luminance be in focus? Non-parfocal RGB setups seem common, so I'm puzzled how luminance can actually work properly in these cases.

Grateful for any thoughts!



11 hours ago, michael8554 said:

Luminance is very roughly  0.2R +0.7G + 0.1B.

So if G is in focus, L may not look too bad.

Michael

Thanks for that... though TBH I'm not quite sure what you mean. Looking at the transmission curves, the luminance bandwidth should contain all of the R, G and B wavelengths, no?


Michael I'm grateful for your input, and I'm sure you're right in what you say - but I'm struggling to follow!

The transmission curves for example filters are below. If green light is 500-565nm, give or take, why is it that 70% of the LUM is green?

[Image: Astronomik L filter transmission curve]

[Image: Astronomik Deep-Sky RGB transmission curves]

 


Having to go back 50 years to my Colour TV training !

If you passed midday sunlight through the Deep-Sky RGB filter you'd get equal amounts of R, G, and B passing through the filter.

To derive L, the software would mix RGB in the ratio 0.2R, 0.7G, 0.1B.

That matches the human eye response: mostly G, with some R and B.
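As a toy illustration of that mix (my sketch, using the exact sRGB luma coefficients behind the rounded 0.2/0.7/0.1 figures above):

```python
# Toy sketch of deriving L as a weighted mix of R, G and B, using the
# standard sRGB / Rec.709 luma coefficients (~0.21R + 0.72G + 0.07B).
def luminance(r, g, b):
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# Equal amounts of R, G and B (e.g. midday sunlight through the filters)
# mix back to full brightness, since the weights sum to 1.
print(luminance(1.0, 1.0, 1.0))
```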

Michael


OK Michael, now I see where you're at! 

I need to give that a bit more thought, but my instinct is that although the human eye is more sensitive to yellow/green, for astrophotography most signal is in the blue and red wavelengths. If the R and B signals focus differently, this can be fixed by refocusing when doing the colour (RGB) capture, but during the LUM capture they are captured simultaneously, so will inevitably be mismatched for focus. This will cause blurring.

If there was a big green signal then, as you say, that could outweigh the R and B - but I don't think that's the case. In any event, I think most folk process in such a way that the detail is all in the LUM, and if that's defocussed the image will suffer even if the underlying G signal was sharp.

I'm expressing this as a statement, but it is all followed by a big question mark - sort of thinking out loud!  


If the filters aren't parfocal, then I'd expect the FWHM you get with the luminance filter to be worse than with the individual RGB filters; how much worse will depend on how well or poorly colour-corrected the scope is. It will be in focus though, or as in focus as it can be. Also, if you are shooting LRGB then generally it is best to shoot L when the target is highest and the seeing is best, so the effects of the atmosphere are at their least. I'd also have thought that the photons that come through to constitute your luminance will depend on the colour of the target. You can see this by looking at an unstretched RGB image: e.g. in a random sample within the core of M42, the RGB ratio of a red bit is 80:25:35 and of a blue bit 30:32:38...
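As an aside, per-filter FWHM is easy to compare with a small script. A rough sketch (hypothetical synthetic star, and a simple second-moment estimate rather than a proper PSF fit):

```python
# Rough sketch: estimate FWHM (in pixels) from a background-subtracted
# 1-D star profile, so per-filter sharpness can be compared numerically.
# For a Gaussian profile, FWHM = 2*sqrt(2*ln 2)*sigma ~ 2.355*sigma.
import numpy as np

def fwhm_from_profile(profile):
    x = np.arange(len(profile))
    w = np.clip(profile, 0.0, None)              # ignore negative noise
    mean = (x * w).sum() / w.sum()               # centroid
    var = ((x - mean) ** 2 * w).sum() / w.sum()  # second moment ~ sigma^2
    return 2.0 * np.sqrt(2.0 * np.log(2.0) * var)

# Synthetic star: Gaussian with sigma = 2 px, so FWHM should be ~4.71 px
x = np.arange(41)
star = np.exp(-((x - 20.0) ** 2) / (2.0 * 2.0 ** 2))
print(round(fwhm_from_profile(star), 2))
```

Running this on the same bright, unsaturated star through L, R, G and B subs would quantify how much the luminance suffers.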

HTH

Dave

7 minutes ago, Laurin Dave said:

If the filters aren't parfocal, then I'd expect the FWHM you get with the luminance filter to be worse than with the individual RGB filters; how much worse will depend on how well or poorly colour-corrected the scope is. It will be in focus though, or as in focus as it can be. Also, if you are shooting LRGB then generally it is best to shoot L when the target is highest and the seeing is best, so the effects of the atmosphere are at their least. I'd also have thought that the photons that come through to constitute your luminance will depend on the colour of the target. You can see this by looking at an unstretched RGB image: e.g. in a random sample within the core of M42, the RGB ratio of a red bit is 80:25:35 and of a blue bit 30:32:38...

HTH

Dave

Thanks for that Dave. One thing I should have made clear at the outset is that I expect the RGB filters are parfocal - it's the scope which is the issue. I agree with everything you say - and the proof will be in the results. Using FWHM is likely the best way of assessing the reality of the situation with the LUM filter.

Part of the reason for asking all this is that I've just changed my LUM filter for one with a slightly reduced bandpass (Astronomik L3) and am wondering what to expect. The previous LUM filter worked great with the Newtonian, but was unusable with the refractor - huge FWHM. So far I've only had one evening with the new one, which had a 75% moon, and my target was M45, which is a bit of an odd one as most of the signal is in the blue, I think.

Time will tell, but my thinking is that with a refractor that's not 100% colour-corrected, refocusing between filters fixes the RGB foci, but the LUM detail will likely be compromised.

 

On 26/01/2021 at 01:50, michael8554 said:

Luminance is very roughly  0.2R +0.7G + 0.1B.

So if G is in focus, L may not look too bad.

Michael

This is only true for human wavelength response - but not for imaging sensors.

10 minutes ago, Tommohawk said:

Thanks for that Dave. One thing I should have made clear at the outset is that I expect the RGB filters are parfocal - it's the scope which is the issue. I agree with everything you say - and the proof will be in the results. Using FWHM is likely the best way of assessing the reality of the situation with the LUM filter.

Part of the reason for asking all this is that I've just changed my LUM filter for one with a slightly reduced bandpass (Astronomik L3) and am wondering what to expect. The previous LUM filter worked great with the Newtonian, but was unusable with the refractor - huge FWHM. So far I've only had one evening with the new one, which had a 75% moon, and my target was M45, which is a bit of an odd one as most of the signal is in the blue, I think.

Time will tell, but my thinking is that with a refractor that's not 100% colour-corrected, refocusing between filters fixes the RGB foci, but the LUM detail will likely be compromised.

 

Dave above gave a good explanation. Your filters are parfocal, but the telescope you are using has a focus shift that depends on the wavelength of light. Lum will be in focus but will have a higher FWHM and could show a purple halo around bright stars (the purple part of the spectrum is often defocused the most).

The further from optimum a wavelength is, the more it will be defocused. This is why the L3 is working - it cuts the biggest offenders, the wavelengths at the far ends of the spectrum.


Sharpstar 61 and ASI1600 Cool. Re M45, I got a good result in RGB and was trying to see if I could improve it with L - but it wasn't a fair test due to the moon and slight high cloud, so it will have to wait for another chance.

9 minutes ago, vlaiv said:

This is only true for human wavelength response - but not for imaging sensors.

Dave above gave a good explanation. Your filters are parfocal, but the telescope you are using has a focus shift that depends on the wavelength of light. Lum will be in focus but will have a higher FWHM and could show a purple halo around bright stars (the purple part of the spectrum is often defocused the most).

The further from optimum a wavelength is, the more it will be defocused. This is why the L3 is working - it cuts the biggest offenders, the wavelengths at the far ends of the spectrum.

Thanks Vlaiv - pretty much confirms my thoughts. 

1 minute ago, Tommohawk said:

Sharpstar 61 and ASI1600 Cool. Re M45, I got a good result in RGB and was trying to see if I could improve it with L - but it wasn't a fair test due to the moon and slight high cloud, so it will have to wait for another chance.

That might be a side effect of microlensing with the camera.. in my experience it's worse with Lum, but it is dependent on the exact scope/reducer/filter configuration.

Just now, Laurin Dave said:

That might be a side effect of microlensing with the camera.. in my experience it's worse with Lum, but it is dependent on the exact scope/reducer/filter configuration.

Sorry, I meant the mono version - I've heard of microlensing but assumed it was down to the RGB matrix, so mono should be OK?


Hmmm - just done some reading, and it looks like microlensing occurs with mono too. I've done quite a bit of BB and NB, and although I've had some issues I've never noticed this effect.

Also, I seem to have lost my edit post option - not sure why!

1 hour ago, Tommohawk said:

The previous LUM filter worked great with the Newtonian, but was unusable with the refractor - huge FWHM

There's your solution... Newtonians for the win 😂

 

33 minutes ago, Tommohawk said:

Also, I seem to have lost my edit post option - not sure why!

It's under the three dots to the top right of the post?


Hi Craig - I'm onside with Newts for sure, but no real options for a widefield Newt sadly. There are a few 114mm ones, but only with a 1.25" focuser. I've tried designing from scratch, but there are design limits, especially if you need to get a coma corrector in the train.

Thanks for the 3 dots tip - I tried that, but on someone else's post, so of course it didn't offer edit!

Whilst you're there - I've also lost the signatures somehow - any ideas on that?

 


Yeah, it's a shame there's not really a small fast Newt designed for imaging. The Heritage 100P could be good at f/4 but would need a lot of work to make it viable for imaging, I imagine. Not sure about the signatures - are you on the desktop version of the site?

 

 

 

57 minutes ago, Tommohawk said:

Hmmm - just done some reading, and it looks like microlensing occurs with mono too. I've done quite a bit of BB and NB, and although I've had some issues I've never noticed this effect.

Also, I seem to have lost my edit post option - not sure why!

As far as I can tell, it is highly dependent on the type of filters and other optical elements, the speed of the system, and the distances involved.

Some people get microlensing effects, others don't. I always advise people to try changing the distances in the optical train / rearranging things when they notice the effect.

20 minutes ago, Tommohawk said:

Hi Craig - I'm onside with Newts for sure, but no real options for a widefield Newt sadly. There are a few 114mm ones, but only with a 1.25" focuser. I've tried designing from scratch, but there are design limits, especially if you need to get a coma corrector in the train.

Of course you can, but it requires a bit of "creative thinking".

Take the 130PDS, or 150PDS, or 150 F/4 - all three are excellent wide-field instruments with an effective focal length of about 300mm or 200mm or 150mm - whichever you want :D

However, acquisition of the data and processing is a bit more involved.

You need to do mosaics and bin your data.

Mosaics will get you wide field, while binning will keep imaging time the same.

If you need to do, say, a 2x2 mosaic, you'll spend only 1/4 of the time on each panel, so your SNR will suffer - but you can bin each panel 2x2, which will restore the SNR, and you'll still get the same pixel count as if you had used a shorter focal length instrument.

Say you are using a camera that has ~4000x3000px.

You'll bin each panel 2x2, so each panel ends up 2000x1500px, but when you stitch the panels together you'll again end up with 2x2000 = 4000 and 2x1500 = 3000, i.e. a 4000x3000 image.

The only "loss" is the overlap needed to make stitching easy - but that can be as low as 5-10% of the image size.

Want a shorter focal length? Just do a 3x3 mosaic and bin each panel 3x3.
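The pixel bookkeeping is easy to check in code (a toy sketch; `mosaic_size` is just an illustrative helper name, and overlap is ignored):

```python
# Toy sketch of the n x n mosaic + n x n bin bookkeeping: each panel is
# binned down, but stitching n x n panels restores the full pixel count.
def mosaic_size(sensor_w, sensor_h, n):
    panel_w, panel_h = sensor_w // n, sensor_h // n  # panel after n x n bin
    return panel_w * n, panel_h * n                  # stitched mosaic (no overlap)

# 4000x3000 camera, 2x2 mosaic binned 2x2 -> back to 4000x3000
print(mosaic_size(4000, 3000, 2))
```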


That's a good point Vlaiv - the 130PDS gets great results for a bargain basement price, and mosaics are becoming easier to do, e.g. with NINA. The only box it doesn't tick, unfortunately, is portability. I typically fly to a kinder-sky location a couple of times a year, and although the 130PDS will go in a case (and frankly is cheap enough that it wouldn't be the end of the world if it got trashed) it does eat up a lot of luggage space.

But your point about binning and achieving the same overall result is a good one. 

Having said all that, the Sharpstar 61 Mk1 - which has been referenced in a number of threads - is mechanically excellent, and the field is wonderfully flat with the reducer and ASI1600 sensor. The only issue is that it isn't parfocal across wavelengths - the reduced-bandwidth Astronomik RGB filters fix this nicely for mono work at least (apart from having to refocus between filters), and it remains to be seen how well the L3 will work. I'll be sure to post when I have my next imaging session, although of course this scope has now been superseded, so not sure this will help others much.

8 hours ago, CraigT82 said:

Yeah, it's a shame there's not really a small fast Newt designed for imaging. The Heritage 100P could be good at f/4 but would need a lot of work to make it viable for imaging, I imagine. Not sure about the signatures - are you on the desktop version of the site?

 

The trouble with small Newts is you need a disproportionately large secondary to get a fully illuminated sensor. Below is the spec for an F4 100mm. Even with the camera and EFW fixed directly to the tube (focusing done at the primary, a la SCT) you need a 32mm secondary to get a fully illuminated sensor (of decent size).

But this doesn't allow for a coma corrector - if you allow 50mm for a CC that doesn't block the light path, then you need about a 42mm secondary, which is too big a central obstruction.

It's not so bad with an OSC because the camera sits closer, but that would make for a very niche product. I've been looking at this on and off for some years, and the only design that would work properly is one where an element of the CC is built into the secondary housing. Technically this would work well, but it's such a niche market it would never fly.

Re the SGL signature - yes, I am on desktop... I did something the other day and the signatures disappeared, and now I can't see how to get them back!

[Image: spec for the F4 100mm Newtonian design]

2 minutes ago, michael8554 said:

Which part of my explanation is not for imaging sensors ?

Michael

The composition of brightness / luminance.

There is a discrepancy between how humans perceive the brightness of different wavelengths and how sensitive the sensor is at those wavelengths.

Human brightness perception is closely modeled by the Y component of the XYZ color space.

[Image: CIE XYZ colour matching functions]

The numbers you have mentioned come from here:

[Image: linear sRGB to XYZ transformation matrix]

That is the inverse transform, from linear sRGB data to the XYZ color space. It can be seen that R contributes ~0.21, G ~0.72 and B ~0.07.

or as you put it very roughly:

On 26/01/2021 at 01:50, michael8554 said:

Luminance is very roughly  0.2R +0.7G + 0.1B.

But this is for sRGB data and human brightness perception.

If you look at the curve above, you'll see that our perception of brightness around 550nm (green) is more than x100 higher than it is at 400nm, in the blue/violet part of the spectrum - in other words, a 400nm source needs about x100 more energy than a 550nm source to be perceived as equally bright. Put another way: if we observe two equal-energy sources, one at 550nm and one at 400nm, we will see the latter at only 1% of the brightness of the former.

Now let's look at the QE of a camera, for example the ASI1600:

[Image: ASI1600MM quantum efficiency curve]

Here we see that the situation is quite different - at 400nm the sensor has about half the sensitivity it has at 550nm. The camera will see equal-energy sources at 400nm and 550nm as differing by only about a factor of two (a bit more than that, since 400nm photons have higher energy, so there are slightly fewer of them in an equal-energy source).

This is the reason why a telescope that is completely colour-free visually can still suffer from CA photographically.
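To put the two ratios side by side (the values are the rough figures from the posts above, not measurements; `sensor_qe` in particular is a ballpark assumption for the ASI1600):

```python
# Relative response at 400nm vs 550nm: human photopic vision vs a
# mono CMOS sensor. Values are rough figures quoted in this thread.
eye_response = {400: 0.01, 550: 1.00}  # eye: ~1% of its peak at 400nm
sensor_qe    = {400: 0.30, 550: 0.60}  # sensor: ~half of peak (assumed QE)

eye_ratio = eye_response[400] / eye_response[550]
sensor_ratio = sensor_qe[400] / sensor_qe[550]

# The sensor weights the (most defocused) violet end ~50x more heavily
# than the eye does - hence a visually colour-free scope showing CA in images.
print(sensor_ratio / eye_ratio)
```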


Re the 2 previous posts... this is exactly why I said previously "I need to think about it!"

35 minutes ago, vlaiv said:

This is the reason why a telescope that is completely colour-free visually can still suffer from CA photographically

The thing is, when making a call about how good the image is, "photographic CA" is in the end perceived and judged by the human eye...

.... I'm still thinking about it! 

