RGB and chromatic aberration



1 hour ago, ollypenrice said:

They are parfocal only with each other. They cannot correct non-parfocal (non-apochromatic) optics. They make no attempt to correct for optical defects, which means they depend on the optics being parfocal.

This is only true of perfectly corrected optics (as in an all reflecting system). Any refracting element will introduce some variance between the focal points of different wavelengths. This may mean your parfocal filters operate parfocally within measurable limits or that they don't. Sara now uses a Baby Q which was once mine. I found it parfocal with Baader filters but Sara doesn't. She may be fussier than I am, of course, but I think the real difference is that she is using smaller pixels so what was effectively perfect for me is not effectively perfect for her.

Olly

Yes, I understand: optical defects cannot be corrected by parfocal RGB filters. In addition, my tube's long focal length (1600 mm) may lower those limits. I have no idea of the pixel-width effect, but I think that, if there is one, I will see the combined effect of the objective and the camera pixels. In either case I will have to refocus each channel after a filter change. Tedious, but it solves the problem... By the way, is it correct to expect star image sizes in the order G > B > R after focusing on R and keeping the same focus for B and G? With a flint/crown ED doublet, I mean.

Thank you!

5 minutes ago, cesco said:

Yes, I understand: optical defects cannot be corrected by parfocal RGB filters. In addition, my tube's long focal length (1600 mm) may lower those limits. I have no idea of the pixel-width effect, but I think that, if there is one, I will see the combined effect of the objective and the camera pixels. In either case I will have to refocus each channel after a filter change. Tedious, but it solves the problem... By the way, is it correct to expect star image sizes in the order G > B > R after focusing on R and keeping the same focus for B and G? With a flint/crown ED doublet, I mean.

Thank you!

Changes in temperature will probably have more effect on focus than anything else. The ideal practice is to shoot red and green at the lower elevations and blue and luminance when the object is high. This is because blue suffers most from atmospheric dispersion and luminance is all about detail so it needs good seeing. If you do that you'd want to check focus between filters anyway, just to be sure.

Olly


Well, this is an interesting topic. The feasibility of RGB imaging with an achromatic doublet depends on several factors.

Take a look at this image for example:

achromat_01.png

 

This sort of curve shows focus shift as a function of the wavelength of light. There is also something called the critical focus zone. To know whether RGB imaging will work with a particular setup, you need a similar graph for that particular scope: calculate the critical focus zone and check whether the focal shift falls within it.

For example, if we roughly take red to be 600-700 nm, green 500-600 nm and blue 400-500 nm, the graph shows that R and G each span roughly 0.05 units of focus travel, while blue spans about 0.25 - five times more. This is why there is usually bloat in the blue channel: the light is simply not in focus across wavelengths spanning that 0.25 range. If the scope had an F/ratio whose critical focus zone is close to 0.25 - like an F/10 scope (about 232 µm) - there would not be much bloat, and one could easily do RGB with such a setup.

So there is a way to calculate, for your particular scope, whether RGB imaging (or any other narrowband imaging) is going to work: inspect the focal range across each filter's wavelength range and compare it to the critical focus zone at that wavelength.

This is of course not the whole story. A telescope lens can have optical aberrations other than spherochromatism that depend on wavelength. For example, my F/5 ST102 has a bit of astigmatism in the red part of the spectrum.

 


2 hours ago, ollypenrice said:

Changes in temperature will probably have more effect on focus than anything else. The ideal practice is to shoot red and green at the lower elevations and blue and luminance when the object is high. This is because blue suffers most from atmospheric dispersion and luminance is all about detail so it needs good seeing. If you do that you'd want to check focus between filters anyway, just to be sure.

Olly

I agree with that. I already used to do this with broadband RGB filters: capturing B when the object was highest and R when it was lowest, precisely because bad seeing affects the much dimmer B channel more and makes focusing harder. I will resume this trick with the narrowband RGB filters, and with L (which I am increasingly convinced I should capture) to gain better detail. You know, I keep discovering things that compel me to update my capture procedure... Thank you!


49 minutes ago, vlaiv said:

Well, this is an interesting topic. The feasibility of RGB imaging with an achromatic doublet depends on several factors.

[...]

 

Thanks! I like your graph and your considerations. The bandpasses of my narrowband RGB filters are published by Baader, but I would like to know how to calculate the critical focus zone of my 178 mm flint/crown ED doublet (1600 mm focal length), and the graph above, for it. Of course these will be theoretical calculations, since I know my objective is not that perfect. I remember that, by comparing in/out-focus images with Cor Berrevoets' Aberrator, I qualitatively determined that my objective is affected by 2nd-order and 4th-order spherical aberration to some extent. Therefore, in practice, I don't expect anything more than a good approximation...


The critical focus zone is quite easy to calculate; here is an example:

http://www.wilmslowastro.com/software/formulae.htm#CFZ
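As a minimal sketch of that calculation, here is one common approximation, CFZ ≈ 4.88 · λ · N² (the linked calculator may use a slightly different variant); the per-band wavelengths are my own illustrative choices, not values from the thread:

```python
def critical_focus_zone_mm(wavelength_nm: float, focal_ratio: float) -> float:
    """Critical focus zone via the common approximation CFZ = 4.88 * lambda * N^2."""
    wavelength_mm = wavelength_nm * 1e-6   # nm -> mm
    return 4.88 * wavelength_mm * focal_ratio ** 2

# Example: an f/9 scope, one representative wavelength per filter band
for band, wl_nm in (("R", 650), ("G", 510), ("B", 475)):
    print(f"{band} ({wl_nm} nm): CFZ = {critical_focus_zone_mm(wl_nm, 9.0):.3f} mm")
```

Note how the zone scales with the square of the focal ratio, which is why slow achromats are so much more forgiving of focus error than fast ones.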

As for the diagram, producing one properly is beyond me - it is the domain of opticians with special equipment. I do have an idea of how one might do a basic measurement, but it still requires additional equipment. If you happen to have, or can borrow, a Star Analyser, it can be used for some rough measurements. You only need to take an unfiltered shot of a bright star with the camera, and it could give you a rough idea of what the curve looks like - one might be able to reconstruct the curve from this data, I suppose. Take a look at the drawing for an explanation.

Because the star analyser refracts light at an angle, if the camera is perpendicular to the optical axis the spectrum will not be a straight thin line: it will be in focus at one point and drift slightly out of focus to the left or right, and this needs to be taken into account. Since defocus can be determined from the width of the spectrum line, and there is additional defocus from the achromatic lens at different wavelengths, the spectrum will look a bit like the diagram - think of it as a series of unfocused Airy disks (or rather seeing disks, if you shoot a real star), i.e. a sum of circles of different diameters. So if you measure the thickness of the spectrum line, and separately measure defocus-circle size versus defocus distance (which can be done by shooting a star through a narrowband filter at different defocus values and measuring the circle size), you can calculate some sort of curve.

 

If you are planning this sort of imaging, might I suggest a processing approach that I think would give the best results? Mind you, this is an advanced technique and no software provides it out of the box, but one can use ImageJ and some plugins to achieve all of it.

First shoot each channel as you normally would - for each filter, achieve the best focus you can. Calibrate and stack each channel.

If you inspect the curve above, green is the most likely to be at good focus across its whole wavelength range - so we will use it as the "template".

Pick a bunch of stars with decent SNR - bright enough, but not saturated. For each star, do the following: select the area around the star, make a copy of it, and subtract the background so it sits at roughly 0. Do that on both the green and the red channel. Deconvolve the red-channel star with the green one and save the result. Repeat for the other stars. Load all the saved results and normalize them (make the sum of all pixels equal to 1). Average-stack these kernels (the red stars deconvolved with their green counterparts). This is the master red kernel. Deconvolve the red image with this kernel. Do the same for the blue channel.

After you have tightened up the red and blue channels, load all three channels and add them together - this forms the luminance.

So you shoot RGB and end up with corrected R and B, normal G, and a luminance - after that, process as you would a regular LRGB.
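The kernel-building steps above can be sketched in Python with NumPy. This is a rough illustration of the idea, not the ImageJ workflow itself: regularized (Wiener-style) division in the Fourier domain is one of several ways to deconvolve one star by another, and the cutouts are assumed to be background-subtracted, equal-sized, and star-centred:

```python
import numpy as np

def estimate_kernel(red_star: np.ndarray, green_star: np.ndarray,
                    eps: float = 1e-3) -> np.ndarray:
    """Estimate a blur kernel k with red_star ~= green_star convolved with k,
    via regularized (Wiener-style) division in the Fourier domain."""
    G = np.fft.fft2(green_star)
    R = np.fft.fft2(red_star)
    K = R * np.conj(G) / (np.abs(G) ** 2 + eps)  # regularized spectral division
    k = np.fft.fftshift(np.real(np.fft.ifft2(K)))
    k = np.clip(k, 0.0, None)                    # a blur kernel is non-negative
    return k / k.sum()                           # normalize: pixel sum = 1

def master_kernel(star_pairs) -> np.ndarray:
    """Average the normalized per-star kernels into one master kernel."""
    return np.mean([estimate_kernel(r, g) for r, g in star_pairs], axis=0)
```

The master kernel would then be used to deconvolve the whole red channel (for instance with a Richardson-Lucy step in your processing software), and the same procedure is repeated for blue against green.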

Screenshot_4.jpg


6 hours ago, ollypenrice said:

This is only true of perfectly corrected optics (as in an all reflecting system). Any refracting element will introduce some variance between the focal points of different wavelengths. This may mean your parfocal filters operate parfocally within measurable limits or that they don't.

Olly

I stand corrected :)


On 21/1/2017 at 17:17, vlaiv said:

The critical focus zone is quite easy to calculate; here is an example:

http://www.wilmslowastro.com/software/formulae.htm#CFZ

[...]

The calculator yields critical focus zones (at f/9) of 0.257 mm, 0.202 mm and 0.188 mm for R, G and B respectively. These values also hold for the CCD focus zone, but there is no indication whether all this is valid for an achromatic doublet.

From the plot, using the filter bandwidth values given by Baader, I get focus shifts of 0.05, 0.02 and 0.25 for R, G and B respectively. The focus shifts of R and G fit well within their respective critical focus zones above, while the focus shift of B does not (0.25 vs. 0.188). Does this mean I will never get proper focus for B, while R and G should be fine?

As for the processing you describe, thank you for showing that a solution exists, but I am afraid it is too complicated for me... However, this interesting reasoning ends with a practical conclusion: I must forget about the parfocality of my RGB filters and simply treat them as non-parfocal, keeping in mind that, if corrections are needed, they are available through the processing you suggest.

Thanks,

cesco


The diagram above just describes what that curve looks like, and it is there to point out that, for a certain critical focus zone (which depends on F/ratio) and particular filters, one can use an achromatic doublet with no problem at all - meaning all wavelengths passed by a particular filter will be in focus.

One thing you may have noticed: it is often said that an achromatic refractor with a large F/ratio (like F/10 - F/15) shows little chromatic aberration. That is directly related to the diagram above - a larger F/ratio means the critical focus zone is larger and covers more wavelengths.

What is the critical focus zone? It is the range of focus positions where the blur from defocus is smaller than the Airy disk - meaning you can't tell there is any defocus; the star looks properly focused. The term "snap to focus" is often used for fast APO refractors: fast focal ratios have a small critical zone, so a slight turn of the focuser knob takes you out of the focus zone.

The diagram above also explains why there is a blue halo around stars when using an achromatic refractor: the blue part of the spectrum has the largest spread in focus positions, hence the largest portion of that part of the spectrum will be out of focus.

One more thing: I did not mean that you should use the filters parfocally - do focus for each filter. Even when you do that, there is a chance that blue will not come to proper focus, because the range of focal positions across the blue filter's band may be greater than the critical focus zone.

Mind you, there are other possible ways to use an achromatic refractor and get decent images - I did so with a SW F/5 ST102 and an OSC camera (only one focus position for R, G and B; no chance of per-channel focusing there).

Let me sum up all of it:

1. Use R, G and B filters at the same focus position, or an OSC camera, with the achromat - worst results.

2. Use R, G and B filters, each at its own focus position (not possible with OSC), with the achromat - better results than option 1.

3. Use R, G and B filters, each at its own focus position, and deconvolve R (possibly) and B (probably) based on G (the most likely to have good focus) - probably better results than 1 and 2 if you have good S/N images (this technique works with option 1 as well).

4. Use an aperture mask to stop the lens down to a slower F/ratio, in combination with any of the above - this increases the critical focus zone and reduces the blue halo, at the expense of lower resolution and longer exposures. You can combine this approach with any of options 1-3.

5. Use a minus-violet / yellow / any filter that blocks below a certain wavelength (for example below 400 nm) when shooting B or with OSC - this also improves results in options 1-4, at the expense of some S/N and a bit of colour balance.
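As a quick numerical illustration of option 4, here is a sketch using the common approximation CFZ ≈ 4.88 · λ · N²; the figures (a generic 4" f/5 achromat stopped down to half aperture) are hypothetical, not a setup from this thread:

```python
def cfz_um(wavelength_nm: float, focal_ratio: float) -> float:
    """Critical focus zone in microns, using CFZ ~= 4.88 * lambda * N^2."""
    return 4.88 * wavelength_nm * 1e-3 * focal_ratio ** 2

focal_length_mm = 500.0            # hypothetical 4" f/5 achromat
for aperture_mm in (102.0, 51.0):  # full aperture vs. a half-aperture mask
    n = focal_length_mm / aperture_mm
    print(f"{aperture_mm:.0f} mm aperture -> f/{n:.1f}, "
          f"CFZ at 450 nm = {cfz_um(450, n):.0f} um")
```

Halving the aperture doubles the focal ratio and so quadruples the critical focus zone, which is why a mask tames the blue halo so effectively.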

 

I did some experiments a while ago, and there is a thread about it - let me see if I can find it and link it here. Here it is:

There is also an image of M42 that I took using this approach, in an album:

 


On ‎21‎/‎01‎/‎2017 at 13:02, ollypenrice said:

The ideal practice is to shoot red and green at the lower elevations and blue and luminance when the object is high. This is because blue suffers most from atmospheric dispersion and luminance is all about detail so it needs good seeing.

I've never thought about that.  Being a DSLR imager there's not a lot I can do about it but still, I've never previously come across this good advice.

Mark


On 22/1/2017 at 21:21, vlaiv said:

[...]

I will stick to point 2 of your list, as it is the simplest for me and the closest to my setup. I am not so familiar with deconvolution and similar procedures, and I would rather get the best from what I am currently able to do, and leave learning more about processing for when I have reached that point.

As for point 4 of your summary, I also own a Vixen 80 mm f/11 achromatic refractor, which shows a better star test than my Meade 7". Given your suggestion, it would be worth trying some imaging with it, at least for comparison. Of course, I understand that I will lose detail and light... or will it be exceedingly slow?

Another point: even at proper focus, the R, G and B images should show Airy disks of different sizes, due to the different wavelengths - varying from 8 to 13 pixels on my 5.4 x 5.4 µm pixel sensor. I believe these differences will be levelled out by the seeing and may never be visible. Still, should I expect some contribution to B and G bloating from the inherent difference in Airy disk size?

Finally, I also put a UV/IR-cut filter on the camera, to get rid of luminous halos around star images, as I used to do for B/W imaging. I keep this filter on when imaging in RGB. Do you think this is wrong? Can it replace the need for L imaging?


5 hours ago, cesco said:

I will stick to point 2 of your list, as it is the simplest for me and the closest to my setup. [...]

Yes, you can try the 80 mm F/11, and yes, you will need longer exposures if you use the same camera (same pixel size).

I'm a bit confused as to what setup you are using. 8 to 13 pixels per Airy disk is too much resolution to be useful. If we examine the apertures amateurs usually use - 3" to 10" - Airy disk sizes run from 4" down to 1" of arc. If you are getting 8-13 pixels, you must be imaging in the 0.5"/pixel to 0.1"/pixel range. While 0.5"/pixel is doable (though it really needs good equipment), 0.1"/pixel is certainly not. Also, to reach such resolutions with a 5.4 µm pixel one would need 2400 mm+ of focal length. Even on a 14" SCT without a focal reducer you would still get about 3 pixels per Airy disk, not 8 to 13.

So if your setup really is at 8-13 pixels per Airy disk, then yes, you might get bloating / a red halo around stars due to the Airy disk alone. Seeing would soften it a bit, but it would still be visible. If, on the other hand, you made an error in calculating the Airy disk size and/or pixels per Airy disk, you should not worry about it. For the setups and resolutions most amateurs work at, the difference due to the Airy disk is usually much less than half a pixel, and it is not really noticeable in long exposures because seeing enlarges the stars anyway.
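The Airy disk arithmetic can be checked with a short sketch using the standard linear diameter 2.44 · λ · N (to the first minimum). The 178 mm f/9 scope and 5.4 µm pixels are from the thread; the per-band wavelengths are assumed mid-band values:

```python
def airy_disk_diameter_um(wavelength_nm: float, focal_ratio: float) -> float:
    """Linear diameter of the Airy disk (to the first minimum): 2.44 * lambda * N."""
    return 2.44 * wavelength_nm * 1e-3 * focal_ratio

pixel_um = 5.4                                        # pixel size from the thread
for name, wl in (("B", 450), ("G", 530), ("R", 650)):  # assumed mid-band wavelengths
    d = airy_disk_diameter_um(wl, 9.0)                 # 1600 mm / 178 mm ~= f/9
    print(f"{name} ({wl} nm): {d:5.1f} um = {d / pixel_um:.1f} px")
```

At f/9 with 5.4 µm pixels this comes out around 2-3 pixels per Airy disk, consistent with the correction made later in the thread, not 8-13.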

As for RGB imaging alone, without L, the reasoning goes like this:

If you take, for example, 1 h of each channel, calibrate and stack the frames, you can add the resulting R, G and B channels and in theory get the same or even better S/N than 1 h of L recorded the same way (same camera, same number of subs) - provided your R, G and B filters neither overlap in wavelength nor leave wavelengths uncovered.

Why? The signal you gather across the three channels is the signal you would get in 1 h of luminance - it is only split into three sessions, each collecting 1/3 of it. But stacking 3x the frames reduces dark and read noise by a factor of 1.7 (the square root of 3), and adding the signal brings its shot noise to the same level as 1 h of full spectrum.

But mind you, when you create this kind of L you still have to combine it with the R, G and B frames to produce the colour image - and since those frames contain the original noise, it gets injected back.

So I think the best approach would be: shoot R, G and B; add them together to form L; then create a separate image by combining R, G and B as colour; apply heavy noise reduction to that copy; split it into L*a*b; and replace the L component with the L you got by adding R, G and B together. I hope that makes sense (and that my reasoning is ok) :D
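The signal-splitting argument can be sanity-checked with a toy Monte Carlo simulation. The numbers (photon counts, read noise) are purely illustrative; it shows the shot-noise-dominated case, where the summed R+G+B comes out essentially equal to 1 h of full-spectrum L, with only a small read-noise penalty from the extra exposures:

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials = 200_000
signal_per_band = 1000.0   # photons collected in 1 h through each of R, G, B
read_noise = 5.0           # e- RMS per exposure (illustrative)

# 1 h of full-spectrum L in a single exposure: all three bands at once
L = rng.poisson(3 * signal_per_band, n_trials) + rng.normal(0, read_noise, n_trials)

# 1 h each of R, G and B, summed afterwards into a synthetic luminance
synth = sum(rng.poisson(signal_per_band, n_trials)
            + rng.normal(0, read_noise, n_trials)
            for _ in range(3))

print(f"SNR(L, 1 h)  = {L.mean() / L.std():.1f}")
print(f"SNR(R+G+B)   = {synth.mean() / synth.std():.1f}")
```

The two SNR figures land within a few percent of each other, which is the heart of the argument; the gap grows only if read noise dominates the per-band signal.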

 

Using a UV/IR-cut filter together with R, G and B filters is in general not necessary, and even slightly worse than using the R, G and B filters alone - provided the R, G and B filters are of good quality and do not pass light outside the main band they were designed for. There are, however, cases where adding a UV/IR-cut filter is a good thing: if your R, G and B filters leak a little light outside their respective main bands, for example, or to trim a bit off the short end (for blue) and the long end (for red). Astronomik, for example, makes three versions of UV/IR-cut filters - L1, L2 and L3. L3 is the narrowest, excluding wavelengths below 420 nm and above 680 nm - so the Astronomik L3 would be a good fit with R and B to reduce bloat when shooting with an achromatic refractor. With G you would see no benefit - it would even be slightly worse, since the additional filter lowers light levels by a couple of percent.


On ‎04‎/‎03‎/‎2011 at 21:46, Dangerous-Dave said:

I have attempted RGB using an achromat and, sadly, refocusing is not the sum total of your problems. You get bloated stars in the blue channel which manifest themselves as halos in the reassembled colour image. You can attempt to remove the halos in post-processing, Noel's tools do a good job or you can run a star reduction action on the blue channel only, but if any of your haloed stars overlap nebulosity or other stars you are right royally snookered.

Same here, you just end up with bloaty blue stars that are a royal pain to process! I gave in and bought a Tak FSQ85 - nice:)


5 hours ago, vlaiv said:

[...]

OK, you are right! I had the wrong formula for the Airy disk diameter: whoever published the one I found on the net forgot a "sin"! After correcting the formula, I now get Airy disk widths ranging between 1.6 and 2.3 across R, G and B (my scope is 1600mm focal length at f/9).
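For anyone checking the numbers, the two standard forms of the Airy disk diameter (angular: 2.44 λ/D; linear at the focal plane: 2.44 λ × f-ratio) can be evaluated like this - a small Python sketch using the 1600mm f/9 figures mentioned above, with rough band-centre wavelengths assumed for R, G and B:

```python
import math

# Assumed scope figures from the post above: 1600 mm at f/9 -> ~178 mm aperture.
FOCAL_MM = 1600.0
F_RATIO = 9.0
APERTURE_M = FOCAL_MM / F_RATIO / 1000.0  # ~0.178 m

RAD_TO_ARCSEC = 180.0 / math.pi * 3600.0

def airy_diameter_arcsec(wavelength_nm, aperture_m=APERTURE_M):
    """Angular diameter to the first dark ring: 2 * 1.22 * lambda / D."""
    return 2.44 * wavelength_nm * 1e-9 / aperture_m * RAD_TO_ARCSEC

def airy_diameter_um(wavelength_nm, f_ratio=F_RATIO):
    """Linear diameter at the focal plane: 2.44 * lambda * f-ratio."""
    return 2.44 * wavelength_nm * 1e-3 * f_ratio

# Rough band centres (assumed, not the exact filter passbands)
for name, wl in [("B", 450), ("G", 530), ("R", 650)]:
    print(f"{name}: {airy_diameter_arcsec(wl):.2f} arcsec, "
          f"{airy_diameter_um(wl):.1f} um")
```

Note the diameter scales linearly with wavelength, so the red Airy disk is always the largest - any extra bloat in B or G must come from focus error or chromatic aberration, not diffraction.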

As for luminance, it should be B/W, right? Then should I combine R, G and B in B/W to form L BEFORE I put colours into each channel image? Then I should make a second B/W L by combining R, G and B again and apply heavy denoising to it - but what do you mean by splitting it into L*a*b? Then I take away this L and put in the first one... when should I put the colours in? Would Registax 6 be good for all this, and how?

My RGB filters are the dielectric, narrow band type from Baader (R: 590-710 nm; G: 500-560 nm; B: 400-500 nm). They are also labelled H-beta (B), OIII (G), and H-alpha/SII (R). I was using the broad band type before, also Baader, and got no B or G bloating, so I was surprised to see B and G bloating with these narrow band filters - I expected them to be better, hence my search for answers. With the broad band RGB filters I had to use a UV/IR-cut to get rid of the extreme ends of the spectrum passing through them. The bloating of the B channel worries me most: in an achromat I expected the B star size to be close to R, not to G, because I focused on R, which should coincide with B. A later quote in this topic reports B bloating in an achromat... so how is it that I noticed no bloating at all in the G and B images from my achromat when using broad band filters plus a UV/IR-cut? What you say at the end of your last quote makes me suspect that, in general, playing with filters is not so straightforward: rather, a quite delicate equilibrium of optical components must be reached, and this gets harder the more accurately the wavelengths are defined - the coarse filters were not so demanding in my hands!


2 hours ago, fireballxl5 said:

Same here, you just end up with bloaty blue stars that are a royal pain to process! I gave in and bought a Tak FSQ85 - nice:)

This is just what I get... I shifted from broad band (planetary) RGB filters to narrow band, expecting image improvement; instead I got B bloating. I knew G would be problematic, as it is, but G can be refocused. Now, if I cannot rely on refocusing, things are getting worse. I hope that simple enough processing will solve all this!


2 hours ago, fireballxl5 said:

Same here, you just end up with bloaty blue stars that are a royal pain to process! I gave in and bought a Tak FSQ85 - nice:)

Same reply... should I go back to broadband RGB? In my attachments I added a picture of M33 in which a green halo around stars is visible (narrow band RGB filters), but I know I kept the R focus for B and G as well, thinking the filters' parfocality would hold. Without refocusing, it seems G has the bigger problems.


54 minutes ago, cesco said:

As for luminance,it should be B/W, right? Then should I combine R, G and B in B/W, to form L, BEFORE I put colours in each channel image? Then I should make a second B/W  L by combining  R, G and B again, and apply heavy denoising to it, but what do you mean by splitting it into L*a*b? Then I should take away this L and put the first L....when should I put colours in? Would Registax6 be good at all this, and how?

Ok, let me try to explain a bit better. You shoot your subs for each of R, G and B, and also take darks, bias and flat frames. You calibrate and stack each channel to form a B/W image for that filter/channel, so you end up with three B/W images, one per channel.

First you create L using your favourite software, by adding the layers/images together. You can either add them or average them - it does not matter. Save this image as L.

Next, using your favourite software (you can do it in Gimp for example - but use the 2.9 version, not 2.8, as it lets you work with 16/32 bits per channel instead of just 8), you create the RGB composite - a colour image from the R, G and B images, just as you normally would. Then you denoise/smooth this image quite heavily, until there is no visible noise in the stretched image. You can also do colour balance at this point. In essence you process this image as you normally would, but keep it in a high bit format (don't save it as an 8-bit image yet).

When you are done, image editing programs offer an option to convert to another colour space (Gimp has it, and I'm sure Photoshop does too). In the RGB colour space your composite has three channels - red, green and blue. If you select the Lab (or L*a*b) colour space, the image looks the same when viewed in colour, but the channels carry different information: no longer the blue, green and red parts of the spectrum, but an L (luminance) component plus a and b components, which carry the colour.

This gives you the option to swap the luminance channel for the L image you saved first - the one with less noise - while keeping the colour information. The colour will be really smooth thanks to the denoising, and the detail is preserved in the L image (eyes are more sensitive to variations in light than to variations in colour - which is why L is usually exposed the longest, and people often bin their R, G and B subs). Once you replace the L channel with your saved L image, you have a new RGB image (still in L*a*b mode - you can switch back to RGB now) and can continue processing: you will need to stretch again, maybe adjust the colour if it is a bit off, and do just a little noise reduction on the result.
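The Lab round trip described above can be sketched in code. This is a minimal numpy implementation working on linear RGB values with a D65 white point - a simplified illustration of the principle, not a substitute for Gimp or Photoshop (which also handle gamma encoding and colour management):

```python
import numpy as np

# Linear RGB <-> CIE L*a*b* (D65 white), just enough to demonstrate
# replacing the L channel while keeping the a*/b* colour information.

M = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])     # linear RGB -> XYZ
WHITE = np.array([0.95047, 1.0, 1.08883])    # D65 reference white

def _f(t):
    d = 6 / 29
    return np.where(t > d**3, np.cbrt(t), t / (3 * d**2) + 4 / 29)

def _finv(t):
    d = 6 / 29
    return np.where(t > d, t**3, 3 * d**2 * (t - 4 / 29))

def rgb_to_lab(rgb):  # rgb: (..., 3), linear values in [0, 1]
    xyz = rgb @ M.T / WHITE
    fx, fy, fz = _f(xyz[..., 0]), _f(xyz[..., 1]), _f(xyz[..., 2])
    return np.stack([116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)], axis=-1)

def lab_to_rgb(lab):
    fy = (lab[..., 0] + 16) / 116
    fx = fy + lab[..., 1] / 500
    fz = fy - lab[..., 2] / 200
    xyz = np.stack([_finv(fx), _finv(fy), _finv(fz)], axis=-1) * WHITE
    return np.clip(xyz @ np.linalg.inv(M).T, 0, 1)

def replace_luminance(rgb_smooth, lum):
    """Swap the L* channel of a (denoised) colour image for a sharper
    synthetic luminance (lum in [0, 1]), keeping the a*/b* colour."""
    lab = rgb_to_lab(rgb_smooth)
    lab[..., 0] = lum * 100.0  # L* runs 0..100
    return lab_to_rgb(lab)
```

Usage would be `replace_luminance(denoised_rgb, synthetic_l)` where `denoised_rgb` is the heavily smoothed colour stack and `synthetic_l` is the R+G+B sum, both normalised to [0, 1] - hypothetical variable names, just to show where the two images from the description plug in.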

1 hour ago, cesco said:

My RGB filters are dielectric type, narrow band (R: 590-710 nm; G: 50-560 nm; B: 400-500 nm) from Baader They are also labeled H-beta (B), OIII (G), H-alpha and SII (R). I was using the broad band type before, also Baader, and got no B and G bloating. I was surprised at seeing B and G bloating with those narrow band filters: I expected they were better.... hence my search for answers. With broad band RGB filters I had to use UV IR-cut to get rid of the extreme part of the spectrum passing through the filters. Bloating of B channel worries me most: I expected that in an achromat B size was close to R, not to G, because I focused on R, wich should be coincident with B. The next quote to this topic claims for B bloating in an achromat...., how is that I noticed no bloating at all in G and B images, out of my achromat, when using broad band filters plus UV IR-cut? What you say at the end of your last quote makes me suspect that, in general, playing with filters is not so straightforward: rather it looks that a quite delicated equlibrium of optical components must be reached, and that this is harder to get with more accurate definition of the wavelenght, as coarse filters were not so much demanding in my hands!

Ok, so the set of interference filters you have now are "proper" filters. If you shoot with these and focus for each filter, G should have the smallest stars. If you focused only on R and then recorded G and B at that focus position, you probably ended up with R smallest, then G, and the biggest stars in B. If that is the case, focus each filter separately and you will end up with G smallest, then R, and B the most bloated. There is, however, a case where you might end up with R larger than B - but that would mean your scope is optimised for visual deep sky and not suitable for photo work. Because the human eye is least sensitive in red, scope designers sometimes optimise the B/G part and leave R; most of the time, though, the G/R part is optimised, and one gets those blue/violet halos when looking at, or taking a picture of, bright stars.

As for why you had no bloating with the previous filters: it might be down to their frequency response. The interference filters you have now are broad band (over 100nm wide, as opposed to narrow band filters in the 3-15nm range). If the previous filters were wider still, you had equal bloating in every channel, so you saw no blue halo around stars. If, on the other hand, they were some kind of planetary colour filters, one would need to inspect their frequency response to see what is going on. Such filters usually have slowly rising/falling edges - unlike interference filters, which go from almost 100% to close to 0% in less than 10nm. Those gentle slopes might have played a role as well.


My previous filters were longpass Baader:

R: >650 nm, 95% T, steep sigmoid edge

V: 430-630 nm, 65% T at 520 nm, shallow Gaussian edges

B: 350-600 nm, 65% T at 470 nm, shallow Gaussian edges.

As for the processing, it is much clearer now. I will use DSS to make the first L image. As for the Gimp software, I had never heard of it. I understand your new explanation better, but I'll have to practise to master it - and I know I can ask you. Thank you.


19 hours ago, vlaiv said:

Ok, let me try to explain a bit better. So you shoot your subs for each R, G and B, you also take darks, bias and flat frames. [...]

Finding the proper site to download GIMP is quite hard: there is a mess on the net, especially for version 2.9... Would you please give me a useful pointer? Thank you.


I also had a look at a thread in this section, Filters for DSO imaging, where the author warns that interference filters may be prone to blue star halos, while absorption filters are not. After some research I found an interesting site at www.sk-advanced.com/category/chapter-7-interference-filters explaining why. When using the broadband Baader absorption RGB filters I had scarce detail, but no B- or G-bloated stars. Maybe shifting to good quality absorption RGB filters, instead of interference ones, might be a good idea...


4 minutes ago, cesco said:

interference filters may be prone to blue star  halos

I'd be interested in the source of this comment....

 

We use ITF filters extensively in Ha solar filter construction. The bandwidth shift due to a converging beam is not extreme enough to give rise to a "blue halo" - it's only a matter of a few nm.

 


3 hours ago, cesco said:

Finding the proper site to get GIMP download is quite hard: there is a mess in the net, especially for version 2.9.... Woulg you please give me anuseful  indication? Thank you.

Check this url:

https://www.partha.com/

On the left bar, scroll down past Useful links to the download section. There you can find the links (I recommend 2.9.5 64-bit portable for now - no installer, just run the exe). You can even try Gimp-cc (a colour-corrected version, with some changes to the colour models).


10 hours ago, Merlin66 said:

I'd be interested in the source of this comment....

 

We use ITF filters extensively in Ha solar filter construction.  The bandwidth shift due to a converging beam is not extreme enough to give rise to a "blue halo" it's only a matter of a few nm shift.

 

In the section IMAGING - TIPS, TRICKS AND TECHNIQUES - FILTERS FOR DSO IMAGING, I read: "Baader produce a significantly cheaper LRGB filter set. These are absorptive rather than interference filters. They do let through slightly less light than the more expensive interference filters, but I suspect the differences aren't huge. One advantage of absorptive filters is that they generate far fewer internal reflections. This can reduce the risk of the large, unnatural looking star halos which you do occasionally see with interference filters. These Baader RGB filters aren't IR blocking, so you need an additional IR blocking filter on the nosepiece of your filter wheel."

In addition, the topic is discussed further at: www.sk-advanced.com/category/chapter-7-interference-filters


9 hours ago, vlaiv said:

Check this url:

https://www.partha.com/

On the left bar, scroll down, past Useful links, there is download section. There you can find links (recommend for now 2.9.5 64bit portable - no installer, just run exe) you can even try Gimp-cc (this is color corrected version - some changes to color models).

Thank you. By chance I had a look at FILTERS FOR DSO IMAGING in this section, and noticed the author's strong suspicion that interference filters generate uncontrolled reflections able to build halos around star images. Of course, I do not care about the halos in themselves; rather, I take stellar halos as an indication that halos may spread throughout the whole image - since all light beams can give rise to reflections - and spoil it. The author also says that absorption filters do not show halos. My previous RGB filters were of the absorptive type, and I think that, though cheaper and maybe of lower quality, this is why I never saw halos around stars when using them. I am sorry I have had no success in uploading more images myself (I am a beginner on this forum...) to show the stellar images in a picture of the Ring Nebula taken with broadband absorptive filters.

I also found a site, www.sk-advanced.com/category/chapter-7-interference-filters, with an interesting discussion of the structure and functioning of interference filters, explaining why they are prone to diffuse internal reflections towards the green and blue parts of the spectrum, creating halos and star bloating.

Maybe it would be a good idea to look for ABSORPTIVE RGB filters of good quality...


