M81 & M82 LRGB+Ha IKO dataset processing with PixInsight


Thanks to A320Flyer for detailing his PixInsight process for the M81 & M82 LRGB+Ha IKO dataset.

 

I have tried to follow your process, and I get much better results than before. Jon Rista's noise reduction was a good read. But I am still nowhere near your image, especially the way you bring out the smooth IFN, and also the blue tone in the galaxy.

A couple of questions if you don't mind:

4. RGBWorkingSpace on RGBHa with all channels as 1.0. 

What is the purpose of this? Does it do anything since all factors are set to 1?

2. Stretch using several small iterations of HT until the galaxies are well defined, and then use several small iterations of Curves to bring up the background and IFN whilst keeping the galaxies controlled.

I guess this is where I fail miserably. Did you use a mask at some point during this HT/curves process?

Did you at some stage apply Jon Rista's noise reduction to the RGBHa, or only to the monochrome LumHa?

 

Thanks in advance 🙂 

 

 


On 22/09/2021 at 11:40, Viktiste said:

4. RGBWorkingSpace on RGBHa with all channels as 1.0. 

What is the purpose of this? Does it do anything since all factors are set to 1?

2. Stretch using several small iterations of HT until the galaxies are well defined, and then use several small iterations of Curves to bring up the background and IFN whilst keeping the galaxies controlled.

I guess this is where I fail miserably. Did you use a mask at some point during this HT/curves process?

Did you at some stage apply Jon Rista's noise reduction to the RGBHa, or only to the monochrome LumHa?

4. Since an RGB image does not have a separate luminance channel, it is usual to set all channels to have equal weighting prior to extracting the luminance. This ensures that the R, G and B channels all contribute equally. You could use 0.33, 0.33, 0.33 as long as they are all equal. If you used 1.0, 0, 0, say, you would effectively just extract the red channel.
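To illustrate the idea (just a rough NumPy sketch, not PixInsight code or the actual workflow; the image and weights here are made up):

```python
# Rough NumPy sketch only - not PixInsight code; image and weights are illustrative.
import numpy as np

rgb = np.random.rand(4, 4, 3)            # stand-in for a linear RGBHa image

def extract_luminance(img, weights):
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalised, so 1,1,1 and 0.33,0.33,0.33 give the same result
    return img @ w                       # per-pixel weighted sum of R, G and B

lum_equal = extract_luminance(rgb, (1.0, 1.0, 1.0))   # all channels contribute equally
lum_red   = extract_luminance(rgb, (1.0, 0.0, 0.0))   # effectively just the red channel
```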

2. No mask. Just three HT iterations and four Curves iterations, something like this:

[Attached screenshot: HT and Curves settings]
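For a feel of what several small iterations do, here is a rough NumPy sketch built around the midtones transfer function that HistogramTransformation applies; the midtone values are invented examples, not the settings in the screenshot:

```python
# Rough NumPy sketch only; the midtone values are made up.
import numpy as np

def mtf(x, m):
    """Midtones transfer function, the curve HistogramTransformation applies."""
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

def gentle_stretch(img, midtones=(0.4, 0.35, 0.3)):
    out = img.copy()
    for m in midtones:                   # several small stretches instead of one aggressive one
        out = mtf(out, m)
    return np.clip(out, 0.0, 1.0)

linear = np.random.rand(8, 8) * 0.05     # stand-in for faint linear data
stretched = gentle_stretch(linear)
```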

No NR on RGBHa; the Convolution carried out in step 9 sufficiently dealt with the noise.

After the PI process, I transferred to Photoshop to smooth the IFN. I duplicated the image, made a mask for the galaxies and blurred the background using the Dust & Scratches filter, reducing the opacity to suit. I prefer PS for doing this, but if you wanted to do it in PI, you could probably manage it by using MLT to switch off the lower layers and blending the result with PixelMath and a luminance mask.
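As a rough sketch of that PI alternative (assuming a simple Gaussian blur as a stand-in for MLT with the lower layers switched off, and a crude threshold mask instead of a proper luminance mask):

```python
# Rough NumPy/SciPy sketch only; blur radius and threshold mask are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_background(img, galaxy_mask, sigma=3.0):
    blurred = gaussian_filter(img, sigma=sigma)        # stand-in for MLT with lower layers switched off
    # PixelMath idea, roughly: img*mask + blurred*~mask (the mask protects the galaxies)
    return img * galaxy_mask + blurred * (1.0 - galaxy_mask)

image = np.random.rand(64, 64)
mask = (image > 0.8).astype(float)                     # crude stand-in for a proper luminance mask
result = smooth_background(image, mask)
```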

 

Hope this helps.

Bill


On 24/09/2021 at 21:25, A320Flyer said:

Since an RGB image does not have a separate luminance channel, it is usual to set all channels to have equal weighting prior to extracting the luminance.

I'm not sure I understand that. Equal weighting? Doesn't an RGB image consist of unequal weightings of the RGB channels? If they were equal, wouldn't the image appear black and white?


I look at an RGB image a bit like a cake mix or a cocktail: it's difficult to extract the individual ingredients once they are all mixed together. In an RGB image the luminance is perceived, and if we want to extract a good approximation of it, we need to say roughly how we think it is made up. Hence RGBWS.

Here is a quote from a well known PI guru on the PI Forum:

Well, it gets pretty deep into color theory, but for the purposes of PI it modifies the R/G/B channel weights so that something closer to true luminance is obtained when extracting L* from an RGB image. You want the channels to be weighted equally, which is why you change the weights to 1, 1, 1.

The human eye is most sensitive to green, so with the default channel weighting in an RGB image the green channel participates more in the calculation of L* than the other channels. If you're trying to get an equivalent L image that might have come from an L filter, there's no green bias there: all wavelengths pass equally through the filter.

It doesn't seem to do anything because it doesn't really change how the data is displayed, just how it is interpreted behind the scenes.
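A quick numeric illustration of that point, assuming Rec.709-style weights for the "default" (which may not be PixInsight's exact defaults) and a made-up pixel:

```python
# Rough numeric illustration; Rec.709 weights assumed for the default, pixel values made up.
import numpy as np

pixel = np.array([0.2, 0.6, 0.3])                 # example R, G, B values

default_w = np.array([0.2126, 0.7152, 0.0722])    # green-heavy luminance weights
equal_w = np.array([1.0, 1.0, 1.0]) / 3.0         # the 1,1,1 case, normalised

print(pixel @ default_w)   # ~0.49, dominated by the green channel
print(pixel @ equal_w)     # ~0.37, plain average with no green bias
```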

 

