
Heart and Soul processing question.


Adreneline


I know there are a lot of these images on offer at the moment, but I will add this one to the mix and seek advice/opinions on combining Ha and OIII.

This is 26 x 120s of Ha plus 7 x 120s of OIII. I had every hope of getting more OIII but the clouds had other ideas.

[Attached image: HS_hoo - initial HOO combination]

The subs were integrated in APP and then processed in PI. I used StarAlignment, DynamicCrop, ABE (subtraction) and finally LinearFit before combining in PixelMath as RGB HOO. Light Vortex recommends applying HistogramTransformation (or whatever stretch process you prefer) before combining, but this seems counter-intuitive to me because LinearFit has supposedly equalised the background levels. Any form of stretching is almost certain to leave the background levels different, making it more difficult to remove any resulting colour cast.
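For reference, the HOO combination itself is just a channel assignment. Here is a rough numpy equivalent of that PixelMath step (the names are illustrative, not PI identifiers; ha and oiii are assumed to be registered arrays scaled 0-1):

```python
import numpy as np

def combine_hoo(ha: np.ndarray, oiii: np.ndarray) -> np.ndarray:
    """HOO palette: Ha drives red, OIII drives both green and blue."""
    return np.dstack([ha, oiii, oiii])
```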

The question is: should I combine and stretch, or stretch and combine? What is the received wisdom when combining narrowband channels?

Thanks for looking.

Adrian


I would skip linear fit as it does not make much sense.

I would also do the histogram stretch before channel mixing. In narrowband imaging, Ha is often the dominant signal component - as in your example above. It can be x4 or more stronger than the other components.

You might think that LinearFit would deal with that - but it won't always do it properly, because of the different distribution of the signal (for example, if you have Ha signal where there is no OIII signal and vice versa, LinearFit will just leave the ratio of the two as it is - it won't scale Ha to be "compatible" with OIII).

Once you have wiped the background - effectively set it to 0 where there is no signal (the average will be 0 but the noise will "oscillate" around 0) - then stretching will keep the background at zero, provided you are careful with the histogram and don't apply any offsets. This means that the background will be "black" for the color mixing.
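In numpy terms (just to illustrate the point - this is not a PI process, and the asinh curve is only an example of an offset-free stretch), any stretch with f(0) = 0 and no added pedestal leaves a zero-mean background at zero:

```python
import numpy as np

def asinh_stretch(img: np.ndarray, softening: float = 0.02) -> np.ndarray:
    """Offset-free, odd stretch: zero stays at zero, and the (positive and
    negative) background noise stays centred on it."""
    return np.arcsinh(img / softening) / np.arcsinh(1.0 / softening)
```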

You also might want to create a synthetic luminance layer for your image and do a color transfer onto it once you have done your color mixing.


1 minute ago, vlaiv said:

I would skip linear fit as it does not make much sense.

Thanks vlaiv.

Light Vortex also advocates using LinearFit for broadband imaging but recommends combining before stretching - the opposite of the advice given for narrowband. Would you recommend dropping LF for broadband as well?

I've not tried synthetic luminance before - I will give it a go and see how I get on.

Many thanks.

Adrian


34 minutes ago, Adreneline said:

Thanks vlaiv.

Light Vortex also advocates using LinearFit for broadband imaging but recommends combining before stretching - the opposite of the advice given for narrowband. Would you recommend dropping LF for broadband as well?

I've not tried synthetic luminance before - I will give it a go and see how I get on.

Many thanks.

Adrian

I don't use PI. Now, I'm assuming that LinearFit does what it says, but do bear in mind that I have already made the mistake with PI of assuming that a certain operation "does what it says" - or rather, I had a different understanding of what a command might do based on its title.

I would not use LinearFit for any channel processing. It is useful when you have the same data and you want to equalize the signal - for example, the same target shot in two different conditions. As such it should be part of the stacking routine and not used later in channel combination.
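To make it concrete, this is roughly all a linear fit does (a numpy sketch, not the PixInsight implementation): find a scale and offset that map one registered image onto the other in a least-squares sense.

```python
import numpy as np

def linear_fit(reference: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Rescale target so that a*target + b best matches reference (least squares)."""
    a, b = np.polyfit(target.ravel(), reference.ravel(), 1)  # slope, intercept
    return a * target + b
```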


HistogramTransformation in PI allows you to alter the gain and offset for each colour individually, plus all three together, making it easy enough to set the black level in each colour to balance the background colour. I don't really understand what LinearFit achieves.
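In rough numpy terms (an illustration of the idea only, not the PI process itself), setting the black level per channel to balance the background amounts to subtracting each channel's own background estimate:

```python
import numpy as np

def neutralise_background(rgb: np.ndarray) -> np.ndarray:
    """rgb is an H x W x 3 array in 0..1; use each channel's median as its
    background estimate so all three backgrounds sit at the same level."""
    background = np.median(rgb, axis=(0, 1))   # one estimate per channel
    return np.clip(rgb - background, 0.0, 1.0)
```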

I will add, though, that I'm far from an expert in PI and still very much learning.


4 minutes ago, vlaiv said:

It is useful when you have the same data and you want to equalize the signal - for example, the same target shot in two different conditions.

Thanks vlaiv. That is really useful to know.

In the tests I've carried out so far I find it really hard to distinguish between pre- and post-stretch combining, but that may be due to other inadequacies in my processing regime.

My experience is that 'processes' are often quick to say what they do and not so quick to say what they don't do! Use with care! The provider accepts no responsibility, etc., etc. ;)

Thanks again for your 'qualified' advice - it is much appreciated.

Adrian


3 minutes ago, Gina said:

I will add, though, that I'm far from an expert in PI and still very much learning.

I'm in that club too!

I only know of one "expert" in PI. I think PI is a bit like quantum mechanics ;)

Adrian


4 minutes ago, Adreneline said:

In the tests I've carried out so far I find it really hard to distinguish between pre- and post-stretch combining, but that may be due to other inadequacies in my processing regime.

If you wish, you can post 32-bit FITS of the linear Ha and OIII channels - without any processing done, just stacked - and I can do the color composition for you with the different steps shown.

Mind you, it won't be in PI (ImageJ + Gimp), but hopefully you'll be able to replicate the steps in PI yourself.


6 minutes ago, Gina said:

I don't really understand what LinearFit achieves.

Hi Gina,

This is what Light Vortex has to say on the matter:

"

The PixInsight process responsible for this feat is LinearFit, which assumes that a mathematically linear function can model the difference in average background and signal brightness between a reference image you choose and the target image you apply the process to. As a result, it works best at the very beginning of your post-processing, when your images are linear, but strictly speaking it does not need the images to be linear for it to do its job. Please note that LinearFit requires that the images it is applied to are registered to each other, otherwise there is no correlation between the image we set as reference and the image we apply the process to.

"

See this page for a more complete answer.

Adrian


2 hours ago, Adreneline said:

Here are my calibrated/stacked and registered (but not cropped) fit files for Ha and OIII.

I had a look at these, and the OIII SNR is very poor. From the stacks' titles I'm guessing you only did 960 seconds of OIII? That is less than the Ha, and Ha is the stronger signal anyway.

Let's see what I can pull from these - I'll have to bin the data to try to improve the SNR of the OIII stack.


4 minutes ago, vlaiv said:

From the stacks' titles I'm guessing you only did 960 seconds of OIII?

Absolutely correct! I actually took 20 x 120s but only the first eight were worth keeping; high cloud rolled in earlier than forecast and the moon glow spoilt the rest.

Thanks for looking.

Adrian


First step - crop, bin, and background & gradient removal. I do this in ImageJ; here is the result for OIII:

[Screenshot: binned, background-wiped OIII]

As you can see, even with 4x4 binning (which improves SNR by x4) the signal is barely visible above the noise level.
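For reference, the binning itself is simple pixel averaging (a numpy sketch, not the ImageJ code; the image is trimmed to a multiple of the bin factor). Averaging n x n pixels improves SNR by roughly a factor of n for uncorrelated noise, hence x4 for 4x4 bins.

```python
import numpy as np

def bin_image(img: np.ndarray, n: int = 4) -> np.ndarray:
    """Average-bin a 2-D image by a factor of n in each direction."""
    h, w = img.shape
    h, w = h - h % n, w - w % n                 # trim so the shape divides by n
    return img[:h, :w].reshape(h // n, n, w // n, n).mean(axis=(1, 3))
```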

Ha is of course much better:

[Screenshot: binned, background-wiped Ha]

Now that we have them wiped, we can continue to make an image out of this. The next step is loading those two still-linear stacks into Gimp.

The first step is to create a copy of Ha. That will be our luminance. There is no point in trying to add OIII into a synthetic luminance since it has such poor SNR - we would just make the Ha worse.

We proceed to make a nicely stretched black-and-white version of the image - stretch, noise reduction and so on, to get a nice-looking image in b&w.

[Screenshot: stretched black-and-white Ha luminance]

Now we stretch the other Ha copy, but far less aggressively. We also stretch the OIII stack. We will stretch that one rather aggressively because the SNR is poor, and we will use a lot of noise control because of it.

OIII:

[Screenshot: aggressively stretched OIII]

Don't worry about the uneven background (probably cloud or something in the stack) - we are using Ha as luminance and it should not show in the final image.

[Screenshot: subtly stretched Ha for the color layer]

Notice how subtle the Ha stretch is this time - this is because the Ha signal has good SNR and we don't want it to drown out the color.

Next we do the HOO composition of the channels.

[Screenshot: initial HOO composition]

This does not look good and it shows the effect you feared - an uneven background. But that is only because I had to boost OIII like crazy, since the SNR is so poor - there is hardly anything there. However, using Ha as luminance is going to solve some of that problem.

And the end result after some minor tweaks is:

[Image: final result after some minor tweaks]

There is still a lot of green in the background - it is hard to remove it completely, as doing so kills any OIII signal in the nebula as well.

Another thing that would improve the image would be the use of StarNet++ or another star-removal technique to keep the stars white instead of leaving them with a hue.


Wow! Thanks vlaiv. That's very interesting and I will re-visit the data and the process and have a go myself.

I have to say the main object of the imaging exercise last night was to check out the performance of my new 6nm filter with the 135/1600 combo and make sure I could achieve focus correctly with the spacing I had set up. When I came to use the OIII the conditions were going downhill, and the focus position was not optimal either, as it was no longer at the 'L' on the lens. This was entirely due to mixing filter types - Astronomik Ha (1mm thick) and Baader OIII (2mm thick). I knew I was pushing my luck trying to extract a HOO image but it was worth a go. Nothing ventured, nothing gained. I clearly need to save my pennies for an Astronomik OIII as well!

Really appreciate you taking the time to do this. I will have a go and see how I get on.

Adrian


Well, I have had a go, @vlaiv, at recreating your methodology in PI.

OIII is binned 4x4 and stretched to breaking point. Ha is cloned and moderately stretched for 'colour' and more aggressively stretched for luminance.

Ha and OIII combined as HOO in PixelMath and then synthetic luminance added to the end result, again in PixelMath.

I then adjusted the black point in HistogramTransformation and moved the 'mid-point' to the right to increase the contrast.

The resulting tif was moved to PS for modest noise reduction.

[Image: L-HOO result processed in PI]

The end result is not as vivid as your rendition but it provides an interesting contrast. Perhaps I need to experiment with Selective Colour in PS to see if I can change the hue/saturation.

I will have another go using StarNet++ before I stretch the OIII, but that means using the PC rather than the MacBook (I just cannot fathom StarNet++ in Terminal on the MacBook).

As ever I would value your comments - good and bad!

Many thanks.

Adrian

P.S.

This has been coloured up a bit in PS.

[Image: version coloured up a bit in PS]


8 minutes ago, Adreneline said:

I will have another go using StarNet++ before I stretch the OIII, but that means using the PC rather than the MacBook (I just cannot fathom StarNet++ in Terminal on the MacBook).

As ever I would value your comments - good and bad!

If you try with StarNet++, here is a workflow that I found useful:

Same as above, but:

Once you have stretched Ha to act as a luminance layer, turn it into 16-bit and do star removal on it (as far as I remember, StarNet++ only works with 16-bit images). Next, blend the starless and original images with the starless layer set to subtract. This should create a "stars only" version of the Ha luminance - save that for later.

Make starless versions of Ha and OIII and blend those for color (again, remove the stars on the stretched 16-bit versions). Apply the starless luminance to that and, at the end, layer the stars-only version (which should be just tight, white stars) over the final image.

This will result in tight stars from the Ha, white and without funny hues.
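In numpy terms, the layer arithmetic is simply the following (a sketch, not StarNet++ or Gimp itself; the screen blend is just one common way of doing the final "layer over" step):

```python
import numpy as np

def stars_only(stretched: np.ndarray, starless: np.ndarray) -> np.ndarray:
    """The 'subtract' blend: what is left when the starless version is removed."""
    return np.clip(stretched - starless, 0.0, 1.0)

def layer_stars(base: np.ndarray, stars: np.ndarray) -> np.ndarray:
    """Put the stars-only layer back over the final colour image (screen blend)."""
    if stars.ndim == 2 and base.ndim == 3:
        stars = stars[..., None]            # broadcast grayscale stars over RGB
    return 1.0 - (1.0 - base) * (1.0 - stars)
```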

The above image is rather good. Yes, the saturation is lacking - what is your color combine step like? Could you paste that one? PI with DBE removed the nasty gradients, so there is potential to make this image better than my version, which has that funny OIII gradient.

How did you calibrate your images? It looks like the flats (if you used them) are slightly over-correcting the Ha image. Could it be that you did not take flat darks, or used bias frames as flat darks?


3 minutes ago, vlaiv said:

How did you calibrate your images?

I used flats created immediately after imaging and corresponding dark-flats with darks and a BPM - all in APP.

5 minutes ago, vlaiv said:

what is your color combine step like?

Not sure I understand this - I combined Ha and OIII using PixelMath with Ha assigned to R and OIII assigned to G and B.

For the first image above I literally added synthetic luminance to the HOO image - no scale factors or anything - which resulted in it looking washed out.

The second image in the P.S. was tweaked in PS using Selective Colour and a Hue/Saturation layer.

Adrian


3 minutes ago, Adreneline said:

I used flats created immediately after imaging and corresponding dark-flats with darks and a BPM - all in APP.

Not sure what BPM is?

3 minutes ago, Adreneline said:

Not sure I understand this - I combined Ha and OIII using PixelMath with Ha assigned to R and OIII assigned to G and B.

For the first image above I literally added synthetic luminance to the HOO image - no scale factors or anything - which resulted in it looking washed out.

Ah ok, don't just add luminance - that is not how luminance should work.

If you have the HOO image (with colors and all) and you have the stretched luminance, here is a simple luminance transfer method (you will need to figure out how to tell PI to do the pixel math - I'm just going to give you the math):

final_r = lum * R / max(R,G,B)

final_g = lum * G / max(R,G,B)

final_b = lum * B / max(R,G,B)

This simply means: for each pixel of the output image, take the R component of the HOO image, divide it by the maximum of R, G and B for that pixel, and multiply by the lum value of the corresponding pixel of the luminance image.

That will give you good saturation and good brightness. It is the so-called RGB ratio color/luminance combination method.
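Written out in numpy (just to make the arithmetic concrete - this is not the PixelMath expression itself, and the small epsilon is only there to avoid dividing by zero on pure-black pixels):

```python
import numpy as np

def rgb_ratio_transfer(hoo: np.ndarray, lum: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """hoo is H x W x 3 and lum is H x W, both stretched and scaled to 0..1."""
    peak = hoo.max(axis=2, keepdims=True)            # max(R, G, B) per pixel
    return np.clip(lum[..., None] * hoo / (peak + eps), 0.0, 1.0)
```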


8 minutes ago, vlaiv said:

Not sure what BPM is?

A Bad Pixel Map created by APP.

8 minutes ago, vlaiv said:

Ah ok, don't just add luminance - that is not how luminance should work.

If you have the HOO image (with colors and all) and you have the stretched luminance, here is a simple luminance transfer method (you will need to figure out how to tell PI to do the pixel math - I'm just going to give you the math):

final_r = lum * R / max(R,G,B)

final_g = lum * G / max(R,G,B)

final_b = lum * B / max(R,G,B)

This simply means: for each pixel of the output image, take the R component of the HOO image, divide it by the maximum of R, G and B for that pixel, and multiply by the lum value of the corresponding pixel of the luminance image.

That will give you good saturation and good brightness. It is the so-called RGB ratio color/luminance combination method.

Right! I shall have a play around with PixelMath and figure out how to do this. I suspected my method was too simple!

Watch this space! Hopefully I will get my next offering up this evening.

Many thanks again for all your help and guidance - this has been a great learning experience for me.

Adrian

 


40 minutes ago, vlaiv said:

That will give you good saturation and good brightness. It is the so-called RGB ratio color/luminance combination method.

I figured out how to do your maths in PixelMath and this is the resulting image, saved as a PNG so as not to introduce more noise/artefacts.

[Image: L-HOO result using the RGB ratio method]

It was very noisy so I've reduced the noise level in PS; the colour saturation and brightness have definitely increased.

I had to convert the synthetic luminance to RGB and then extract the channels as L_R, L_G and L_B.

I did the same to the HOO image, saving the channels as R, G and B.

This is the PixelMath dialogue:

[Screenshot: PixelMath dialogue]

I think I need to start again now that I know whereabouts I'm heading.

Adrian


I think this is as good as it is going to get with the data I collected. Hopefully I can collect more OIII one day soon. I decided to crop the image to give a more pleasing picture of this target.

[Image: final cropped HOO composite]

Thanks to @vlaiv for all the help in pointing me in the right direction to extract more detail; it's been quite a journey and a great learning experience.

Adrian


On 04/03/2020 at 12:52, vlaiv said:

You also might want to create a synthetic luminance layer for your image

I am currently producing starless versions in the hope of creating a synthetic luminance from both the Ha and OIII. Is there a correct way to produce the synthetic luminance image? Is it a simple addition process?

Thanks.

Adrian


1 hour ago, Adreneline said:

I am currently producing starless versions in the hope of creating a synthetic luminance from both the Ha and OIII. Is there a correct way to produce the synthetic luminance image? Is it a simple addition process?

Thanks.

Adrian

No, there is no simple process for doing that. In fact, in your case, since you have very low SNR in OIII, using Ha as the luminance is the correct approach.

In general, when you want to create an artificial luminance from multiple channels, you either do a weighted addition (for example, for RGB I would use something like 1/2 of G and 1/4 each of R and B) or you do some fancy SNR-based combination - the latter requires that you calculate the SNR for each pixel in each channel, which is done by stacking normally, stacking to a standard-deviation map, and then using the two to calculate the SNR of each pixel.
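The weighted-addition option is the easy one; in numpy it is just the following (a sketch using the example weights above, assuming registered, similarly scaled channels):

```python
import numpy as np

def synthetic_lum(r: np.ndarray, g: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Weighted sum of registered channels: 1/2 of G plus 1/4 each of R and B."""
    return 0.25 * r + 0.5 * g + 0.25 * b
```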

