
HOO processing



Hi all, first attempt at Ha/OIII processing, have to admit I got in a real mess!

I used Ha as R, OIII as G and a mix of both as B.

Any tips on making the synthetic 3rd colour?

I have Photoshop CS6 and a trial version of PixInsight (so complicated)

Thanks

 

[Image: Soul_HOO_180mins-01.jpeg]


40 minutes ago, knobby said:

I used Ha as R, OIII as G and a mix of both as B.

Interesting! A very good first attempt at HOO.

You can do exactly as the name implies and map R:G:B - Ha:O:O but I find that gives a very 'red' final result which I don't like so much.

I am no expert, but I thought a more common approach was R: Ha, G: a Ha+OIII blend, B: OIII - though I may be wrong. I usually go 30% or 40% Ha in the G with 70% or 60% OIII. I've always put 100% OIII in Blue. I guess at the end of the day you can do what you like as long as you are pleased with the result.

In PixInsight I use PixelMath exactly as above:

R - Ha

G - (0.3*Ha)+(0.7*OIII)

B - OIII

[Screenshot: PixelMath expressions for the R, G and B channels]

If you use the expression editor it is easier to enter the formula for R, G and B.

Also, if you double-click on the identifier at the side of the image you can type in a shorter image name, like "Ha" which makes using PixelMath easier.
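
If you'd rather script the same blend outside PixelMath, a minimal Python/numpy sketch (assuming ha and oiii are already registered, stretched mono arrays scaled 0..1) would look like this - purely illustrative:

```python
import numpy as np

# ha, oiii: registered, stretched mono images as 2-D float arrays scaled to 0..1 (assumed)
ha = np.random.rand(512, 512)     # placeholder data, stands in for the Ha master
oiii = np.random.rand(512, 512)   # placeholder data, stands in for the OIII master

r = ha                            # R = Ha
g = 0.3 * ha + 0.7 * oiii         # G = 30% Ha + 70% OIII, same blend as the PixelMath above
b = oiii                          # B = OIII

rgb = np.dstack([r, g, b])        # (height, width, 3) colour image ready for further processing
```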

HTH

Adrian

 

 


31 minutes ago, vlaiv said:

Hm, the point of HOO is its title, isn't it?

Ha - red 100%, O - green 100%, O - blue 100%

 

Absolutely agree, but I dabbled as I thought that using the same data for G and B would look very bi-coloured, so I tried mixing.


1 minute ago, knobby said:

Absolutely agree, but I dabbled as I thought that using the same data for G and B would look very bi-coloured, so I tried mixing.

Any sort of mix of only two sources will be very bi-colored :D

With different mixes of colors from the two sources (like percentages per channel) you are just changing the hues of the two colors that compose the image. You need some fancier way of fiddling with your data in order to produce a tri-color image from only two sources.

One way to do it would be to assign the percentage per channel based on intensity, so that not all pixels contribute in equal measure - for example, if a pixel is over some threshold value, switch the channel you are assigning its value to, or similar.

That would in effect mean, for example, that strong Ha signal is mapped to yellow (both red and green contribution), while weak Ha signal is mapped to red (no green contribution), and OIII signal is mapped to blue. That would create a "tri-color" image from bi-channel data.
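
A minimal numpy sketch of that threshold idea (the threshold and green weight are arbitrary values for illustration, not a recipe):

```python
import numpy as np

def hoo_tricolor(ha, oiii, threshold=0.25, green_weight=0.5):
    """Weak Ha -> red, strong Ha -> yellow (red + green), OIII -> blue."""
    r = ha
    g = np.where(ha > threshold, green_weight * ha, 0.0)  # green contribution only where Ha is strong
    b = oiii
    return np.dstack([r, g, b])

# ha and oiii assumed to be stretched 2-D arrays in the 0..1 range
rgb = hoo_tricolor(np.random.rand(256, 256), np.random.rand(256, 256))
```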


2 hours ago, MarsG76 said:

I think your image looks brilliant... the way you mixed your blue channel looks to me like it aesthetically works well.

Thanks Mars, I'm beginning to realize that the processing of narrowband images is more like art than processing 😅

Had a go just using Ha as R and OIII as G and B, no blending.

 

[Image: PSX_20191003_171738-01.jpeg]


No matter how you blend the channels, once you've removed the background gradient (without normalize in PI) and/or done a background neutralisation, your original blend is gone. On the very few occasions that I've processed bicolour, I used a mix that just looked good.


1 hour ago, wimvb said:

No matter how you blend the channels, once you've removed the background gradient (without normalize in PI) and/or done a background neutralisation, your original blend is gone

I'm a bit puzzled by this.

The way I've done it is to take the linear mono channels and do background/gradient removal prior to any combination. I also don't like the idea of combining data prior to stretching for NB images - Ha is much stronger, and if you combine your data and then stretch, you will get poor saturation in the color mix and Ha will dominate the image. I tend to do it like this:

- calibrate / stack / whatever, and do gradient / background removal on the linear channels separately

- stretch the mono images until I get a nice-looking level of detail in each and the brightness is at about the same level between them (that largely depends on the SNR of each channel)

- combine the images and do channel mixing to get a certain palette out of it


8 minutes ago, vlaiv said:

I'm a bit puzzled by this.

The way I've done it is to take the linear mono channels and do background/gradient removal prior to any combination. I also don't like the idea of combining data prior to stretching for NB images - Ha is much stronger, and if you combine your data and then stretch, you will get poor saturation in the color mix and Ha will dominate the image. I tend to do it like this:

- calibrate / stack / whatever, and do gradient / background removal on the linear channels separately

- stretch the mono images until I get a nice-looking level of detail in each and the brightness is at about the same level between them (that largely depends on the SNR of each channel)

- combine the images and do channel mixing to get a certain palette out of it

PI has a nice tool called "Linear Fit".
The usual procedure is to use the weakest signal (OIII in this case) as the reference image and apply Linear Fit to Ha (and SII if any), then combine, colour calibrate etc., and stretch at the end.

I go both ways each time... Stretch > Combine, or Linear Fit > Combine > Stretch, and always end up with different results on different targets :) It takes sooo much time! :) I hate this part.


On 03/10/2019 at 17:59, knobby said:

Early days in the journey for me

Hi.

I found this very helpful when I started NB and bi-colour - having said that, it applies to RGB, HOO and SHO in whatever combinations you like:

https://www.lightvortexastronomy.com/tutorial-preparing-monochrome-images-for-colour-combination-and-further-post-processing.html

This takes you step by step through alignment, dynamic cropping, gradient removal and LinearFit of monochrome images prior to combining as RGB or NB.

HTH

Adrian


1 hour ago, RolandKol said:

PI has a nice tool called "Linear Fit".
The usual procedure is to use the weakest signal (OIII in this case) as the reference image and apply Linear Fit to Ha (and SII if any), then combine, colour calibrate etc., and stretch at the end.

I go both ways each time... Stretch > Combine, or Linear Fit > Combine > Stretch, and always end up with different results on different targets :) It takes sooo much time! :) I hate this part.

I'm not sure linear fit is a good tool to use in this context. Presuming linear fit is what I think it is (not certain, since I don't use PI) - and it should be, going by the name and the way you use it - it is really intended for something else.

Let's see how it works, what it should be used for, and why it's not the best choice in this case.

Light from the target passes through the atmosphere, and that causes a certain level of extinction. The level of extinction depends on the transparency of the sky and the "number of atmospheres" the light passes through (the altitude of the target). We can say that the resulting signal is a certain fraction of the original signal:

signal = a*original

There is one more component to this, and that is sky glow. It adds photons to the total measured signal, so it is additive in nature, and the formula above should really be written as:

signal = a*original + sky_glow

Now if you observe two different subs, taken at different times with different target position / transparency and different levels of LP / sky glow, you can see that the following holds:

signal1 = a1*original + b1

signal2 = a2*original + b2

Now we have two different subs with different levels of light recorded. We want to "normalize" those frames - make them "equal" - even if we don't necessarily know the constants a1, b1, a2 and b2. A bit of math shows us how to do it:

signal1 = a1*original + b1 => c*signal1 = c*(a1*original + b1) => c*signal1 + d = (c*a1)*original + (c*b1 + d) = a2*original + b2 = signal2 (for c*a1 = a2 and c*b1 + d = b2)

In other words, by multiplying one sub by some constant and adding another constant we can get the second sub. Or, if we treat each pixel value as signal plus noise and write down a bunch of equations of the form p1*X + Y = p2 (where p1 are the pixels of the first sub and p2 the pixels of the second sub), we can find the X and Y that minimize the error - linear regression (linear fit).
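
A minimal numpy sketch of that fit, assuming two registered subs as float arrays and using an ordinary least-squares fit for X and Y:

```python
import numpy as np

def linear_fit_normalize(sub1, sub2):
    """Find X, Y minimising sum((sub1*X + Y - sub2)^2), then rescale sub1 to match sub2."""
    X, Y = np.polyfit(sub1.ravel(), sub2.ravel(), 1)   # least-squares slope and intercept
    return sub1 * X + Y

# synthetic example: the two subs see the same 'original' through different a (transparency) and b (sky glow)
original = np.random.rand(200, 200)
sub1 = 0.9 * original + 0.05 + np.random.normal(0, 0.01, original.shape)
sub2 = 0.7 * original + 0.12 + np.random.normal(0, 0.01, original.shape)
sub1_matched = linear_fit_normalize(sub1, sub2)        # now on the same scale as sub2
```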

This is an essential step in preparing subs for stacking - you want your subs to be equal in signal strength, with the same level of background. That way sigma reject will work properly, and you can use noise to estimate the weight of each sub. It is often called equalization of frames before stacking. I developed an extension to this that equalizes signal, background and background gradient (so the sub with the least background gradient can be chosen as reference, and the background gradient is minimized that way).

Now that we know what linear fit is, let's see what it requires to work properly - it requires the underlying signal to be the same in both images, and the difference in background to be additive.

If you use linear fit on two subs (or stacks) that don't have the same signal, the difference in signal will be treated as noise; the algorithm will include it in the "minimize error" part and you will not get what you expect. This can sometimes work better and sometimes worse - depending on the background gradients in each channel, but also on the signal strength and distribution in each image. If the signal is in the same place there will be some "equalization", but if it is in different places you won't get the results you are looking for. Likewise, if a large area is covered with signal in one image but only a small area in the other, the results will be poor.

 


1 hour ago, vlaiv said:

I'm a bit puzzled by this

In PixInsight the normal procedure is to first combine the channels and then do gradient removal, but you can do gradient removal first, of course. Gradient removal can include normalisation, which restores the median pixel value of the entire image. This leaves any colour cast intact. If you do not choose this option, gradient removal will also change the colour balance. And there is also the linear fit function, mentioned above. The idea behind this procedure is to have a common median or average pixel value for each master (Ha, S, O), in order to avoid a colour cast.


21 minutes ago, wimvb said:

In PixInsight the normal procedure is to first combine the channels and then do gradient removal, but you can do gradient removal first, of course. Gradient removal can include normalisation, which restores the median pixel value of the entire image. This leaves any colour cast intact. If you do not choose this option, gradient removal will also change the colour balance. And there is also the linear fit function, mentioned above. The idea behind this procedure is to have a common median or average pixel value for each master (Ha, S, O), in order to avoid a colour cast.

Ah brilliant! :D

You just helped me come up with a better way of doing color cast removal :D

In any case, I wanted to point out that the methods you mentioned won't do a good job of color cast removal in every case - in fact the approach is wrong (although they can be used to some extent).

Color cast over the image is a consequence of the LP signal being in the image and its distribution over the channels.

In the formula signal = a*original + b, you actually want to remove b completely.

If we have 3 channels, we have three different coefficients b - b1, b2 and b3 - that have different magnitudes, and their ratio gives the color of the color cast. One way of "removing" the color cast is to make them equal - that will give a neutral / gray cast (r:g:b = 1:1:1) - but better still is to remove b entirely.

Setting the mean / median value of the whole image to the same value across channels will not do this. Both mean and median depend on background and signal values, so the content of the image affects them - they do not depend only on the background.
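
A tiny numpy illustration of the difference between the two (the per-channel offsets and the image are made up for the example):

```python
import numpy as np

b = np.array([0.08, 0.05, 0.03])                # made-up per-channel background offsets b1, b2, b3
rgb = np.random.rand(100, 100, 3) * 0.2 + b     # made-up linear image with that pedestal added

# "equalise": shift each channel so every offset matches the average -> a neutral grey pedestal remains
equalised = rgb - (b - b.mean())

# "remove": subtract each channel's own offset completely -> no pedestal left at all
removed = rgb - b
```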

How can we remove b completely on a single frame, without considering the other channels?

So far I have used the following approach: I take the mean value of all pixels in the image and calculate the standard deviation, then "remove" all pixels above mean + c*stddev and repeat the process. This isolates the "background" pixels after a couple of iterations (depending on c, which I usually set to 3, or somewhere between 2 and 3).

This works very well for images that have an "empty background" - like galaxies or clusters, or anything framed so that there is background in the image. It will not work so well on images where nebulosity covers most of the frame (signal everywhere, so you can't really isolate the background).
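
A minimal numpy sketch of that iteration (the clipping constant c and the iteration count are just example values):

```python
import numpy as np

def background_mask(img, c=3.0, iterations=5):
    """Iteratively reject pixels above mean + c*stddev; whatever survives is treated as background."""
    mask = np.ones(img.shape, dtype=bool)
    for _ in range(iterations):
        values = img[mask]
        mask &= img <= values.mean() + c * values.std()   # drop pixels above the clipping threshold
    return mask, img[mask].mean()                         # background mask and the estimated offset b

# made-up data: flat 0.05 background plus noise and a bright "nebula" patch
img = np.full((200, 200), 0.05) + np.random.normal(0, 0.005, (200, 200))
img[80:120, 80:120] += 0.4
mask, b = background_mask(img)
```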

Here is an example of it:

[Image: single channel and its background map]

Left is a single channel (already wiped of background, but it does not matter - the selection of background pixels works the same), and right is the background map - white where there is signal, black where there is background.

Now take all the pixels that are background - you can do several things with them. One is to calculate the mean value of those pixels, and that is your b. Another is to see whether you can fit a plane through those pixels (linear interpolation), or maybe a higher-order polynomial. That is used for gradient removal; here is an example of that:

[Image: background map and extracted gradient]

Here is the plugin run on the same stack before it has been wiped - it produces the background map and also the gradient to be removed (just subtracting the gradient from the stack does both background and gradient removal). If the gradient is not linear, you need multiple rounds of this to get the best results (get gradient, remove gradient, get gradient again - a bit different this time - remove the new gradient, and repeat until the gradient is negligible in magnitude).
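
A sketch of the plane-fitting idea in numpy, assuming a background mask like the one above and only a linear (first-order) gradient:

```python
import numpy as np

def remove_linear_gradient(img, mask):
    """Fit a plane a*x + b*y + c through the background pixels only, then subtract it everywhere."""
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    A = np.column_stack([xx[mask], yy[mask], np.ones(mask.sum())])
    coeffs, *_ = np.linalg.lstsq(A, img[mask], rcond=None)   # least-squares plane coefficients
    gradient = coeffs[0] * xx + coeffs[1] * yy + coeffs[2]
    return img - gradient, gradient

# made-up example: flat frame with a left-to-right gradient; every pixel counts as background here
img = np.full((200, 200), 0.05) + np.linspace(0.0, 0.1, 200)[None, :]
mask = np.ones(img.shape, dtype=bool)
wiped, gradient = remove_linear_gradient(img, mask)
```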

Now on to "new" method that I just thought of ....

We need to distinguish between parts of the image where there is signal and where there is background only, so we can take only background pixels and "normalize" on those only - by subtracting their mean / average value (or median, or putting mean/median to same value of those pixels across channels, but not much point in that if you can remove background signal completely - by subtracting it from each channel separately).

How do we know what is background? Maybe stack statistics can help there. When stacking we need to do both average value to get signal and noise value or standard deviation stacking. Ratio of the two will give us SNR of each pixel. For all intents and purposes, we can consider background - any pixel that has SNR low enough (that you can't pull any visible signal with stretching) - something like SNR of 1 or lower is definitively background.

Only issue is that LP is also signal, but LP is very predictable signal - it is uniform, either constant or like above linear gradient, or well approximated with low order polynomial function. Other signal - true target signal is much more complex.

What is left is "guessing game" - we guess LP level, subtract that LP level, check all pixels that have SNR less than 1 after signal adjustment for mean value - is that mean value 0? No, back to step one - guess LP level again (and next guess is going to be based on how far off mean value of "background" pixels is to 0 - closer to zero we are, less of adjustment of start LP level we need).
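
A rough numpy sketch of that guessing game, assuming we keep both a per-pixel mean stack and a per-pixel noise stack from stacking (the bisection bounds and the SNR < 1 cut are just example choices):

```python
import numpy as np

def estimate_lp(mean_stack, noise_stack, iterations=30):
    """Bisect on a constant LP level until the SNR < 1 pixels average out to roughly zero."""
    lo, hi = 0.0, float(mean_stack.max())
    for _ in range(iterations):
        guess = 0.5 * (lo + hi)
        signal = mean_stack - guess
        background = signal[signal / noise_stack < 1.0]   # pixels treated as background at this guess
        if background.size == 0:
            lo = guess        # nothing looks like background yet -> LP guess is too low
        elif background.mean() < 0.0:
            hi = guess        # background pushed below zero -> LP guess is too high
        else:
            lo = guess        # background mean still positive -> LP guess is too low
    return 0.5 * (lo + hi)

# made-up stacks: uniform LP of 0.07 plus noise, a bright patch of real signal, constant noise estimate
mean_stack = np.full((200, 200), 0.07) + np.random.normal(0, 0.01, (200, 200))
mean_stack[50:90, 50:90] += 0.5
noise_stack = np.full((200, 200), 0.01)
lp = estimate_lp(mean_stack, noise_stack)
```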

Still some things to work out, but I think it will work. I'm not sure I can make it work in the special case where the whole image is covered with signal - but there we could use the signal-to-SNR relationship (including read/dark noise) to do the estimation. Will need to think about it some more.

 


1 hour ago, vlaiv said:

Color cast over the image is a consequence of the LP signal being in the image and its distribution over the channels.

That's why I always do gradient removal and background neutralisation as two separate steps. In PI there are several methods to remove a colour cast:

  • DBE, which removes a gradient and can remove a colour cast
  • Linear fit, which can "align" median pixel values between monochromatic images before combining them
  • Background neutralization, which shifts and linearly stretches (a*x + b) pixel values based on a reference
  • Colour calibration, used to create a white balance after background neutralization
  • Photometric colour calibration, similar to colour calibration

Each method has its own algorithm. Unfortunately, I know too little about their differences to go into further detail.

@vlaiv, what you describe in your post seems to me to be more or less what DBE does regarding gradient removal, and what background neutralization does regarding colour cast removal.

