SPCC and / or G2v?

If this is in the wrong place then I trust the Mods to move it.

I've been doing my stacking and post-processing in AstroArt V8 up to now, using a first-order G2v calibration to equalise exposures in the R, G, and B channels, but I may be moving to PixInsight after watching Adam Block's video on the SPCC script.

The question I have is: Should I keep the G2v calibration in my exposures, or will this interfere with the SPCC process?

From the video it looked like he was using equal exposures for the separate RGB channels.


TBH I'm not sure what I'm asking.

What I've been doing with the G2v calibration (which I should re-do to improve its accuracy) is adjusting the exposure through each filter to make the backgrounds at least approximately the same.

I'm thinking, will having different exposures affect the result, or doesn't it matter?


I have to say that I've not really noticed folk changing the exposures per filter, whether that's broadband or narrowband, apart from one exception: luminance.

LinearFit does exactly what you're doing with G2V, by the sound of it. Typically folk choose the lowest-background frame (usually SII in narrowband) and use that as the reference for OIII and Ha.

For my narrowband stuff I'll expose each filter the same, say 300s, and if I want to I'll then use LinearFit to balance them, using SII as the reference.


17 hours ago, scotty38 said:

LinearFit does exactly what you're doing with G2V, by the sound of it.

Actually no.

Linear fit is the wrong way to go about things.

It imparts the color of light pollution onto the image.

One wants to remove the background rather than "equalize" it, for accurate color information.

I would say that the following would be the appropriate order of things:

1. remove background signal from each channel

2. (optionally) do G2V scaling

3. apply a color calibration method to the image data

In fact, G2V scaling is a color calibration method - although inaccurate and highly rudimentary, it certainly beats doing no color calibration.

A better way would be to use a range of star types and compute a transform matrix rather than just 3 values.

The 3 scaling values that we get from G2V calibration form just a "diagonal" matrix - or this:

[ kR  0   0  ]
[ 0   kG  0  ]
[ 0   0   kB ]

Instead of computing all 9 values of the matrix, we compute just the main diagonal and treat the other members as zeroes.

But this is still not the best way to do color calibration, as star colors are a rather limited set. The best way would be to derive a general transform matrix for a large chromaticity set and then apply the above photometric approach as a correction factor.
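To make the diagonal-vs-full-matrix distinction concrete, here is a minimal Python sketch. The numeric values are invented purely for illustration - they are not real calibration coefficients for any camera or filter set.

```python
# G2V-style calibration gives three per-channel scale factors, i.e. a
# diagonal matrix with the off-diagonal members treated as zeroes. A
# full colour transform also has off-diagonal terms that mix channels.
# All values below are made up for illustration.

def apply_matrix(rgb, m):
    """Multiply an (r, g, b) triple by a 3x3 matrix."""
    return tuple(sum(m[i][j] * rgb[j] for j in range(3)) for i in range(3))

# Diagonal matrix: each output channel is just its input channel scaled.
g2v = [[1.12, 0.00, 0.00],
       [0.00, 1.00, 0.00],
       [0.00, 0.00, 0.87]]

# Full matrix: each output channel also borrows from the other two.
full = [[1.12, -0.05,  0.02],
        [-0.03, 1.00, -0.04],
        [ 0.01, -0.06,  0.87]]

pixel = (0.5, 0.4, 0.3)
scaled = apply_matrix(pixel, g2v)   # channels scaled independently
mixed = apply_matrix(pixel, full)   # channels cross-coupled
```

The diagonal case can never express channel cross-talk (e.g. red-filter leakage into green), which is exactly what the six off-diagonal members of the full matrix encode.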


9 hours ago, vlaiv said:

Actually no. Linear fit is the wrong way to go about things.

Thanks Vlaiv but that's why I added "by the sounds of it" 🙂

It sounded to me like G2V was being used for equalising the channels, which is what LinearFit does if you look at the numbers. Whether it's the right tool that's needed, or whether it does things correctly, I wouldn't really know, but it did sound like that's what was being asked for 🙂


5 minutes ago, scotty38 said:

Thanks Vlaiv but that's why I added "by the sounds of it" 🙂

Linear fit is an extremely useful tool, but it is being misused here.

It is actually a very useful tool for preparing subs for stacking. Over the course of an imaging session, as the target changes position in the sky, two things happen:

the target changes brightness, and the level of light pollution changes from sub to sub.

The target changes brightness as it changes altitude while the earth rotates and the "atmosphere number" (airmass) changes. This part is multiplicative in nature.

LP levels also depend on the part of the sky in question, and as the target "tracks" across the sky, LP levels change.

In the general sense, the image signal can be written in the form:

a * target + b

where a is a constant that depends on atmosphere thickness and sky transparency, and b is a constant that represents the average level of light pollution - or background signal.

It is easy to see that the above equation is linear (ax + b), and linear fit changes the a and b coefficients in one sub to match those of another sub - thus making them stack-compatible (same signal strength and same background level).

This is misused as color calibration when linear fit is performed on the 2 other color channels against a selected one (like fitting R and B to G). It leads to a "gray world" type of artificial white balance, and it also tries to make the background color gray. The "gray world" white balance algorithm is based on the premise that the average color in any image is gray (there is as much blue as red and green, so that the average pixel value is really gray) - but that is a flawed assumption.

Also, assuming that the background LP is gray is flawed. In most cases it is orange in color and should not be scaled like that - but rather removed altogether from the image.
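The a * target + b matching described above can be sketched in a few lines of Python. This is a toy version that assumes two already-aligned subs flattened to plain lists of pixel values; real implementations work on 2D frames and reject outliers such as stars and hot pixels.

```python
# Minimal sketch of linear-fitting one sub against a reference, per the
# model above: reference ~= a * sub + b. Toy data, for illustration only.

def linear_fit(sub, reference):
    """Least-squares estimate of (a, b) such that reference ~ a*sub + b."""
    n = len(sub)
    mx = sum(sub) / n
    my = sum(reference) / n
    var = sum((x - mx) ** 2 for x in sub)
    cov = sum((x - mx) * (y - my) for x, y in zip(sub, reference))
    a = cov / var
    b = my - a * mx
    return a, b

def apply_fit(sub, a, b):
    """Rescale a sub so its signal strength and background match the reference."""
    return [a * x + b for x in sub]

# Toy data: the second sub is the first with extra extinction (x0.8)
# and a higher light-pollution pedestal (+50 counts).
reference = [100.0, 120.0, 90.0, 150.0, 110.0]
sub = [0.8 * x + 50.0 for x in reference]

a, b = linear_fit(sub, reference)
matched = apply_fit(sub, a, b)
```

Note that the fit both scales the signal (a) and shifts the background (b) - which is exactly why fitting R and B against G drags the light-pollution colour into the white balance.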

 


10 minutes ago, vlaiv said:

Linear fit is an extremely useful tool, but it is being misused here.

 

I've noticed it can be a bit dodgy though - assuming it's valid for use in mosaic images. I could often get two images to linear fit together, but quite often when you stretched the merged linear-fitted images, one would get a black background before the other anyway...

It also couldn't handle one of my panels that accidentally had a 768 offset instead of 0 (before I knew what offset to actually use). Linear fit just could not correct that, and I got my Python-coding friend to make a script to just subtract 768 from each pixel of a FITS file, which did work. Sadly the mosaic was impossible for me to complete regardless, as I couldn't balance each panel's brightness... Not sure why linear fit wasn't working. I even tried the DNA linear fit script too, I believe.


Thanks for all the replies guys, plenty for me to take on board.

Regarding LP, it's mainly to the east (Weymouth) and west (Bridport). To the south the nearest town of any size is in France, about 350 km away.

I will give the replies the attention they deserve, and try to come back with a sensible post tomorrow.


7 minutes ago, pipnina said:

I've noticed it can be a bit dodgy though - assuming it's valid for use in mosaic images.

Depends on how linear fit is implemented.

I created my own version (it needs aligned and cropped subs to work properly). It is a bit more advanced, as it also handles a first-order background gradient - it corrects the "tilt", or gradient direction, between subs so that the gradient looks the same in the two subs.

If you choose the sub with the minimum gradient as the reference, all others will have their gradient minimized as well.

It also makes gradient removal much easier on the whole stack, as the gradient direction will be aligned and the same - so it will look linear in the final image as well.

My version works quite well. Not sure how PI or other software have this implemented.
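A first-order background gradient of the kind described is just a plane a*x + b*y + c over the frame. Below is a minimal NumPy sketch of fitting and subtracting such a plane; it assumes a star-free synthetic frame (a real implementation would mask stars before fitting), and the tilt and pedestal values are invented for illustration.

```python
import numpy as np

def fit_plane(frame):
    """Least-squares fit of a*x + b*y + c to every pixel of a 2D frame."""
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    coeffs, *_ = np.linalg.lstsq(A, frame.ravel(), rcond=None)
    return coeffs  # (a, b, c)

def remove_plane(frame):
    """Subtract the best-fit plane, removing a first-order gradient."""
    h, w = frame.shape
    a, b, c = fit_plane(frame)
    ys, xs = np.mgrid[0:h, 0:w]
    return frame - (a * xs + b * ys + c)

# Synthetic frame: tilt of 0.5 counts/px in x, 0.2 in y, pedestal 100.
ys, xs = np.mgrid[0:64, 0:64]
frame = 0.5 * xs + 0.2 * ys + 100.0
flat = remove_plane(frame)
```

Matching each sub's fitted plane to a reference sub's plane, rather than subtracting it outright, would reproduce the gradient-matching behaviour described above; subtracting it, as here, removes the gradient entirely.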


23 minutes ago, vlaiv said:

Depends on how linear fit is implemented.

I suppose in PI you might do background extraction before the fit? I don't know; I tried it, but maybe some small residual gradients were left (the scope was a bit dodgy in that regard).


10 hours ago, vlaiv said:

Linear fit is an extremely useful tool, but it is being misused here.

Apologies @DaveS if this is going OT.

Thanks again Vlaiv, as always.... Hopefully I'm not misunderstanding, but are you then saying that if you have three masters, either RGB or SHO, then using LinearFit to equalise them before combination is not a good thing to do?


36 minutes ago, scotty38 said:

...are you then saying that if you have three masters, either RGB or SHO, then using LinearFit to equalise them before combination is not a good thing to do?

Yes, I would not do it - there is really no need to do it and it skews the data.


On 28/12/2022 at 18:36, DaveS said:

The question I have is: Should I keep the G2v calibration in my exposures, or will this interfere with the SPCC process?

 

On 28/12/2022 at 19:48, scotty38 said:

If I understand what you're asking you can use LinearFit in PI to match the mean background between the channels. Then you can use SPCC on the resulting combined image if you wish.

As I wrote before, SPCC together with background neutralisation will in essence undo any G2V exposure matching. The disadvantage of G2V-matched exposures is that each channel needs its own master dark. Since any colour calibration/background neutralisation step will equalise the channels to get a neutral background, you are implementing a more complex workflow in data acquisition that will be undone in post-processing. Also, there is no need to match RGB channels with LinearFit in PI, because even this will be undone in colour calibration. The only possible benefit of doing LinearFit would be if you have a dominant colour that affects DBE/ABE. The excessive dominance of one channel may make it necessary to set such a high tolerance during background extraction that it can affect the background extraction of the other two channels.

(Just as a side note, ZWO once designed a set of RGB filters to match the QE curve of the then-popular ASI1600MM, in order to achieve equal flux for all three channels with one single exposure time. They haven't done that for any newer model, AFAIK.)


2 minutes ago, wimvb said:

Since any colour calibration/background neutralisation step will equalise the channels to get a neutral background

Why would this be?

If you remove background from each of the channels separately, without scaling the signal, it won't mess up color calibration at all.
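As a minimal sketch of what "remove background without scaling the signal" means in practice: estimate each channel's background level and subtract it, leaving the signal itself untouched. A plain median is used here as a simple, star-resistant background estimate on invented toy data; real tools such as DBE/ABE also model gradients rather than a single level.

```python
# Toy per-channel background removal: subtract an estimate of the
# background level without scaling the signal. The median is used as a
# simple, star-resistant estimator; real tools also model gradients.

def remove_background(channel):
    """Subtract the channel's median background level from every pixel."""
    values = sorted(channel)
    n = len(values)
    if n % 2:
        median = values[n // 2]
    else:
        median = 0.5 * (values[n // 2 - 1] + values[n // 2])
    return [v - median for v in channel]

# Mostly-background pixels around 11 counts, plus one bright "star".
red = [10.0, 12.0, 11.0, 50.0, 10.0]
red_flat = remove_background(red)  # background near 0, star keeps its signal
```

Doing this to each channel independently removes the (typically orange) LP pedestal without imposing any colour relationship between the channels, which is the point being made above.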


1 minute ago, vlaiv said:

If you remove background from each of the channels separately, without scaling the signal, it won't mess up color calibration at all.

AFAIK, that's how colour calibration processes in PI work.


6 minutes ago, wimvb said:

As I wrote before, SPCC together with background neutralisation will in essence undo any G2V exposure matching.

I am not sure anyone was looking at using G2V AND LinearFit. As far as I was aware, it was a question about doing something similar to G2V in PI. Whether the answer is LinearFit has generated more discussion though.


Ok, I think that most of the confusion in threads like these comes from (me and) the following:

- I tend to advocate the approach that is mathematically and physically correct, even if it is not implemented in software

- people tend to discuss what is available in software as a means to get the result they want, even if it is not completely physically correct

Sorry that I contributed to the confusion; as the thread title clearly says, what is being discussed is the SPCC vs G2V approach and not the correctness of either method.


OK, to get back to my original question, which I think has been answered.

What I have taken from this discussion is that if I use the SPCC script in PI then I don't need to worry about G2v calibration and can use the same exposure for each filter.

NB is another ball game entirely, and needs another thread.


You'd probably want to test both and see which you prefer.

G2V is calibrating against a Sun-like star, whereas SPCC, from my understanding, looks at your image and determines what the colour calibration should be against a "standard" galaxy colour. There will be differences in both, really. It would be interesting to see whether either falls down in non-standard conditions (for example, a narrow-field view of a cluster of hot stars).

I don't think you need to change the individual exposures for the different colours, though, as you can just change the ratio of the number of exposures, I believe. However, strictly speaking you should expose a G2V star on the same night / same area of sky as your target.

SPCC is ultimately simpler but arguably is processing based rather than your own "data" based.

