More integration, worse image!



So I wanted to take another look at processing some data today and I've noticed something I'm a bit stumped by. When I first captured some Ha on the Rosette Nebula, I processed the Ha data, which I'm pretty sure had less integration time. It didn't take much to stretch out and see some nice details. Fast forward to having collected some more Ha and stacked it to produce an SHO image, and the Ha data is worse; I've only just noticed this. It doesn't stretch out that well and becomes more noisy when stretched to the same level as the first stacked data. While I'm still learning, I would like to know what may be causing this! I believe there has been no change in settings in DSS, so why would more integration time produce a worse stack that appears to be lacking in detail!?

Second is the latest stack, with more integration time, where you can clearly see more noise.

 

To add something even stranger: the first stack has a noisier background but less noisy Ha details, and the second stack is the other way round!

[Attached images: Cropfirststack.jpg, Cropsecoundstack.jpg, Backgroundfirst.jpg, Background2nd.jpg]


If you collect data with a new moon and with a full moon about, you should expect more noise in the full-moon subs. If you then integrate the new-moon data first, and then the full-moon data, you may very well end up with the results you show here. That's one explanation.

I don't know the inner workings of DSS, but the image integration routine of PixInsight normalizes the calibrated lights to a reference frame during integration. The brightness (and I assume also the noise) in the stacked image varies if you use a different calibrated reference. DSS most likely does something similar. This is also a possible explanation for your results.
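
To illustrate the idea, here is a minimal sketch of what a linear normalization to a reference frame might look like; this is a hypothetical toy, not the actual DSS or PixInsight routine:

```python
import numpy as np

def normalize_to_reference(sub, ref):
    """Toy linear normalization: sub' = a*sub + b, with a and b chosen so
    the sub's background level (median) and spread (std) match the
    reference frame's. Real integration routines are more sophisticated,
    but the effect is similar: the output brightness and noise depend on
    which frame was chosen as the reference."""
    a = np.std(ref) / np.std(sub)            # match spread
    b = np.median(ref) - a * np.median(sub)  # match background level
    return a * sub + b
```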


5 hours ago, tooth_dr said:

Conditions? The Moon can drastically increase the noise of subs, and that may be affecting the second image?

They were taken at and around the full moon, so that's probably it! It seems the clear nights we are getting keep falling on the full moon at the moment! 😊


3 hours ago, wimvb said:

If you collect data with a new moon and with a full moon about, you should expect more noise in the full-moon subs. …

I will have a look into that, thanks. 


So I had a play around this morning in DSS and I think I have found the issue! I would definitely say I have noisy subs anyway; the Moon and integration time play the biggest part in that. But after playing around with different stacking settings, I found what made my first stack less noisy than my second, and I also realised that both had the same integration time, not one more than the other (long story!). Anyway, what made the difference was the 'Light' stacking setting: when changed from Kappa-Sigma clipping to Median, it produces less noise in the stack, but at the sacrifice of some detail. Problem solved! I must have changed the setting on the first stack a while back, then changed back recently, and didn't think it would make any difference!
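
For anyone curious what those two options do, here is a minimal sketch of both combine methods on a stack of registered subs; an illustration of the general technique, not DSS's actual code:

```python
import numpy as np

def kappa_sigma_combine(subs, kappa=2.5, iters=3):
    """Per-pixel kappa-sigma clipped mean: iteratively mask pixels more
    than kappa standard deviations from the per-pixel mean, then average
    what survives. Rejects outliers while keeping most of the data."""
    subs = np.asarray(subs, dtype=float)
    data = np.ma.masked_invalid(subs)
    for _ in range(iters):
        mu = data.mean(axis=0)
        sd = data.std(axis=0)
        data = np.ma.masked_where(np.abs(subs - mu) > kappa * sd, subs)
    return data.mean(axis=0)

def median_combine(subs):
    """Per-pixel median: maximally robust to outliers, though for pure
    Gaussian noise it is about 25% noisier than a plain mean, which is
    one reason it can trade away a little detail."""
    return np.median(np.asarray(subs, dtype=float), axis=0)
```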


11 minutes ago, michael.h.f.wilkinson said:

I often combine data from different sessions in APP, and there are several options for weighting subs of different quality. I usually find weighting by quality or S/N works well.

Except there is a problem with that: there is no single S/N value for an image, so a single weight can't describe the contribution of a particular sub.


5 minutes ago, vlaiv said:

Except there is a problem with that: there is no single S/N value for an image, so a single weight can't describe the contribution of a particular sub.

PSNR is well defined. In practice, weighting by quality or by estimated noise in the background works. More generally, if you add signals weighted by the inverse of their variances, you should get an optimal result, assuming independent noise.
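
As a concrete sketch of that last point (toy code under the stated independent-noise assumption; `subs` and `variances` are hypothetical inputs):

```python
import numpy as np

def inverse_variance_stack(subs, variances):
    """Weighted average with w_i proportional to 1/sigma_i^2. For
    independent noise this minimizes the variance of the combined
    image, i.e. it is the least-squares optimal combination."""
    subs = np.asarray(subs, dtype=float)          # shape (n_subs, H, W)
    w = 1.0 / np.asarray(variances, dtype=float)  # one weight per sub
    w /= w.sum()                                  # weights sum to 1
    return np.tensordot(w, subs, axes=1)          # weighted mean image
```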


1 minute ago, michael.h.f.wilkinson said:

PSNR is well defined. In practice, weighting by quality or by estimated noise in the background works.

Say we have a session conducted under dark skies (no or minimal LP), with the target tracked from the zenith down to, say, 30° altitude.

Atmospheric extinction is going to produce considerably different target signal intensity, but empty space is going to have pretty much constant a) SNR, which is 0 since the signal is 0, and b) noise, mostly read noise.

In this case the background is best averaged with equal coefficients; any other choice of coefficients will give less noise reduction and a noisier resulting background.

The signal, on the other hand, benefits from different coefficients in order to get the best SNR in the signal areas. We end up with a noisier background if we base our coefficients on peak SNR.

In any case, what I'm trying to say is that there are better ways of stacking data with uneven SNR. I developed an approach that gives very good results; here is an outline of it (a rough code sketch follows below):

1. the image is stacked using a regular average

2. the resulting stack is divided into zones based on pixel intensity (similar to how you produce a histogram by putting pixel values into bins; the bins can be arranged linearly over the pixel range or quadratically, which I believe is better, as it gives fine-grained control in low-signal areas versus high-signal areas where the SNR is already high)

3. for each image in the stack and for each zone, the noise is estimated by stacking all the other subs with their current respective weights and then subtracting the current sub; the standard deviation of the result is taken and divided by the factor (1 - sub_zone_coefficient) / (sum_other_coefficients), which gives us the noise estimate in the current zone of the current sub

4. once we have a noise estimate for each zone, we solve a minimization problem to find new coefficients that maximize the SNR of the weighted stack for that zone

5. a new image is stacked using the zones and the calculated weights, and we return to step 2.

Only a couple of iterations are needed to get stable coefficients.
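
A rough sketch of those five steps in code; this is one reading of the outline above, not the actual ImageJ plugin, and step 4's minimization is replaced with closed-form inverse-variance weights per zone (which maximize the stack SNR when the per-zone signal level is comparable across subs):

```python
import numpy as np

def zone_weighted_stack(subs, n_zones=8, n_iter=3):
    """Zone-based iterative weighting: one weight per sub *per zone*,
    instead of a single weight per sub. `subs` holds registered,
    calibrated subs with shape (n, H, W)."""
    subs = np.asarray(subs, dtype=float)
    n = len(subs)
    weights = np.full((n, n_zones), 1.0 / n)

    # Step 1: initial stack is a plain average.
    stack = subs.mean(axis=0)

    for _ in range(n_iter):
        # Step 2: zones from pixel intensity; quadratic bin edges give
        # finer control at low signal levels.
        t = np.linspace(0.0, 1.0, n_zones + 1) ** 2
        edges = stack.min() + t * (stack.max() - stack.min())
        zone_of = np.clip(np.digitize(stack, edges[1:-1]), 0, n_zones - 1)

        # Step 3: leave-one-out noise estimate per sub and zone.
        noise = np.empty((n, n_zones))
        for i in range(n):
            for z in range(n_zones):
                mask = zone_of == z
                if not mask.any():
                    noise[i, z] = np.inf   # empty zone: weight -> 0
                    continue
                w = weights[:, z].copy()
                w[i] = 0.0
                w /= w.sum()
                others = np.tensordot(w, subs, axes=1)
                resid = others[mask] - subs[i][mask]
                # The residual mixes this sub's noise with the (reduced)
                # noise of the leave-one-out stack; rescale to recover
                # the single-sub noise.
                noise[i, z] = resid.std() / np.sqrt(1.0 + np.sum(w ** 2))

        # Step 4: per zone, inverse-variance weights maximize stack SNR.
        weights = 1.0 / noise ** 2
        weights /= weights.sum(axis=0, keepdims=True)

        # Step 5: restack zone by zone with the new weights, then repeat.
        new_stack = np.zeros_like(stack)
        for z in range(n_zones):
            mask = zone_of == z
            new_stack[mask] = np.tensordot(weights[:, z], subs, axes=1)[mask]
        stack = new_stack

    return stack
```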

 


2 minutes ago, vlaiv said:

Say we have a session conducted under dark skies (no or minimal LP), with the target tracked from the zenith down to, say, 30° altitude. …

As you may have seen, I edited the response. You can estimate the local variance quite easily using the distribution of the squared gradients in a region surrounding each pixel, discarding high gradients and fitting an exponential distribution to the rest. No iterations are needed. In any flat area of the image, the squared gradients have an exponential distribution under the assumption of Gaussian noise (Poisson is close enough for large enough photon counts). This can be used when no calibration information is available. If we assume photon noise dominates (i.e. read noise is negligible), then this estimate is largely superfluous, assuming we know the gain settings. Either way, weighting by the inverse of the variance is optimal in a least-squares sense.
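
A minimal sketch of that estimator, assuming Gaussian noise; this follows the idea described above rather than the paper's exact procedure, and uses the median of the squared gradient magnitudes as a simple robust fit of the exponential's mean that ignores the high-gradient tail:

```python
import numpy as np

def noise_variance_from_gradients(img):
    """Estimate Gaussian noise variance from squared gradient magnitudes.

    In flat regions, g2 = gx^2 + gy^2 is (approximately) exponentially
    distributed when the noise is Gaussian. The median robustly fits the
    exponential's mean (median = mean * ln 2) while ignoring the heavy
    tail produced by real structure (edges, stars)."""
    img = np.asarray(img, dtype=float)
    gx = np.diff(img, axis=1)[:-1, :]   # horizontal pixel differences
    gy = np.diff(img, axis=0)[:, :-1]   # vertical differences, same shape
    g2 = gx ** 2 + gy ** 2
    mean_g2 = np.median(g2) / np.log(2.0)
    # Each difference of two noisy pixels has variance 2*sigma^2, so
    # E[g2] = 4*sigma^2 for noise with per-pixel variance sigma^2.
    return mean_g2 / 4.0
```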


3 minutes ago, michael.h.f.wilkinson said:

As you may have seen, I edited the response. You can estimate the local variance quite easily using the distribution of the squared gradients in a region surrounding each pixel. …

Why use approximate methods when we actually have the variance for each pixel, given that we have a set of images that we are trying to stack?

In fact, imagine you have 10 subs with an unknown signal in them, each polluted with Gaussian-type noise. How are you going to estimate the magnitude of that Gaussian noise? (This is actually quite useful as a way of determining the read noise of a sensor, for example.)

Well, you stack 9 of the subs using the average method; that keeps the signal the same and reduces the noise to 1/3 of its original value. Now you subtract the remaining sub from that stack: the signals have the same magnitude and cancel out, and you are left with 1/3 of the noise plus the remaining sub's noise, added in quadrature. The result is sqrt(1/9 + 1) = sqrt(10)/3 times the single-sub noise.

You measure the standard deviation of the resulting image and divide it by sqrt(10)/3 to get the noise in each of these subs.

If you have a large number of subs whose noise differs a bit, and you average them and subtract the remaining sub, you'll get a value very close to the noise in that particular sub. This is because noise adds in quadrature, and when you add two noises of significantly different magnitudes, the result is very close to the larger one.

Why not exploit this?
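
A quick sketch verifying that arithmetic on synthetic data (the signal, noise level, and sub count are made up for the illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
sigma = 5.0                                      # true per-sub noise
signal = rng.uniform(100, 200, size=(256, 256))  # arbitrary fixed signal
subs = signal + rng.normal(0.0, sigma, size=(10, 256, 256))

stack9 = subs[:9].mean(axis=0)    # average of 9 subs: noise is sigma/3
resid = stack9 - subs[9]          # signal cancels, noises add in quadrature
estimate = resid.std() / (np.sqrt(10.0) / 3.0)   # sqrt(1/9 + 1) = sqrt(10)/3
print(f"true sigma = {sigma}, estimated = {estimate:.2f}")   # ~5.0
```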


41 minutes ago, vlaiv said:

Say we have a session conducted under dark skies (no or minimal LP), with the target tracked from the zenith down to, say, 30° altitude. …

Could you put this in less of a 'this has gone way over my head' kind of way!? 😊


14 minutes ago, vlaiv said:

Why use approximate methods when we actually have the variance for each pixel, given that we have a set of images that we are trying to stack? …

Because there may be systematic drift in the system, like a rising moon, or you may be combining two or more stacked images from different sessions without restacking all the subs. In that case, either use the noise information in the FITS header (as computed by the stacking software in each session) or use the above-mentioned estimation method. For a derivation of the exponential distribution, see this paper:

http://www.cs.rug.nl/~michael/caip2003.pdf


35 minutes ago, Rustang said:

Could you put this in less of a 'this has gone way over my head' kind of way!? 😊

Not sure if I can.

Maybe I can put it like this: say you have subs containing a galaxy and background. The background does not contain any signal, only noise, and it should be stacked using one set of weights, which depends on the background noise in each sub, in order to get the smoothest possible background from your stacking.

The galaxy contains signal, so it has an S/N ratio, and it needs to be stacked using a different set of weights. This basic scenario shows that using a single weight per sub is not the optimal solution; here, a better result can be had by using two different weights for each sub, one for the background and one for the galaxy.

There are a number of noise sources in the image that vary during the night. One is the level of light pollution, as people turn lights on and off or the Moon rises and sets. There is transparency, which limits how much signal you are getting, but there is also the atmosphere: as the target moves across the sky, its light passes through a different thickness of atmosphere, and the attenuation of light from the target changes.

You don't need two separate sessions to get subs with different levels of S/N in the same regions, and the S/N ratio also varies from one part of the image to another (as we have seen in the galaxy/background example).

All of this shows that stacking with a single weight is not the optimum solution and that a better one can be used, not only for stacking multiple sessions under different circumstances, but also within a single session.
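
As a toy illustration of the two-weight idea (all numbers made up; inverse-variance weighting assumed for both regions):

```python
import numpy as np

# Two subs: equal read noise in the background, but sub 2 was shot
# through thicker atmosphere, so the galaxy signal is weaker in it.
bg_noise = np.array([2.0, 2.0])    # background noise per sub
gal_snr = np.array([10.0, 4.0])    # galaxy S/N per sub

# Background: equal noise -> equal weights give the smoothest result.
w_bg = (1.0 / bg_noise**2) / np.sum(1.0 / bg_noise**2)   # [0.5, 0.5]

# Galaxy: weighting by SNR^2 favours the better sub.
w_gal = gal_snr**2 / np.sum(gal_snr**2)                  # [0.86, 0.14]

# Any single per-sub weight is a compromise between these two.
print(w_bg, w_gal)
```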

I presented the approach I came up with for doing that. As far as I know, no software other than the ImageJ plugin I've written supports it, and since I did not publish a paper on it, it is unlikely that any software will, unless the developers read this or similar discussions where I mention it :D

However, I've written about it previously and actually tested it on real data. I had a night of imaging when my RC was affected by dew (very poor conditions, and the only night that has happened, as it usually doesn't). Let me see if I can find that thread.

 

