
Dithering to prevent colour noise


smr


Hi,

Is this colour noise, and can it be cured by dithering? At the moment I am just using APT's default dithering values, and they have been fine for hot pixels etc. on other targets, but on this target there seems to be a lot of what I presume is colour noise.

If so, do I need to dither more aggressively to get rid of it, or is it a different type of noise (from over-stretching to see the nebula, etc.)?

 

[Attached image: dithering2.jpg]


3 minutes ago, tooth_dr said:

Is that the stacked image after processing?

Edit: just seen the last line of your post.

Photoshop's Camera Raw filter does a good job of reducing that colour noise.

Yes, it's a stack of three and a half hours. I was surprised by the amount of red that was picked up, actually, but then I read a post by Olly earlier today saying it's a broadband red, so it's easier to pick up.

I think I'll just leave the dithering as it is then, thanks.


Photoshop was used to reduce the colour noise on the right half of the image, with the left half as in the post above. Plus a GIF cycling between the original and the colour-noise-reduced image, both processed identically and enlarged 200%. Finally, a copy with some light processing. Has this helped with the colouration problem you were talking about?

 

Thanks

Adam.

 

[Attached image: right half colour-noise reduced, left half as in the post above]

[Attached GIF: Colour-Noise-GIF.gif, cycling between the original and the colour-noise-reduced version at 200%]

[Attached image: copy with light processing]


Thank you guys. I've decided I'm going to try and get as much data on this as I can. I've got 7 hours now and am seeing a difference in the data. I'm going to try and get 20-30 hours for the first time (longest I've gone is 7 hours on any DSO) to see what the result should look like then.


1 hour ago, smr said:

Thank you guys. I've decided I'm going to try and get as much data on this as I can. I've got 7 hours now and am seeing a difference in the data. I'm going to try and get 20-30 hours for the first time (longest I've gone is 7 hours on any DSO) to see what the result should look like then.

Look forward to seeing that. I actually did 2 hours on the Leo Triplet, then went up to 8 hours, and then added another 16 hours. The biggest difference was from 2 to 8. Going from 8 to 24 was worthwhile, but I would say for my DSLR 8 hours was the sweet spot.


2 minutes ago, tooth_dr said:

Look forward to seeing that. I actually did 2 hours on the Leo triplets and then up to 8 hours and then added another 16 hours.  The biggest difference was from 2 to 8. The 8 to 24 was worthwhile but I would say for my DSLR 8 hours was the sweet spot. 

Nice. Would you mind linking to the image? I'd like to see that.


14 minutes ago, tooth_dr said:

Look forward to seeing that. I actually did 2 hours on the Leo triplets and then up to 8 hours and then added another 16 hours.  The biggest difference was from 2 to 8. The 8 to 24 was worthwhile but I would say for my DSLR 8 hours was the sweet spot. 

Quick question, when you add extra data, do you re-stack the individual subs, or do you integrate the first stack with the extra individual subs?  


9 minutes ago, Scooot said:

Quick question, when you add extra data, do you re-stack the individual subs, or do you integrate the first stack with the extra individual subs?  

I restack everything into groups in DSS: night 1's data in group 1, night 2's in group 2, and so on.


20 minutes ago, smr said:

restack everything into the groups in DSS. So night 1 data group 1, night 2 group 2 etc.

Thanks. I've never used DSS; I use PixInsight. So I'd end up with a stacked image for night 1, a stacked image for night 2, and so on, and after 3 nights I'd end up with 3 stacked images to combine into one?


1 minute ago, Scooot said:

Thanks, I’ve never used DSS, I use Pixinsight. So I’d end up with a stacked image for night 1, & a stacked image for night 2. Etc. So after 3 nights I’d end up with 3 stacked images to combine into one?

Not the best way to do it.

Ideally you want all your subs stacked into a single stack. You can create a "mini stack" out of already-stacked data, but chances are it won't be optimally stacked, especially if you captured a different number of subs on different evenings.

Imagine you have 3 subs from the first evening and 5 subs from the second. You create a stack for each night, and the subs carry weights of roughly 1/3, 1/3, 1/3 in the first stack and 1/5, 1/5, 1/5, 1/5, 1/5 in the second.

Now you combine those two stacks and end up with weights of 1/6, 1/6, 1/6, 1/10, 1/10, 1/10, 1/10, 1/10 (each again divided by two), so the subs from the first night are "weighted" more heavily in the result than the subs from the second evening, when in reality you want the weights to be close to 8 x 1/8.

PI has weighting as an option, and it works better if you put all your subs "in the same basket" rather than creating separate averages and then weighing them against each other.

If for some reason you no longer have the individual subs and only the final stacks (still linear, without any processing), then yes, do it like that: create a "mini stack" out of the 2 or 3 stacks from each night. The result might not be optimal, but it will still improve things.
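To put numbers on that, here is a minimal NumPy sketch (with made-up values, not anyone's real data) of why averaging two unequal nightly stacks weights the subs unevenly, while a single stack of all the subs weights each one equally:

import numpy as np

rng = np.random.default_rng(0)
signal = 100.0
night1 = signal + rng.normal(0.0, 10.0, size=3)   # 3 subs from the first night
night2 = signal + rng.normal(0.0, 10.0, size=5)   # 5 subs from the second night

# Averaging the two nightly averages: each night-1 sub effectively gets
# weight 1/6 and each night-2 sub weight 1/10.
stack_of_stacks = (night1.mean() + night2.mean()) / 2.0

# Stacking everything at once: every sub gets weight 1/8, which is the
# lower-noise combination when the subs are of equal quality.
single_stack = np.concatenate([night1, night2]).mean()

print(stack_of_stacks, single_stack)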


6 minutes ago, vlaiv said:

Not the best way to do it.

Ideally you want all your subs stacked into a single stack. [...]

Thanks vlaiv, well explained and good to know. I had two nights' worth earlier in the week and did it the way you suggested, but I wondered if there'd be a shortcut. :) 


22 minutes ago, vlaiv said:

Not the best way to do it.

Ideally you want all your subs stacked into a single stack. [...]

So stacking all the data from every night into group 1 would basically improve the final stacked image?


38 minutes ago, smr said:

So just stack all data from each night into group 1 basically improves the final stacked image?

I'm not sure how DSS works. I think the groups are calibration groups and it produces a single image at the end? If so, then continue using that approach. If you get three separate images as a result (provided you used 3 groups), then use just one group (but I'm not sure about calibration: can you add different calibration files to one group?).


11 minutes ago, vlaiv said:

Not the best way to do it.

Ideally you want all your subs stacked into a single stack. [...]

Hi vlaiv.  You've written something similar recently, and I tried it out.  I usually process and combine each night's data and then combine the sub-totals.  I did a comparison with a single process of several nights' images, and there was a very noticeable improvement.

 

The bit that I cannot understand is why a single batch of 21 subs of 600s should produce a better result than  3 "super-subs" of (7*600s).

 


1 minute ago, don4l said:

Hi vlaiv.  You've written something similar recently, and I tried it out.  I usually process and combine each night's data, and then combine the sub-totals.  I did a comparison with a single process of several night's images, and there was a very noticeable improvement.

 

The bit that I cannot understand is why a single batch of 21 subs of 600s should produce a better result than  3 "super-subs" of (7*600s).

 

What did you use to stack your data? I presume PI?

A noticeable improvement when you have same-sized batches usually comes from the fact that conditions differed from night to night.

Let's say that one night out of three had poor transparency. The subs from that particular night will differ somewhat from each other, but their weights will be similar, close to 1/7 each. The same goes for the other nights. When you then combine the stacks from all three nights, you end up with weights close to 1/21 for each sub.

If one night was poorer than the others, this is not optimal, since there can be a significant difference in quality between subs from different nights. If you put them all in one stack, each will be given an appropriate weight, so you can end up with something like 1/18 for the good subs and 1/25 for the poor subs; the poor subs then contribute less to the final result and the good subs more.

When you stack the stacks from each night instead, because the subs within each night are close in quality to one another, every sub ends up contributing about the same to the final result, and that is not what you want if the subs vary in quality.

That is one part of the explanation.

The other part relates to the use of sigma-clip stacking. The more subs you have, the better sigma clipping works. Let's say sigma clip "decides" to reject one or two pixels: in a stack of 7 subs that can leave you with only 5 subs stacked, a 40% difference! In a stack of 21 subs you are left with 19 subs, maybe even 18 if sigma clip rejects three values, but this time it is only a 16% "loss". The more subs you have, the better sigma clipping behaves.

It really boils down to your stacking algorithm. If you have an equal number of subs per group, like your 3 x 7, and you don't use any advanced stacking methods but a simple average instead, the result will be the same. On the other hand, if you have mismatched groups, say 8 subs on the first night, 7 on the second and 6 on the third, a straight average of the nightly stacks will give worse results than the advanced methods; but in general the best approach is to stack the individual subs.
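If you are curious what the rejection step actually does, here is a minimal single-pass sigma-clipping sketch in NumPy; it is a simplified illustration of the idea rather than the exact algorithm any particular package uses:

import numpy as np

def sigma_clip_stack(subs, kappa=2.5):
    # subs: array of shape (n_subs, height, width), already registered.
    # Reject, per pixel, any value more than kappa standard deviations
    # from the per-pixel median, then average what remains.
    subs = np.asarray(subs, dtype=float)
    centre = np.median(subs, axis=0)
    spread = subs.std(axis=0)
    keep = np.abs(subs - centre) <= kappa * spread
    kept_sum = np.where(keep, subs, 0.0).sum(axis=0)
    kept_count = keep.sum(axis=0)
    return kept_sum / np.maximum(kept_count, 1)

# With 7 subs, rejecting 2 values at a pixel leaves only 5 samples for the
# average; with 21 subs, rejecting 3 still leaves 18, so the clipped mean is
# based on far more data and the rejection decision itself is more reliable.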


27 minutes ago, vlaiv said:

Not sure how the DSS works. I think groups are calibration groups and it produces single image at the end? If so, then continue using that approach. If you get three separate images as result (provided that you used 3 groups) then use one group (but not sure about calibration - can you add different calibration files to one group?)

Yes, it produces a single image at the end. I'm not sure it would work as well if I imported multiple nights' worth of data into one group, though. How would it know which flats belonged to which night, for instance? Or would that even matter?


1 minute ago, smr said:

Yes it produces a single image at the end. I'm not sure if it would work as well by importing multiple nights worth of data into one group. How would it know which flats belonged to which night for instance? Or would that even matter?

Yes, if it produces a single image at the end, then you are doing it right; groups in DSS are "calibration" groups rather than separate stacks.


4 minutes ago, vlaiv said:

What did you use to stack your data? I presume PI?

A noticeable improvement when you have same-sized batches usually comes from the fact that conditions differed from night to night. [...]

Thank you for that.  Your comment about different conditions on different nights makes sense to me.

I use CCDStack to calibrate, align and stack.  I tried PI some years ago and couldn't understand it at all.

I'm using "STD sigma reject" for the data rejection.  I presume this is CCDStack's name for  sigma clip.


On 28/08/2019 at 23:47, don4l said:

 

The bit that I cannot understand is why a single batch of 21 subs of 600s should produce a better result than  3 "super-subs" of (7*600s).

 

Sigma rejection identifies rogue pixels (e.g. plane trails) in a single sub and gives them the average value found in the rest of the stack.  But how can it identify the rogues? By comparing them with the values for those pixels in the other images. The more 'other images' there are, the better the rogues will be identified as rogues. If the 'other images' are combined into mini-stacks first, there will not be as many of them available for identifying rogues.

Maybe this analogy would also work to some extent, but we'd need to ask a mathematician. Toss a coin 100 times and you can be pretty sure you'll get something very close to 50/50, which is the statistical norm. Now toss the coin 25 times and note whether it gave you a heads or a tails majority. Repeat this three more times. You'll now have, maybe, 2 heads majorities and 2 tails majorities, giving the same 50/50 result, but it would be no great surprise to find it gave you 3 heads and 1 tails, or even 4 tails. In imaging we are after the best possible statistically calculated value for each pixel, so we should do the equivalent of counting the 100 individual throws and not the 4 groups of 25.
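A quick NumPy simulation of that coin analogy (assuming a fair coin; it is only a loose analogue of the stacking statistics, not an exact model) shows how much noisier the grouped estimate is:

import numpy as np

rng = np.random.default_rng(1)
throws = rng.integers(0, 2, size=(10000, 100))        # 10,000 trials of 100 tosses

# Estimate the heads fraction by counting all 100 throws...
full_estimate = throws.mean(axis=1)

# ...or by taking the majority verdict of four groups of 25 throws.
groups = throws.reshape(10000, 4, 25).mean(axis=2)
majority_estimate = (groups > 0.5).mean(axis=1)

# The grouped estimate scatters far more widely around 0.5.
print(full_estimate.std(), majority_estimate.std())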

Regarding colour noise reduction, the key thing is to apply it only where you need it. The same applies to any processing intervention, be it saturation, sharpening or noise reduction. PI uses masks, which can also be created in Ps. However, Ps (and presumably other like-minded programs) has a much easier way: I'd use the 'Colour Select' tool to pick up just the background sky, then reduce the saturation, apply colour noise reduction or whatever, and save that.  It would be a shame to mute the colour everywhere.

Olly
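As a rough stand-in for the masked approach Olly describes (a NumPy sketch with an arbitrary luminance threshold, not the Photoshop workflow itself), the idea is to build a background-sky mask and pull only the masked pixels towards grey:

import numpy as np

def reduce_background_chroma(rgb, threshold=0.15, strength=0.8):
    # rgb: float array of shape (H, W, 3) with values in [0, 1].
    # Build a crude "background sky" mask from luminance, then blend the
    # masked pixels towards their grey (luminance) value - a stand-in for
    # desaturating or colour-noise-reducing only the selected background.
    lum = rgb @ np.array([0.2126, 0.7152, 0.0722])
    mask = (lum < threshold).astype(float)[..., None]
    grey = np.repeat(lum[..., None], 3, axis=2)
    return rgb * (1.0 - strength * mask) + grey * (strength * mask)

The bright nebulosity and stars sit above the threshold, so their colour is left untouched, which is the whole point of masking.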


On 28/08/2019 at 22:44, vlaiv said:

Not the best way to do it.

Ideally you want all your subs stacked into a single stack. [...]

But could you not just merge the stacks with different weights? In PS I would layer the 3-sub stack on top of the 5-sub stack and then give that layer 3/8 = 37.5% opacity. Maybe I am missing something (as usual).


1 hour ago, gorann said:

But could you not just merge the stacks with different weights. I PS I would layer the 3 sub stack on top of the 5 sub stack and then give that layer 3/8 = 37.5% transparency. Maybe I am missing something (as usual).

Yes, you can, if you know the quality of each sub in comparison with all the other subs. Transparency changes from night to night (and even during the course of one night), and although you know the number of subs in each separate stack and their relative weights within one night (for example 1/3, 1/3, 1/3, because they are of the same quality), you have no way of knowing how the quality of the subs from one night compares with the subs from another night.

Weighting works by comparing all the subs in the stack and assigning each a weight based on its relative quality within the stack, so if you want proper weights with respect to all the data, put everything in one stack.

There is also the matter of the other algorithms we use, like sigma clipping; read the earlier posts to get an idea of how it works, but in principle the more subs you have, the better the statistics of the data and the more precise statistical methods like sigma clipping become.
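As a rough illustration of that kind of relative weighting, here is a sketch of inverse-variance weighting across all the subs, with the noise estimated from a patch assumed to be blank background; this is one simple scheme for illustration, not how PixInsight's own weighting is implemented:

import numpy as np

def inverse_variance_stack(subs):
    # subs: array of shape (n_subs, height, width), already registered.
    subs = np.asarray(subs, dtype=float)
    # Crude per-sub noise estimate from a corner patch assumed to contain
    # only background sky (a big assumption for a real image).
    noise = subs[:, :64, :64].std(axis=(1, 2))
    weights = 1.0 / (noise**2 + 1e-12)
    weights /= weights.sum()          # weights are relative to ALL subs
    return np.tensordot(weights, subs, axes=1)

# Because each weight is computed relative to every other sub, frames from a
# poor night automatically count for less; two pre-made nightly averages hide
# that per-sub information, which is why merging them at a fixed 3/8 : 5/8
# ratio cannot recover it.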


13 minutes ago, vlaiv said:

Yes, you can, if you know the quality of each sub in comparison with all the other subs. [...]

Of course, but my idea was an approximation for the lazy person who does not feel like stacking everything again; obviously, though, stacking everything is the best way to do it.


Might be worth having a look at this if you're using PI, or even if you're not, to understand the general principles of weighting.

https://www.lightvortexastronomy.com/tutorial-pre-processing-calibrating-and-stacking-images-in-pixinsight.html#Section5

The bottom line is that if you combine two stacked masters containing 20 subs each, simplistically you'd weight them 50:50. But what if 10 of the subs in the first master are "poor" and only 2 of the subs in the second master are? Would you still think 50:50 is the correct weighting? The only practical way to ensure correct weighting is to stack everything in one go.
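As a toy version of that example (assuming, purely for illustration, that a "poor" sub deserves half the weight of a good one), the fair split between the two masters works out nowhere near 50:50:

# Hypothetical numbers for illustration only.
good_weight, poor_weight = 1.0, 0.5

master1 = 10 * good_weight + 10 * poor_weight   # 10 good + 10 poor subs -> 15.0
master2 = 18 * good_weight + 2 * poor_weight    # 18 good + 2 poor subs  -> 19.0

total = master1 + master2
print(master1 / total, master2 / total)          # ~0.44 : 0.56, not 50:50

# And even this assumes you already know which subs were poor; only stacking
# the individual subs lets the software measure that for itself.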

