
New algorithm for subs normalization - minimizes LP gradient - first test results



Hi all,

Just wanted to share the first (promising) test results from a new algorithm that I've implemented.

The algorithm deals with normalizing / equalizing subs prior to stacking. Because the target position changes over the course of a shooting session, we end up with frames that differ a bit from one another. Each frame has some (usually different) amount of LP, depending on the target's position relative to LP sources, and the recorded signal also depends on the target's altitude at the time due to atmospheric extinction. It can also vary with transparency, fog or passing clouds.

For advanced stacking methods to work well, frames need to be adjusted - the signal multiplied by an appropriate coefficient and the LP subtracted. The simple average method does not need this, but algorithms like sigma clip or auto adaptive weighted average do. In the general case we can't determine the proper values absolutely, but we can normalize each frame against a reference frame.
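
To make the model concrete, here is a minimal numpy sketch of that kind of normalization (this is only an illustration of the idea, not my actual implementation - the function name and the plain least-squares fit are just for the example; in practice you would want a robust fit that ignores stars and outliers):

```python
import numpy as np

def normalize_to_reference(frame, reference):
    """Fit frame ~ scale * reference + offset by least squares,
    then invert the fit so the sub matches the reference frame."""
    A = np.column_stack([reference.ravel(), np.ones(reference.size)])
    scale, offset = np.linalg.lstsq(A, frame.ravel(), rcond=None)[0]
    return (frame - offset) / scale

# e.g. stack = np.mean([normalize_to_reference(f, ref) for f in subs], axis=0)
```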

My new algorithm deals with this, and as a bonus it has the capability to minimize the LP gradient, depending on the reference frame selected. So if a reference frame with a low LP gradient is selected, the whole stack will match that level of LP gradient. In the ideal case, if we had a single frame without LP, all other frames would be adjusted accordingly.
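
The gradient-matching part can be pictured as extending the constant offset above to a plane, so that each sub is left with only the gradient already present in the reference. Again, this is just an illustrative sketch (assuming the LP in a single sub is well approximated by a linear plane), not the actual code:

```python
import numpy as np

def normalize_with_plane(frame, reference):
    """Fit frame ~ scale * reference + (a*x + b*y + c) and subtract the
    fitted plane, leaving only the LP gradient that is already present
    in the reference frame."""
    h, w = frame.shape
    y, x = np.mgrid[0:h, 0:w]
    A = np.column_stack([reference.ravel(), x.ravel(), y.ravel(),
                         np.ones(frame.size)])
    scale, a, b, c = np.linalg.lstsq(A, frame.ravel(), rcond=None)[0]
    return (frame - (a * x + b * y + c)) / scale
```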

For the test case I've chosen some data I shot about a year ago (on December 30th 2016, actually). The target was M78 and conditions were less than ideal - a lot of fog/haze (and smoke from neighborhood chimneys) and the usual amount of LP, really showing because of all the problems with transparency.

The data was shot with an OSC camera, and I've selected the green channel for the test. The data is of low quality, hence all the noise in the examples - it is not due to the algorithm :D

I've chosen 3 different reference frames to normalize against: one that I judged as having the least LP gradient, one with an LP gradient somewhere "in the middle" of the bunch, and one with a rather high LP gradient. After normalization I used basic mean / average stacking, and did a basic exposure stretch in Gimp. Here are the results:

Stack of frames normalized based on frame with least LP gradient:

least_gradient.png

Medium LP gradient:

mid_gradient.png

High LP gradient:

most_gradient.png

And finally, just for reference, a composition of these 3 stacks, because the difference in LP level is a bit difficult to judge unless you blink the images (then it is really obvious). From left to right: the least, medium, and most LP gradient reference frame:

composition.png

To finish, I would like to stress that this is not a "gradient removal tool" - it does not operate on already stacked images; it is part of the preparation of subs for stacking. It will not fully remove the LP gradient, it will reduce it to the level found in the reference frame - so the reference frame needs to be chosen accordingly.

Any comments and questions are welcome,

Thanks for looking.


12 hours ago, vlaiv said:

I used basic mean / average stacking

Hi Vlaiv,

Trying to get my head around this! I am no expert in stacking algorithms but do have a professional interest in image processing. I think I understand what you are doing here. I notice you said that you used average stacking for the examples but in your description you said "Simple average method does not need this". Would the examples therefore be more representative if you had used Sigma Clipping?

Many thanks

Vern


45 minutes ago, vernmid said:

Would the examples therefore be more representative if you had used Sigma Clipping?

No, not really - both sigma clipping and auto adaptive weighted average address issues other than the gradient, so any stacking algorithm is equally well suited to demonstrate LP gradient removal, because it is a feature of the frame normalization rather than of the stacking process / algorithm.

LP gradient is "smooth" noise - it is noise by definition (unwanted signal), but it differs in nature from "ordinary" noise: it has structure and it is not random. It does have a random component associated with it, as all signal does (shot noise).

The sigma clip algorithm deals well with outliers - if one or a couple of frames contain something that the other frames don't, like a hot pixel, a cosmic ray, or an airplane passing through the frame, sigma clip will remove it, because such pixels are effectively outliers - they don't fall well within the normal distribution.
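
For reference, a per-pixel kappa-sigma clip stack looks roughly like this (illustrative numpy sketch only - the kappa value and iteration count are arbitrary):

```python
import numpy as np

def sigma_clip_stack(frames, kappa=3.0, iterations=3):
    """Per-pixel kappa-sigma clipping: repeatedly mask values further
    than kappa standard deviations from the per-pixel mean, then
    average what survives. 'frames' is an array of shape (n, h, w)."""
    data = np.ma.masked_invalid(np.asarray(frames, dtype=float))
    for _ in range(iterations):
        mean = data.mean(axis=0)
        std = data.std(axis=0)
        data = np.ma.masked_where(np.abs(data - mean) > kappa * std, data)
    return data.mean(axis=0).filled(np.nan)
```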

Auto adaptive weighted average - well, I'm not sure about that one, because I have not read the paper on it and I don't know the implementation details, but I've gone by its name and my own idea of what I would do to understand what that algorithm is about. It is about stacking images with different levels of SNR: a noisier image should contribute less to the final stack than a less noisy one, and this algorithm addresses that. If you think about it, two frames with different levels of LP will have different SNR even if gradients are not present, because more LP signal means more shot noise associated with that LP signal while the target signal remains roughly the same - so different SNRs.
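
One plausible, simplified reading of that kind of weighting (again, not the actual algorithm, just an illustration) is to weight each sub by the inverse of its estimated noise variance:

```python
import numpy as np

def noise_weighted_stack(frames):
    """Weight each sub by the inverse of its estimated noise variance,
    so noisier subs (e.g. those with more LP shot noise) contribute less."""
    frames = np.asarray(frames, dtype=float)
    # crude robust noise estimate per sub: MAD scaled to sigma
    sigma = np.array([1.4826 * np.median(np.abs(f - np.median(f)))
                      for f in frames])
    weights = 1.0 / sigma ** 2
    return np.average(frames, axis=0, weights=weights)
```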

Both of the above algorithms, along with average stacking, operate at the pixel level, so they are local in nature (they do not consider the image as a whole). Normalization of frames, especially gradient matching, operates on the image as a whole - you can't conclude anything about a gradient from a single pixel.

Both of the above algorithms also depend on the pixel values having a distribution as close to Gaussian as possible. Any kind of offset / signal multiplier between frames skews the per-pixel distribution, hence lowering the performance of these algorithms. Simple average is not affected by this in terms of SNR - for example, any offset added to each image just ends up in the final image as the average offset. But LP gradient minimization benefits average stacking as well: since the final stack will contain the average of all the LP gradients, it can happen that the final LP gradient is no longer linear and is thus much harder to remove in post processing. With LP gradient minimization - or better, "normalization" - each gradient is "aligned" to the gradient in the reference frame, so the average of the gradients will still be linear in nature and easier to remove in post processing.

So for a demonstration of LP gradient minimization, any stacking method is good enough to show the effect.

On the other hand, to show that normalization improves the stacking performance of these algorithms, one would need to run a different set of tests. Ideally we would have access to a pure signal image, and a set of noisy images containing that signal in the following way: Signal * some random coefficient + Offset (the offset need not be constant over the image but can be a linear gradient with random x and y slope), all of it "spiced" with appropriate noise (Poisson for signal/LP and added Gaussian to simulate read noise). Once we have such a set, and the pure signal image as reference, we can run the stacking algorithms against: a) non-normalized frames, b) naively normalized frames (such as bringing the median of each image to the same value) and c) my normalization implementation. For each of those normalizations we run the set of stacking algorithms (average, sigma clip, auto adaptive weighted average) and compare the resulting stack with the reference image - sum of squared differences, i.e. distance to the original - to see which method, with which normalization, gives the stack closest to the pure signal image.
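
A sketch of how such a synthetic test set could be generated and scored (all the numeric ranges here are arbitrary, and 'signal' is assumed to be a non-negative 2D array):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_noisy_sub(signal, read_noise=5.0):
    """One synthetic sub: Signal * random coefficient + random linear LP
    gradient, with Poisson shot noise and Gaussian read noise added."""
    h, w = signal.shape
    y, x = np.mgrid[0:h, 0:w]
    coef = rng.uniform(0.7, 1.3)                 # transparency variation
    lp = (rng.uniform(-0.02, 0.02) * x +
          rng.uniform(-0.02, 0.02) * y +
          rng.uniform(30, 80))                   # LP level + random slope
    expected = np.clip(coef * signal + lp, 0, None)
    return rng.poisson(expected) + rng.normal(0.0, read_noise, signal.shape)

def distance_to_signal(stack, signal):
    """Sum of squared differences between a stack and the pure signal."""
    return float(np.sum((stack - signal) ** 2))

# e.g. subs = [make_noisy_sub(signal) for _ in range(30)], then stack with
# each normalization + stacking combination and compare distance_to_signal.
```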

