
Getting the most out of the data captured ...



I want to present and discuss the results of my recent work on algorithms to get the most out of captured data.

I've been doing NB imaging of the Bubble Nebula, and conditions were not that favorable. I had LP to contend with (in spite of NB filters, LP is strong in this young one ...), as well as changes in local transparency and fogging of the secondary mirror (very moist evenings with high pressure - the first time I've had the secondary on my RC fog up).

LP is strongest in one direction - the city center. I live in the south-west part of town, close to the river, and LP is at its weakest towards the south / south-west. Due to the target's position at this time of year, I was forced to shoot early in the evening - right after astro dark started - which is also bad: most household lights are still on. Here is an example (center crop, calibrated and aligned) of 3 distinctly different subs of the same target taken on the same night. The left one is from the beginning of the session - the target was in the direction of the most LP. The middle one is right after the meridian flip, so LP had died down quite a bit, but another problem (which I had not noticed) started to show - the secondary fogging up. The right frame is from the end of the session (the least LP, but the secondary pretty much completely fogged up).

[Image: base_frames_comparison.jpg]

Subs are shown as they are, without any processing or normalization - linearly stretched to show the faint signal captured (and scaled to 50% for easier viewing).

The first step in optimizing the data (after calibration, and some tweaks to the bin x2 process) was to apply the "TTLR" algorithm. I bin x2 to get an effective resolution of around 1"/pixel, and this time I used "smart" binning that accounted for FPN in the bias. I will need to develop this further - at the moment it is just "a hunch" technique without precise calculations to maximize SNR: it only looks at noise in the read / dark subs and not at the full SNR "picture", since no signal information is available yet.

TTLR just stands for "Tip-Tilt Linear Regression", and it is something I developed to normalize frames and equalize any background gradients present in the image. Here is the result of applying it to the 3 selected frames.

[Image: normalized_frames_comparison.jpg]

As you can see, the algorithm works pretty well despite the low SNR of the last image. It also shows how much noise there is in the last image compared to the other two - when the signal is equalized, noise tends to pop out in low-SNR images. Again, the images are linearly stretched as above and scaled down 50% for easier viewing.

TTLR works like this:

A reference frame is selected and the other frames are normalized against it. It is an iterative process. First, a solution is found for a linear function that minimizes the difference between the reference and the linearly scaled and shifted frame to be matched. For this I use a simple least squares method, but not on individual pixels - that works poorly in low-SNR scenarios, as the signal is too weak for the algorithm to pick up. Instead, I bin the image onto a grid of values (taking the average in "zones" that are cells of an NxN grid placed over the image) - N is an adjustable parameter. The rationale is that the signal from the target is multiplied by a certain coefficient - attenuation due to air mass / transparency / etc. - and a certain level of LP signal is added. So the resulting image value has the form a*X+b - a linear function.

The next step is to take the residual - reference frame minus scaled target frame - and fit a planar function to it. This is the Tip-Tilt phase, which tries to match the background gradients of both images. After the gradient has been accounted for, we return to step 1. In this iteration we can determine the linear a and b more precisely. After that we again do the Tip-Tilt phase, and so on. After just a couple of rounds the values start to stabilize. The above example was done with 8 rounds on an 8x8 grid for the linear phase.
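To make the two-phase iteration concrete, here is a minimal numpy sketch of how it could look. All names and parameters here are mine (the actual implementation is an ImageJ plugin, not shown in the post), and this is a simplified single-channel version, assuming both frames are already calibrated and aligned:

```python
import numpy as np

def ttlr_normalize(ref, frame, grid=8, rounds=8):
    """Sketch of a Tip-Tilt Linear Regression style normalization:
    alternately fit (a, b) so that a*frame + b matches ref on zone
    averages, and fit a plane to the residual background gradient."""
    h, w = ref.shape
    gh, gw = h // grid * grid, w // grid * grid

    def zone_means(img):
        # average pixels inside each cell of a grid x grid layout
        return img[:gh, :gw].reshape(grid, gh // grid,
                                     grid, gw // grid).mean(axis=(1, 3))

    yy, xx = np.mgrid[0:h, 0:w]
    # design matrix for a planar (tip-tilt) fit: z = c0*x + c1*y + c2
    P = np.column_stack([xx.ravel(), yy.ravel(), np.ones(h * w)])
    plane = np.zeros((h, w))
    a, b = 1.0, 0.0
    for _ in range(rounds):
        # linear phase: least squares on zone averages, not raw pixels
        fz = zone_means(frame).ravel()
        rz = zone_means(ref - plane).ravel()
        A = np.column_stack([fz, np.ones_like(fz)])
        (a, b), *_ = np.linalg.lstsq(A, rz, rcond=None)
        # tip-tilt phase: fit a plane to the remaining residual
        resid = ref - (a * frame + b)
        coef, *_ = np.linalg.lstsq(P, resid.ravel(), rcond=None)
        plane = (P @ coef).reshape(h, w)
    # frame matched to the reference's signal scale and gradient
    return a * frame + b + plane
```

On a synthetic frame built as a scaled, offset, and tilted copy of the reference, a few rounds recover the reference almost exactly; with real noisy data the zone averaging is what keeps the linear fit stable at low SNR.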

Now that we have subs matched in terms of signal strength, but with varying noise due to differences in SNR, we can think about how to combine them. A simple average (or sigma clip based on the average) is not the optimal approach - it only works well when the SNR of the subs is matched. I therefore developed a stacking algorithm that accounts for different levels of noise. Here is how it works:

It is again an iterative method. To determine the weight of each sub in the weighted average, we need to assess the level of noise in that particular sub. Since we don't have the exact signal, we use the next best thing - we know that a stack of frames will have much better SNR. So we set our initial weights to equal parts and compute the weighted average. Then, for each sub, we look at the standard deviation of weighted_average - particular_sub. Since we assume that both have equal signal, and that the weighted average has much lower noise, the difference of the two is approximately the noise of that particular sub (not quite, but the best we can do). In this way we determine the noise of each sub and, with a bit of math (a simple expression for total noise, minimized by setting its first partial derivatives to zero - a system of equations and a bit of matrix math), we can determine the weights for each sub. Now we can proceed to the next iteration: in the first round we assumed the straight average had the best SNR, but now we have a better approximation to it thanks to the new weights. And we rinse and repeat - each time getting a better estimate of the noise in each sub.
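A small numpy sketch of this iteration (names are mine, not from the plugin). The noise-minimization step reduces to inverse-variance weights when the per-sub noise estimates are treated as independent. One deviation from the description above, flagged plainly: I estimate each sub's noise against a stack that *excludes* that sub, because comparing a sub against a stack that contains it lets the sub's own noise partially cancel, which can make high-weight subs look ever less noisy and destabilize the iteration:

```python
import numpy as np

def noise_weighted_stack(subs, rounds=5):
    """Iterative noise-weighted average of aligned, normalized subs.
    Returns (stacked_image, final_weights)."""
    subs = np.asarray(subs, dtype=float)
    n = len(subs)
    w = np.full(n, 1.0 / n)  # round 1: plain average
    for _ in range(rounds):
        stack = np.tensordot(w, subs, axes=1)  # current weighted average
        sigma = np.empty(n)
        for i, s in enumerate(subs):
            # leave-one-out stack, so the sub's own noise doesn't cancel
            loo = (stack - w[i] * s) / (1.0 - w[i])
            sigma[i] = np.std(loo - s)  # ~ noise of this sub
        # minimizing total stack variance with sum(w)=1 gives
        # inverse-variance weights
        w = 1.0 / sigma**2
        w /= w.sum()
    return np.tensordot(w, subs, axes=1), w
```

With three pure-noise subs of sigma 1, 2, and 4, the weights come out strongly ordered in favor of the cleanest sub, and the stack's noise lands well below that of a straight average.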

The above is a general description of the process, but there is another component to it - not every part of the image has the same noise, so the weights in high-signal areas will differ from those in low- or no-signal areas (some of the noise comes from shot noise of the target). For this reason we apply the above logic separately to regions that have the same (or very similar) signal levels. This is again done by taking the master average and then splitting the image into zones based on signal level - in this particular case I used 24 masks with a quadratic distribution of signal levels.
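The per-zone variant can be sketched as follows. This is a self-contained toy (function name and structure are my own, and it reuses the same leave-one-out noise estimate rather than the full-stack comparison described above): the same iterative weighting runs independently inside each signal-level mask, so bright regions and background get their own weight sets:

```python
import numpy as np

def zoned_stack(subs, masks, rounds=3):
    """Noise-weighted stacking applied per signal-level zone.
    subs:  array-like of aligned, normalized frames (n, H, W)
    masks: list of boolean (H, W) arrays partitioning the image"""
    subs = np.asarray(subs, dtype=float)
    out = np.zeros(subs.shape[1:])
    for mask in masks:
        vals = subs[:, mask]            # (n_subs, n_pixels_in_zone)
        n = len(vals)
        w = np.full(n, 1.0 / n)
        for _ in range(rounds):
            stack = w @ vals
            # per-sub noise vs. a leave-one-out stack within this zone
            sigma = np.array([np.std((stack - w[i] * v) / (1.0 - w[i]) - v)
                              for i, v in enumerate(vals)])
            w = 1.0 / sigma**2
            w /= w.sum()
        out[mask] = w @ vals            # assemble the result zone by zone
    return out
```

Each zone needs enough pixels for the standard deviation to be a meaningful noise estimate, which is exactly the trade-off discussed further below.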

Here is the result of above mumbo jumbo:

[Animated GIF: NBWA_comparison.gif]

It is an animated GIF composed of two frames - the first is the result of the above algorithm, and the second is a straight average. It can be seen that SNR is improved with the new approach.

Even some of the hot pixels are less visible - probably because they appear in low-contribution frames (low weight, i.e. high noise).

At the moment the algorithm does not include sigma clipping for anomalous pixels (hot pixels, satellite trails, airplanes, etc.), but I will be working on it.

Another thing I still have not figured out is how best to subdivide the image into areas of similar signal. Currently I've implemented straight linear and quadratic approaches: the current weighted-average image is inspected, its min and max are found, and that range is subdivided into smaller intervals - a zone is then created from all pixels falling into a particular sub-interval. The sub-intervals are either linear (same width) or quadratic, meaning the low intervals are the narrowest and they get wider and wider as signal strength goes up - interval widths in a 1:4:9:16 ... ratio. The above result uses the quadratic approach.

Another possible method would be to sort the pixel values in the image and subdivide the zones so that each zone contains a certain number of pixels. I'm not yet sure what the best approach is: we want the signal within a zone to be the same, but we also want each zone to contain enough pixels so that the noise statistics / weights for each sub can be effectively calculated.
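The three subdivision strategies can be sketched in a few lines of numpy (a hypothetical helper, not the plugin's code). "quadratic" uses interval widths in the 1:4:9:16 ratio mentioned above, and "equal" is the sorted, equal-pixel-count alternative, which is just a quantile split:

```python
import numpy as np

def make_zone_masks(img, n_zones=24, mode="quadratic"):
    """Split an image into signal-level zones.
    'linear'    - equal-width intervals between min and max
    'quadratic' - interval widths in 1:4:9:16:... ratio
                  (narrowest zones at the faint end)
    'equal'     - each zone holds roughly the same number of pixels"""
    if mode == "equal":
        # quantile edges give equal pixel counts per zone
        edges = np.quantile(img, np.linspace(0.0, 1.0, n_zones + 1))
    else:
        if mode == "quadratic":
            widths = np.arange(1, n_zones + 1, dtype=float) ** 2
            t = np.concatenate([[0.0], np.cumsum(widths) / widths.sum()])
        else:  # linear
            t = np.linspace(0.0, 1.0, n_zones + 1)
        edges = img.min() + t * (img.max() - img.min())
    # assign every pixel to the interval its value falls into
    idx = np.clip(np.digitize(img, edges[1:-1]), 0, n_zones - 1)
    return [idx == k for k in range(n_zones)]
```

The masks always partition the image, so either scheme can feed directly into per-zone weighting; the open question from the post - uniform signal per zone vs. enough pixels per zone for good noise statistics - is exactly the choice between the value-interval modes and the "equal" mode.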

That's it for now. I need to reprocess the other channels with this method and do some additional tweaks - like including sigma reject, and maybe feeding signal strength back into the adaptive bin method for even higher precision there - and finally a complete rework of this NB target.

BTW, all of this has been implemented as a plugin for ImageJ / Fiji, not as custom software (yet? :D ).


Just now, Wiu-Wiu said:

Looks like a valuable script to be added to PI! Great job!

I don't have an instance of PI, but I will look to make the ImageJ plugins public (sort of open-source them after some polishing), so someone could do a port to a PI script.
