
Boosting stack integration with artificially created subs - would it work?


tomato

Recommended Posts

I’m still clouded out here, so I’ve had some thoughts on how the recent AI-based advances in processing could be applied to the problem of insufficient data:

I know you can’t simply copy a single sub 100 times and stack them, because the noise element of the pixels would not be random. But what if we had, say, 100 real subs and analysed them with some form of AI-based algorithm to separate the pixels containing a signal element from those that are pure background? The signal pixels would be copied to a new image and, crucially, the background pixels would be filled in with randomly generated values, with the baseline level determined by analysing the background pixels in the original subs.

More subs could then be generated on this basis, each with its own randomly generated noise element, so that they could be added to the real subs in a larger integration.
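(As a toy numpy sketch of that first point - why copies of a single sub don’t help but genuinely independent subs do - with purely made-up numbers, not anything measured:)

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 10.0          # true pixel value (arbitrary units)
noise_sigma = 20.0     # per-sub noise, deliberately larger than the signal
n_subs = 100

# (a) 100 genuinely independent subs: the noise averages down as sqrt(N)
independent = signal + rng.normal(0, noise_sigma, n_subs)
stack_a = independent.mean()

# (b) one real sub copied 100 times: the noise is identical in every copy,
#     so averaging changes nothing
single = signal + rng.normal(0, noise_sigma)
copies = np.full(n_subs, single)
stack_b = copies.mean()

print(f"independent stack: {stack_a:.2f} (residual noise ~ {noise_sigma/np.sqrt(n_subs):.2f})")
print(f"copied-sub stack : {stack_b:.2f} (residual noise ~ {noise_sigma:.2f})")
```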

Leaving aside the ethics of this exercise for the moment, my questions are:

1. Is the principle sound?
2. Is it technically possible, i.e. could the software be trained to distinguish the signal element of the pixels from the background?


In PS you can automate opening a file, applying a mild noise or noise-reduction filter, then moving on to the next one and repeating. As each image is different, the application of the noise should add randomisation.

I don't know if it will work, though, as the signal needs to be received from a source (for simplicity, say a single atom), and it builds up by that single atom constantly emitting light that is received by the camera sensor. Randomising this muddles up the received signal.

You do get a slight boost when processing images by duplicating the finished layer and changing the blend mode, but it's only very, very slight, and sometimes you can do without it.
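(A quick toy example of why adding fresh noise to duplicates of one sub doesn't buy anything - made-up numbers, just a sketch: the added noise averages away, but the original sub's noise is common to every copy and survives untouched.)

```python
import numpy as np

rng = np.random.default_rng(1)
signal, noise_sigma = 10.0, 20.0
n_copies = 100

single_sub = signal + rng.normal(0, noise_sigma)      # one real exposure

# Duplicate it, add a little fresh random noise to each copy, then average.
jittered = single_sub + rng.normal(0, 5.0, n_copies)
stack = jittered.mean()

print(f"single sub    : {single_sub:.2f}")
print(f"jittered stack: {stack:.2f}")   # essentially the same value, same SNR
```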


I think you could get an AI to do a better job of identifying the signal from a stack of subs, as it could consider the neighbouring pixels and spot structure emerging, but I don’t see what advantage there would be in then stacking it back in with the original subs; the AI frame would be your final image.


The idea is that if you could separate the signal element of the image, which is not random, from the random noise element, you could then make (in theory) an infinite number of artificial subs to stack, although in reality only a finite number would be needed.

I’m pretty sure there is a flaw in this argument, but I’d like to know what it is.


No, it would not work.

If you can identify the "signal", then you don't need to stack - you are done. You create an image consisting of pure signal, and you have an image with infinite SNR.

The problem is that you can't completely identify the signal, or even the "pixels containing signal". For pixels with high SNR it is fairly easy to say that they contain signal - their value will be obviously larger than the average background value. With other pixels you simply won't know whether they contain just noise or some useful signal as well. The thing is, every pixel in the image will contain some signal, but most of it is probably unwanted signal in the form of light pollution. To detect the true signal superimposed on that background sky glow, you need the noise to be low enough that the variation between the two is mostly signal and not mostly noise.

In any case, if the SNR is low enough, there is no AI that will be able to pick out the signal, as the signal itself won't be distinguishable from the noise.
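(A rough illustration of that last point - a toy simulation with made-up numbers, not real data: background pixels at a sky level of 100 with noise sigma 20, and "target" pixels with a faint extra signal of 5, i.e. SNR 0.25 per sub. Even a best-case threshold can barely separate them.)

```python
import numpy as np

rng = np.random.default_rng(2)
sky, sigma = 100.0, 20.0      # background level and noise sigma (arbitrary units)
faint = 5.0                    # faint target signal, SNR = 0.25 per sub
n = 100_000

background_px = rng.normal(sky, sigma, n)
target_px     = rng.normal(sky + faint, sigma, n)

# Best-case per-pixel classifier: call anything above the midpoint "signal".
threshold = sky + faint / 2
hits   = np.mean(target_px > threshold)       # true detections
false  = np.mean(background_px > threshold)   # background wrongly kept as signal

print(f"target pixels above threshold    : {hits:.1%}")
print(f"background pixels above threshold: {false:.1%}")
# Both land close to 50% - per pixel, the two populations are
# statistically almost indistinguishable at this SNR.
```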


It might not be obvious from my previous answer why this is so, so I'll try to explain it more directly.

Imagine we have 4 types of pixels in the image after the AI has examined it:

1. Signal, correctly identified as signal by the AI

2. Signal, misidentified and labeled as noise by the AI

3. Noise, correctly identified as noise by the AI

4. Noise, misidentified as signal by the AI

You then propose that pixels of types 1 and 4 are left as is - or rather "copied" into the new subs from the existing subs - while pixels of types 2 and 3 are replaced by pure noise (at background level) in the artificial subs, right?

Let's examine what happens to each type of pixel in the final stack (original + artificial subs). By type:

1. Nothing happens here - we get the same SNR for these signal pixels, as stacking copies of subs won't affect the final SNR.

2. Here we get a drop in SNR, as we mix subs containing signal with subs containing pure noise.

3. Here we get an "improvement" in the noise, but the SNR is still 0, since there is no signal in these pixels. The result is much like running an averaging filter over them, or even replacing all of them with a constant value set to whatever we determine the background level to be.

4. Nothing happens here - same as type 1, except these pixels remain noise with SNR 0, and we don't even make them constant.

As you can see, out of the 4 types of pixels, the only one that gets anything we might call an improvement is pure background noise - and we don't need stacking for that. We can simply identify those pixels and set them all to the same value to get a very smooth background; but we can do that with denoising anyway, and the results don't look particularly good.
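(A toy numpy illustration of the type 2 case - made-up numbers, just to show the dilution: a pixel with faint real signal gets misclassified, so the artificial subs fill it with pure background. Mixing them in pulls the stacked value toward the background, and the SNR drops by roughly sqrt(2).)

```python
import numpy as np

rng = np.random.default_rng(3)
sky, sigma, faint = 100.0, 20.0, 5.0
n_real, n_fake = 100, 100

# Type 2 pixel: it genuinely contains faint signal in the real subs,
# but the classifier missed it, so the artificial subs get pure background.
real_subs = rng.normal(sky + faint, sigma, n_real)
fake_subs = rng.normal(sky, sigma, n_fake)

stack_real_only = real_subs.mean()
stack_combined  = np.concatenate([real_subs, fake_subs]).mean()

print(f"real subs only   : {stack_real_only:.2f}  (true value {sky + faint})")
print(f"real + artificial: {stack_combined:.2f}  (signal diluted toward {sky})")
# Signal is halved but the noise only improves by sqrt(2),
# so the SNR of this pixel is worse than stacking the real subs alone.
```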


Thanks for the comments. So with the type 2 pixels the rub is how well the AI can differentiate true signal from the noise. I guess BlurXterminator and NoiseXterminator have a similar problem, but they seem to make a reasonable job of it.

Ah well, back to waiting for the clouds to clear…


33 minutes ago, tomato said:

Thanks for the comments. So with the type 2 pixels the rub is how well the AI can differentiate true signal from the noise. I guess BlurXterminator and NoiseXterminator have a similar problem, but they seem to make a reasonable job of it.

Ah well, back to waiting for the clouds to clear…

Yes, that is the main problem - but neither of the two is capable of extracting a faint galaxy from the background noise if the SNR is below a certain threshold.

You can try it yourself - just take one of your images that contains a faint background galaxy, one that only shows in the stack. Look at a single sub and see if you can spot that faint galaxy. If you can't, that's a prime candidate for testing whether either of the two AI tools can pull it out of the noise - just run them on that single sub and see if the galaxy appears.

There is no feasible way to do this. Just think about what it means for your SNR to be below a certain threshold - say, for the sake of argument, we set the threshold SNR to 1, meaning the signal is no larger than the noise. If we have some sort of Gaussian noise, about 84% of the values will be below the noise level (the noise level is just the standard deviation: roughly 68% of values lie within +/- one standard deviation, so 68% + 32%/2 = 84% of them will be below +1 standard deviation) - but so will the signal.

Given a pixel value, how can you tell whether it's noise or signal, if the majority of both the signal values and the noise values behave the same - take the same values?
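(A quick numerical check of those percentages and of the overlap at SNR = 1 - a toy snippet with made-up numbers, nothing more.)

```python
import math
import numpy as np

# Fraction of a Gaussian below +1 standard deviation:
# ~68% lies within +/-1 sigma, plus the ~16% below -1 sigma -> ~84%.
below_plus_one_sigma = 0.5 * (1 + math.erf(1 / math.sqrt(2)))
print(f"below +1 sigma: {below_plus_one_sigma:.1%}")   # ~84.1%

# A pixel whose signal equals the noise sigma (SNR = 1) against pure noise:
rng = np.random.default_rng(4)
noise_only  = rng.normal(0.0, 1.0, 100_000)
with_signal = rng.normal(1.0, 1.0, 100_000)
overlap = np.mean(noise_only > np.median(with_signal))
print(f"noise pixels brighter than the median signal pixel: {overlap:.1%}")  # ~16%
```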

