
Using non-perfect subs in stack?



I've been on vacation with no access to processing (but still able to shoot with my remote setup), so I have been messing around with a single target and figuring out perfect focus. This has now led me to the point where I have 50 hours of HaLRGB on NGC6888, and I wonder what to do with it when I get home.

The questions are: should I use non-perfect subs while stacking, or will it only do harm? Can the stacking algorithms figure out how to cancel out the out-of-focus bits when applying weights? Will I get the reduced noise I think I will by adding them?

I have about 28 hours perfect (to my low standards) and 27 hours of not-perfect.

Example. First what I consider good:

[Image: Perfect_focus1.png]

[Image: Perfect_focus2.png]

And here's an example of bad:

[Image: Perfect_focus3.png]

[Image: Perfect_focus4.png]

As you can see, zoomed out the "bad" pictures look sort of ok, but the stars are obviously not good. Would you separate or stack together?


In an ideal world you could use all frames, but that would involve the following:

1. Detecting the suitable frames

2. Stacking the suitable frames to create a reference frame

3. Figuring out a blur kernel for each unsuitable frame, based on its star shapes and those of the reference frame

4. Deconvolving each unsuitable frame with its blur kernel, making it "suitable" (but also increasing its noise content)

5. Stacking all frames, weighted by the SNR of each (subs that were once unsuitable have been corrected and their SNR changed - hence they will have a lower weight but will still contribute to the final result)

The above is a "general" approach that can be used even if all your frames are "suitable" but have different FWHM. You can select a target FWHM: frames with a smaller FWHM can actually be blurred a bit to raise their FWHM (and SNR), and frames with a larger FWHM can be deconvolved to bring their FWHM down to the target.

Note that we live in the real world, and no one has yet implemented the above approach when stacking - but it is possible (and in an ideal world I would find the time to implement and test the approach and turn it into usable software :D )
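As a rough illustration of steps 4 and 5 above, here is a minimal NumPy sketch - my own, not existing software. It assumes the blur kernel for a frame has already been estimated, uses a simple Wiener-regularised frequency-domain deconvolution in place of a full deconvolution routine, and the SNR^2 weighting and the constant k are illustrative assumptions:

```python
import numpy as np

def pad_centred(kernel, shape):
    """Embed a small, centred kernel into a zero array of the frame's shape."""
    out = np.zeros(shape)
    kh, kw = kernel.shape
    oy, ox = (shape[0] - kh) // 2, (shape[1] - kw) // 2
    out[oy:oy + kh, ox:ox + kw] = kernel
    return out

def wiener_deconvolve(frame, kernel, k=1e-3):
    """Step 4: deconvolve a frame with its estimated blur kernel.

    Plain inverse filtering divides by the kernel's spectrum, which blows
    up wherever that spectrum is small; the constant k regularises the
    division (a simple Wiener-style filter; k is a hand-tuned assumption).
    """
    K = np.fft.fft2(np.fft.ifftshift(pad_centred(kernel, frame.shape)))
    F = np.fft.fft2(frame)
    return np.real(np.fft.ifft2(F * np.conj(K) / (np.abs(K) ** 2 + k)))

def snr_weighted_stack(frames, snrs):
    """Step 5: average frames with SNR-based weights, so corrected
    (noisier) frames still contribute, just with a lower weight."""
    w = np.asarray(snrs, dtype=float) ** 2  # weight ~ SNR^2 (an assumption)
    w /= w.sum()
    return np.tensordot(w, np.asarray(frames), axes=1)
```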

 


PixInsight has the SubframeSelector process, which objectively measures the lights; you can then tell it to throw out the worst X% or any subs with a FWHM above a certain threshold.

You have to be ruthless with dodgy subs.  It hurts to throw them away given our limited imaging conditions, but if you are not ruthless enough and keep too many poor ones, they will spoil your final picture.
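As a trivial illustration of that kind of cut outside PixInsight, assuming you have per-sub FWHM measurements exported to a list - the filenames and the 2.5" limit here are made up:

```python
# Hypothetical per-sub FWHM measurements (e.g. exported from a measuring
# tool); filenames and values are illustrative only.
subs = [("sub_001.fit", 2.1), ("sub_002.fit", 3.8), ("sub_003.fit", 2.4)]

fwhm_limit = 2.5  # arcsec; pick from the distribution of your own data
kept = [name for name, fwhm in subs if fwhm <= fwhm_limit]
print(f"keeping {len(kept)}/{len(subs)} subs:", kept)
```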


40 minutes ago, kirkster501 said:

PixInsight has the SubframeSelector process, which objectively measures the lights; you can then tell it to throw out the worst X% or any subs with a FWHM above a certain threshold.

You have to be ruthless with dodgy subs.  It hurts to throw them away given our limited imaging conditions, but if you are not ruthless enough and keep too many poor ones, they will spoil your final picture.

That's the answer I fear. I do this in PI already.

3 hours ago, vlaiv said:

no one has yet implemented the above approach when stacking

It seems to me the algorithms should be able to take FWHM and elongation into account, drastically reduce the weight in the direction of the elongation compared to the reference image, and still add "something" in the area matched by the reference.

This is actually so intuitive to me that I assumed it was what happens in the Winsorized algorithm?


1 minute ago, Datalord said:

It seems to me the algorithms should be able to take FWHM and elongation into account, drastically reduce the weight in the direction of the elongation compared to the reference image, and still add "something" in the area matched by the reference.

This is actually so intuitive to me that I assumed it was what happens in the Winsorized algorithm?

You can try, and if there are only a couple of distorted frames (those with distorted stars), in principle it should work. The algorithm will reject pixel values that are out of place, so most distorted stars will be rejected (central part too dim, outer parts too bright compared to the majority of frames). The background is the same in both, so it will still improve the noise of the background.

The problem, however, is the number of such frames - anything more than a few and you need to raise your clip threshold, and that will impact other regions as well (too much rejection will lead to poor results) - in the end you will get a worse result than by including only the suitable frames.

A better approach is to try to correct the distortion. It can be done - deconvolution is the way to do it. Deconvolution is used to reduce blur / sharpen an image; in that use one guesses the blur kernel (usually a Gaussian of a certain sigma), and it works well on high-SNR, Gaussian-blurred data. In the general case, the problem with deconvolution is finding the proper kernel (the way the star image was distorted), as the distortion is random in nature (it can be due to seeing, wind gusts, a cable snag, poor mount tracking / guiding, ...). Luckily, with this sort of imaging there are plenty of subs that are "decent", and one can use those to "extract" the blur kernel for any particular sub.

That involves matching stars from the sub we are trying to correct against the reference stack (it is better to do this against the reference stack rather than a single reference frame, because the stack has better SNR). Take each distorted star and deconvolve it with the matching star from the reference (use a small ROI around each star, not the whole frame). Each star yields a blur kernel, but a noisy one, because the frame we are trying to correct is noisy; so we stack/average the blur kernels and end up with something acceptable that we can use to deconvolve the poor sub.
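A minimal sketch of that kernel extraction in NumPy, under some assumptions: the star cutouts are already matched and co-centred, and a regularised frequency-domain division stands in for a proper per-star deconvolution. The function name and the constant k are my own illustrative choices:

```python
import numpy as np

def estimate_kernel(ref_stars, bad_stars, k=1e-3):
    """Estimate a poor sub's blur kernel from matched star cutouts.

    ref_stars / bad_stars: lists of small, co-centred ROIs (e.g. 32x32)
    around the same stars in the reference stack and in the poor sub.
    Each pair gives a noisy kernel estimate; averaging the estimates
    beats the noise down, as described above.
    """
    kernels = []
    for ref, bad in zip(ref_stars, bad_stars):
        R, B = np.fft.fft2(ref), np.fft.fft2(bad)
        # bad = ref (convolved with) kernel, so kernel ~ B / R;
        # the +k term regularises the division against noise.
        K = B * np.conj(R) / (np.abs(R) ** 2 + k)
        kernels.append(np.real(np.fft.fftshift(np.fft.ifft2(K))))
    kernel = np.mean(kernels, axis=0)
    return kernel / kernel.sum()  # normalise so total flux is preserved
```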


12 minutes ago, vlaiv said:

deconvolution is the way to do it.

Right, in PixInsight I do it by getting a PSF image from a selection of stars.

Here's what I will try:

1. Stack the good frames. Get a PSF from this stack.

2. Weight everything in SubframeSelector.

3. Stack all the weighted subs.

4. Deconvolve the big stack using the PSF from the good stack.

Does that make sense? 


6 hours ago, carastro said:

I wouldn't use the ones where the stars are trailing.  But the rest of them could be used, barring any with huge plane tramlines.

Carole 

You should be able to use these. 'Remove line' in AstroArt will do some of the work on the individual sub, and sigma clip will do the rest if you have enough subs in the stack. Always worth a try. The other way to do it is to stack the lot, and then stack only the ones without plane trails. Give both a basic stretch (saved as an action is the easiest way), paste the full stack with trails over the reduced stack without, and erase the trails. (This is where Photoshop wipes the floor with PixInsight. 👹:D)

Olly


2 minutes ago, ollypenrice said:

This is where Photoshop wipes the floor with PixInsight. 👹:D)

Now now, plane trails are simple. It's one parameter in the normalising script. 

Or are you suggesting this method for star trails as shown in my bad image? 


2 hours ago, Datalord said:

Right, in PixInsight I do it by getting a PSF image from a selection of stars.

Here's what I will try:

1. Stack the good frames. Get a PSF from this stack.

2. Weight everything in SubframeSelector.

3. Stack all the weighted subs.

4. Deconvolve the big stack using the PSF from the good stack.

Does that make sense? 

I would try the following, provided PI offers such functionality (I'm guessing here since I don't use PI, but going by what you said you can do, it should be doable).

1. Split the frames into two groups - use the frame selector or something like that and create two proper stacks, using weighted average and sigma rejection and all. One stack should have tight stars (let's call it the good stack) and the other should contain all the remaining frames (let's call that one the poor stack).

2. Extract a PSF from both the good stack (good PSF) and the poor stack (poor PSF). I'm guessing that the extracted PSF is itself an image - a rather small one, something like 32x32px - but still an image that you can process like any other?

3. Deconvolve the poor PSF with the good PSF. I'm guessing here that you can do RL deconvolution with an arbitrary PSF (being an image, small like the 32x32px mentioned above). Let the result be known as the blur kernel.

4. Deconvolve the poor stack with the blur kernel (again the same assumption: that you can deconvolve with an arbitrary image). Let the result be known as the deconvolved poor stack.

5. Combine the good stack with the deconvolved poor stack using a weighted average (with SNR-based weights for each).

Hopefully this will produce something resembling a decent result - do post it if you try it, I'm keen to see how it works out with the above approach (which is not optimal, but in theory, if you can do all the steps, it should work and produce a result).
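Outside PI, here is what steps 3-5 might look like as a NumPy sketch - my own illustration, with a regularised frequency-domain division standing in for RL deconvolution, and SNR^2 weights as an assumed weighting scheme:

```python
import numpy as np

def centred_pad(psf, shape):
    """Embed a small, centred PSF image into a zero array of a given shape."""
    out = np.zeros(shape)
    h, w = psf.shape
    oy, ox = (shape[0] - h) // 2, (shape[1] - w) // 2
    out[oy:oy + h, ox:ox + w] = psf
    return out

def div_deconvolve(image, psf, k=1e-3):
    """Recover X where image = psf convolved with X, via regularised
    frequency-domain division (standing in for RL deconvolution here).
    psf is a small image with its peak in the centre; k damps noise."""
    P = np.fft.fft2(np.fft.ifftshift(centred_pad(psf, image.shape)))
    I = np.fft.fft2(image)
    return np.real(np.fft.ifft2(I * np.conj(P) / (np.abs(P) ** 2 + k)))

def combine_stacks(good_stack, poor_stack, good_psf, poor_psf,
                   snr_good, snr_poor):
    # Step 3: poor PSF = good PSF convolved with the blur kernel, so
    # deconvolving one PSF with the other recovers that kernel
    # (it comes out centred, like the input PSFs).
    blur_kernel = div_deconvolve(poor_psf, good_psf)
    blur_kernel /= blur_kernel.sum()  # normalise to preserve flux
    # Step 4: sharpen the poor stack with the recovered kernel.
    deconvolved_poor = div_deconvolve(poor_stack, blur_kernel)
    # Step 5: SNR^2-weighted average of the two stacks (weights assumed).
    w_good, w_poor = snr_good ** 2, snr_poor ** 2
    return (w_good * good_stack + w_poor * deconvolved_poor) / (w_good + w_poor)
```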

 


9 hours ago, Datalord said:

The questions are: should I use non-perfect subs while stacking, or will it only do harm? Can the stacking algorithms figure out how to cancel out the out-of-focus bits when applying weights? Will I get the reduced noise I think I will by adding them?

Personally, I think it depends on what you are aiming for in the final image.

If you include subs which have trails and you use a sigma-clipping algorithm, you will find that your final signal-to-noise ratio is higher than if you had excluded them, but this approach effectively introduces a slight blur into the final image.  The more you include, the more blur, and the greater the chance that you will end up with non-round stars.  However, this may not be noticeable, particularly if you are taking an LRGB approach and the non-perfect subs are in the RGB data.

If you have lots of subs (> 15) of a particular channel, then I would suggest using the PixInsight stacking option of Winsorized sigma clipping. Provided that the number of non-perfect subs is relatively small in relation to the total number, I would suggest including the trailed subs, obtaining the result, and then comparing it to the result when you include only the perfect subs.

If you have the data, then I would encourage you to experiment. 
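For anyone curious what Winsorized clipping does per pixel, here is a simplified sketch - not PixInsight's exact implementation; the iteration count and sigma bounds are illustrative:

```python
import numpy as np

def winsorized_sigma_clip(stack, sigma_low=4.0, sigma_high=3.0, iters=3):
    """Per-pixel Winsorized clipping across a stack of registered subs.

    stack: array of shape (n_subs, height, width). Rather than discarding
    outliers outright, Winsorization clamps them to the clipping bounds
    before re-estimating the mean and sigma, which behaves better on
    small stacks than hard rejection.
    """
    data = stack.astype(float).copy()
    for _ in range(iters):
        mean = data.mean(axis=0)
        std = data.std(axis=0)
        low = mean - sigma_low * std
        high = mean + sigma_high * std
        data = np.clip(data, low, high)  # clamp outliers, don't drop them
    return data.mean(axis=0)
```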

Alan

 


Think: if you are imaging a nebula, say, and subsequently discover that you have star trails, the nebula will be trailing as well...  No amount of tinkering with deconvolution or sharpening is going to change that fact.  Past a certain point, the sub is useless, and you are defeating your purpose of creating a nice picture by trying to force dodgy subs into your final composite.


19 minutes ago, kirkster501 said:

Think: if you are imaging a nebula, say, and subsequently discover that you have star trails, the nebula will be trailing as well...  No amount of tinkering with deconvolution or sharpening is going to change that fact.  Past a certain point, the sub is useless, and you are defeating your purpose of creating a nice picture by trying to force dodgy subs into your final composite.

Absolutely.

Was the bad stuff captured while you were fighting with the mount or are you still working on that?

Olly


49 minutes ago, kirkster501 said:

Think: if you are imaging a nebula, say, and subsequently discover that you have star trails, the nebula will be trailing as well...  No amount of tinkering with deconvolution or sharpening is going to change that fact.  Past a certain point, the sub is useless, and you are defeating your purpose of creating a nice picture by trying to force dodgy subs into your final composite.

Well, I agree with two parts of your statement: :)

1.  "If you are imaging a nebula, say, and subsequently discover that you have star trails, the nebula will be trailing as well".....which translates into my comment above that this will introduce a slight blur. 

2.   "Past a certain point, the sub is useless" - absolutely - subs which have very large star trails or very poor signal to noise I'd definitely exclude.

However...

What Vlaiv is suggesting is performing a deconvolution on the suspect subs to try to minimize this blur. In theory this should work; in PixInsight (for example) you can use a deconvolution option which allows you to set the angle of the blur and (hopefully) improve the result.  I personally tried this but found it quite difficult to get good results.

What I am suggesting is that it depends on what you want to achieve and on the type of subs in which the blur is occurring. Also remember that if you are constructing an LRGB image, it is quite common post-processing practice to blur the RGB data before combining it with the Lum, in order to reduce colour noise. This is because most of the detail comes from the Lum rather than the RGB layer.  So here you are deliberately introducing a blur in order to produce a better-looking image. So, if you have the data, all I am saying is that it only takes a few minutes of experimentation before you make the decision to throw away all the trailed subs.
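That RGB softening step is a one-liner in most tools; as a generic sketch (SciPy-based, with an illustrative sigma and an assumed channel layout):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def soften_rgb(rgb, sigma=1.5):
    """Blur the RGB channels before LRGB combination; leave Lum untouched.

    rgb: array of shape (3, height, width). The Lum layer supplies the
    detail, so a mild Gaussian on RGB trades colour detail you can't see
    for lower colour noise. sigma here is an illustrative value only.
    """
    return np.stack([gaussian_filter(channel, sigma) for channel in rgb])
```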

Alan


1 hour ago, kirkster501 said:

Think: if you are imaging a nebula, say, and subsequently discover that you have star trails, the nebula will be trailing as well...  No amount of tinkering with deconvolution or sharpening is going to change that fact.  Past a certain point, the sub is useless, and you are defeating your purpose of creating a nice picture by trying to force dodgy subs into your final composite.

I agree that no amount of deconvolution is going to change the fact that the original sub is blurred all over the place :D

But I don't agree that proper deconvolution is not going to remove the blur; after all, that is what deconvolution is - the inverse operation of convolution. In mathematical terms, given an original function and a "blur" function, when you convolve the two (blurring is convolution of two functions), you can take the result and deconvolve it with the "blur" function and you will get the original function back - not an approximation but the true original function.

There is a well known relationship between the operation of convolution and the Fourier transforms of the two functions:

The convolution of A and B equals the inverse Fourier transform of the product of the Fourier transforms of A and B - in other words, convolution in the spatial domain is multiplication in the frequency domain. It is therefore easy to see how to invert the operation, because a product has a simple inverse: division.

In fact the basic deconvolution algorithm, known as inverse filtering, does just that: take the Fourier transform of both functions, divide the two, and take the inverse Fourier transform. This works well for noise-free images that are band-limited and properly sampled.

When we work with image data there are a couple of constraints. We work with sampled (incomplete) functions, there is no guarantee that either function is properly sampled (band-limited / Nyquist), and there is added noise. For that reason we can't do a complete/proper deconvolution, since we lack information on both the original function and the blur function, but we can make a very good estimate of the original function given a good estimate of the blur function (with a bit of math magic - statistics and such).
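A toy NumPy demonstration of the noise-free case described above - the kernel is deliberately chosen so its spectrum has no zeros, which is what makes the exact inversion possible here:

```python
import numpy as np

# Convolution in the spatial domain = multiplication in the frequency
# domain; on noise-free data the blur can then be inverted exactly.
rng = np.random.default_rng(0)
original = rng.random((64, 64))

# A small blur kernel embedded in a full-size array. Its FFT is
# (2 + e^{-i*theta})/3 per axis, which is never zero, so division is safe.
kernel = np.zeros((64, 64))
kernel[:2, :2] = np.array([[4.0, 2.0], [2.0, 1.0]]) / 9.0

O, K = np.fft.fft2(original), np.fft.fft2(kernel)
blurred = np.real(np.fft.ifft2(O * K))                       # convolve via FFT

restored = np.real(np.fft.ifft2(np.fft.fft2(blurred) / K))   # inverse filter
print(np.allclose(restored, original))                       # True: exact recovery
```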


I have to confess to using slightly trailed star data on occasion, when I don't have a hope of getting any more data in the foreseeable future, or when the target is really difficult for me to acquire - needing a dark site in winter, or even being acquired abroad.

In this case I just remove the stars from the trailed version before combining.  But this will only work on slightly trailed stars because, as stated above, the nebulosity could be affected as well.

If I had copious access to the target then of course I would not do this. 

Carole 

 


3 hours ago, ollypenrice said:

Was the bad stuff captured while you were fighting with the mount or are you still working on that?

No, the mount is working like a charm now that I just use normal guiding and the occasional MLTP sent to the mount. 

[Image: IMG_20190717_132417.jpg]

3 hours ago, kirkster501 said:

Past a certain point, the sub is useless

Agree, but... determining that point is exactly what this post is about. The above bad sub looks quite fine in the nebula in the zoomed-out version, which is why I'm even contemplating including this data.

 

2 hours ago, carastro said:

But this will only work on slightly trailed stars

Do you consider the above image "slight" or worse? 


I'd give the above data a go, as it's not that bad, but as I said, I'd remove the stars from the "bad" subs after registering, of course.  Then, for combining the two, you will have to experiment depending on what software you use.

27 hours' worth of data is too much to chuck away.

If you want to upload the saved TIFFs I'll give it a go for you.

Carole 


7 minutes ago, Datalord said:

No, the mount is working like a charm now that I just use normal guiding and the occasional MLTP sent to the mount. 

[Image: IMG_20190717_132417.jpg]

Agree, but... determining that point is exactly what this post is about. The above bad sub looks quite fine in the nebula in the zoomed-out version, which is why I'm even contemplating including this data.

 

Do you consider the above image "slight" or worse? 

Personally I wouldn't use the sub you described as 'bad' in the first post.

Regarding guiding, everything I've seen as a robotic host has told me that not guiding is more hassle than guiding. Seven years on, my first Mesu, guided, has still to drop a sub.

Olly


Quote

Personally I wouldn't use the sub you described as 'bad' in the first post.

Quote

I have about 28 hours perfect (to my low standards) and 27 hours of not-perfect.

Olly, it's not just one sub but 27 hours' worth of data.

Carole 

