
Imaging small galaxies with focal length under 800mm


TheDadure


@vlaiv sorry I was not clear enough. However, on your point about noise adding apparent clarity: some character recognition software deliberately adds noise to aid the recognition process. I certainly agree we impose structure on images. Any sharp edge looks like clarity, even if it is due to noise.

The point I was trying to make was: does software render images in different ways as you zoom in? If so, it could explain some of the differing perceptions of these images.

Regards Andrew


Ok, I went back to my M106 project. I took a look at my lum files, which were originally bin1. I took all files from after star alignment and resampled them to bin4 with the average setting. Then I processed the image with parameters as close as possible to the original run (they differ slightly because of the scale). Finally, I resampled the binned image back up to full scale.
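The bin-then-resample steps above can be sketched roughly like this. This is a minimal numpy/scipy stand-in, not the PixInsight tools I actually used; the synthetic image and function names are purely illustrative:

```python
# Sketch of the software-binning experiment: average-bin a bin1 image
# down 4x, then resample it back to full scale. Synthetic data only.
import numpy as np
from scipy.ndimage import zoom

def bin_average(img, factor=4):
    """Average-bin a 2D image by an integer factor (trims edge remainder)."""
    h, w = img.shape
    h, w = h - h % factor, w - w % factor
    view = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return view.mean(axis=(1, 3))

def upsample(img, factor=4):
    """Resample a binned image back to full scale (bilinear interpolation)."""
    return zoom(img, factor, order=1)

rng = np.random.default_rng(0)
full = rng.normal(100.0, 5.0, size=(512, 512))   # stand-in for a bin1 lum sub
binned = bin_average(full, 4)                    # -> (128, 128)
restored = upsample(binned, 4)                   # -> (512, 512)
print(binned.shape, restored.shape)
```

Average binning preserves the mean signal while cutting per-pixel noise, which is why the binned intermediate is cleaner; the upsample at the end only restores scale, not the discarded high-frequency detail.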

Here are the results.

[image: L_bin4_resampled.png]

These have both been through Topaz DeNoise.

[image: L_bin4_resampled_denoised.png]

There are differences in the end results, even with being careful to reproduce my steps. The lower resolution loses information that is used in the intermediate steps, and that loss adds up over the many processing steps. The noise is particularly troublesome in the dark areas for some reason, but that could probably be processed away. The bright areas are very similar, and I'm curious enough to want to try an imaging session with bin4 on the camera. The trouble is that I don't have bias and dark frames for it and can't just take them.

Anyone want to be a judge here?


11 minutes ago, vlaiv said:

I'm having trouble distinguishing which one is which.

[image 1]

[image 2]

Which one is binned then up sampled and which one is original?

That's intentional. Which one do you think? 


I have no idea - not much difference to my eye between these two images. Maybe just a bit of processing - not an equal stretch. The bottom one has some areas darker and hence a bit more contrast.

I wanted to recommend one approach that will take processing out of the equation - it's a bit more work for you, but I wonder what the results will be.

Take the binned/upsampled image and align it to the original image - while both are still linear.

Select half of the binned/upsampled image and simply paste it over the original image - making a "split screen" of the linear images. Then just process that. That way we can be certain that any stretching, denoising and sharpening is applied exactly the same to both images.
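The split-screen composite is simple to do programmatically; here is a minimal sketch in numpy, with toy constant arrays standing in for the two aligned linear images:

```python
# Build a "split screen" from two aligned linear images: left half from
# the binned/upsampled version, right half from the original. The arrays
# here are toy stand-ins for the real frames.
import numpy as np

def split_screen(original, resampled):
    """Left half from the binned/upsampled version, right half original."""
    assert original.shape == resampled.shape, "align and match scales first"
    out = original.copy()
    half = original.shape[1] // 2
    out[:, :half] = resampled[:, :half]
    return out

a = np.full((4, 6), 1.0)   # stand-in for the original linear image
b = np.full((4, 6), 2.0)   # stand-in for the aligned binned/upsampled image
combo = split_screen(a, b)
print(combo[0])
```

Processing the composite once guarantees that stretch, denoise and sharpening parameters are identical on both halves, so any remaining difference across the seam comes from the data, not the workflow.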


8 minutes ago, vlaiv said:

Then just process that. That way we can be certain that any stretching and denoising and sharpening will be applied exactly the same to both images.

Well, that's not what I would do on a real image, so I don't think it's the path forward. To make this experiment work, for me at least, I want the processing to happen before upsampling, just as it would in real life. It makes no sense to me to do it the other way around.

To answer the question, the left one was the original bin1 process, the right is the bin4 process.

What I will try to do, when I get the chance, is to collect bin4 images off my rig and process them: first with the regular processing, then with drizzle. Then I can compare bin4+upsample to bin4+drizzle.


3 minutes ago, andrew s said:

@Datalord do you think one is significantly better than the other and if so how?

Regards Andrew 

 

1 hour ago, Datalord said:

There are differences in the end results. Even with being careful to reproduce my steps. The lower resolution does lose information that is used in the intermediate steps, and that loss adds up over the many processing steps. The noise is particularly troublesome in the dark areas for some reason, but that could probably be further processed away. The bright areas are very similar and I'm curious enough to want to try an imaging session with bin4 on the camera. Trouble being that I don't have bias and dark frames for it and I can't just take them.

Pretty much this.


@Datalord I had read what you posted before, but that was not what I was asking. If you saved the two crops @vlaiv posted and looked at them in the future, having had time to forget them, do you think you could pick which was which in an unbiased test?

If so, what about the final images would enable you to do this?

Regards Andrew 


1 hour ago, Datalord said:

Well, that's not what I would do on a real image, so I don't think it is a path forward. To make this experiment work, for me at least, I want the processing to happen before upsampling, just as I would in real life. It makes no sense to me to do it the other way around.

To answer the question, the left one was the original bin1 process, the right is the bin4 process.

What I will try to do when I get the chance, is to collect bin4 images off my rig and process them. First with the regular processing, then with a drizzle. Then I can compare bin4+upsample to bin4+drizzle.

I think we again got derailed from what we are trying to accomplish here, and I would like to point out a few things:

1. The proposal for the split-image processing approach was to show how much difference, if any, there is in detail - in support of the theoretical approach above, and because you mentioned that you actually see a difference. It would give us the chance to inspect that difference in detail.

2. I'm not trying to push a certain approach on you. If you feel comfortable doing it like you have so far, and are happy, just continue to do so. I would personally bin the image, process it like that, and leave it at that resolution. I would not upsample it back, as I see no point in doing so.

3. In my view drizzling is not going to produce anything sensible in this case (but that is just my view).


1 hour ago, andrew s said:

do you think you could pick which was which in an unbiased test?

In this particular case, yes, I think I could. The main difference for this image is the noise in the black. Whether that is an artifact of me rushing the processing a bit, or a difference in how binning and subsequent upscaling treat the low-signal parts of the image, I can't say from this one.

[image]

I also want to do this as a real test with true bin4 captured images, where the full implications of well depth and lower exposure time are in play. This becomes a purely academic exercise if I can't save imaging time.

36 minutes ago, vlaiv said:

I think that again we got derailed in what we are trying to accomplish here, and I would like to point out few things:

1. Proposal for split image processing approach was to show how much difference if any there is in detail - in support of above theoretical approach and because you mentioned that you actually see the difference - that would give us the chance to inspect that difference in detail

2. I'm not trying to push a certain approach on you. If you feel comfortable doing it like you have done so far and are happy just continue to do so. I would personally bin the image and process it like that and would leave it at said resolution. I would not upsample it back, as I think there is no point in doing so.

3. In my view drizzling is not going to produce anything sensible in this case (but that is just my view).

1. For the sake of this specific test, I see a difference, but it is so small that I will consider bin4 on par with bin1. I need the real-world bin4 test to come to a proper conclusion about how it will influence a real image.

2. Well, there is quite a lot of reason for that if you want to print it or put it on a 4K monitor. If I do it myself, I can at least control the upscaling and not leave it to a random driver that does whatever it wants.

3. I'll let that be up to a test as well. If I can shoot in bin4 with the same detail captured, but get maybe 10 times more frames, I'll let another experiment dictate. I remember another thread where I compared drizzle to non-drizzle and concluded that the black parts were where the biggest benefit was. But I must experiment.



These images have stars with low SNR (especially the Ha one) and a lot of hot pixels which are being picked up as stars by the PixInsight script, very significantly lowering the reported FWHM. Manually measuring each of the stars with DynamicPSF gives an average of about 5 pixels (2.6 arcsec) FWHM for Ha and 3.2 pixels (3.3 arcsec) for OIII, so I'm in agreement that by most measures you're considerably oversampling here.

From my understanding, the optimal sampling rate for DSO imaging (i.e. to record maximum detail without oversampling) is a continued source of debate on this and other astro forums; Vlaiv has made a detailed argument for a relatively low rate of about 1.6x smaller than the real-world FWHM of your system, whereas folks on Cloudy Nights generally seem to go in the other direction with rates 2-3.5x smaller than seeing, with their own technical rationalisation (e.g. see https://www.cloudynights.com/topic/650493-understanding-criteria-for-what-is-proper-sampling-of-imaging-system/?p=9150558 ).

Personally, I guess I'm of the opinion that for long-FL DSO imaging it's better to slightly oversample than undersample (which can easily result from increasing the binning level), as you're never going to fully recover lost resolution from undersampled subs (even with drizzle, which is also a controversial topic here!) but you can (eventually) get the SNR up from oversampled subs by taking more of them.

Paul
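The "take more subs" part of that argument is easy to check numerically: stacking N subs should improve SNR by roughly sqrt(N). A toy simulation with synthetic numbers (not real data from this thread):

```python
# Stacking N noisy subs by averaging: the signal stays put while the
# noise standard deviation drops by ~sqrt(N), so SNR rises by ~sqrt(N).
import numpy as np

rng = np.random.default_rng(1)
signal, sigma, n_subs = 50.0, 20.0, 16

subs = signal + rng.normal(0.0, sigma, size=(n_subs, 10000))
single_snr = signal / subs[0].std()       # SNR of one sub
stack = subs.mean(axis=0)                 # average-stack of 16 subs
stack_snr = signal / stack.std()          # ~sqrt(16) = 4x improvement
print(round(single_snr, 2), round(stack_snr, 2))
```

So an oversampled dataset can in principle buy back its SNR deficit with integration time, whereas undersampled data cannot buy back lost spatial resolution the same way.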


I'm too tired now but will revisit this over the weekend. I think it needs concentrated attention. :)

This subject looks like a very interesting read.  Posting to keep the link ;) 


6 minutes ago, Ikonnikov said:

These images have stars with low snr (especially Ha one) and a lot of hot pixels which are being picked up as stars by the PixInsight script and very significantly lowering the reported FWHM. Manually measuring each of the stars with dynamic PSF gives an average of about 5pixels (2.6arcsec) FWHM for Ha and 3.2 pixels (3.3 arcsec) for OIII, so I'm  in agreement that by most measures you're considerably oversampling here.

From my understanding, optimal sampling rate for DSO imaging (i.e. to record maximum detail without oversampling) is a continued source of debate on this and other astro forums; Vlaiv has made a detailed argument for a relatively low rate of about 1.6x smaller than the real-world FWHM of your system whereas folks on Cloudy Nights generally seem to go in the other direction with rates 2-3.5 x smaller than seeing with their own technical rationalisation (e.g. see  https://www.cloudynights.com/topic/650493-understanding-criteria-for-what-is-proper-sampling-of-imaging-system/?p=9150558 ).

Personally I guess I'm of the opinion that for long FL DSO imaging it's better to slightly oversample than to undersample (which can easily result from increasing binning level) as you're never going to recover lost resolution fully from the undersampled subs (even with drizzle which also is a controversial topic here!) but you can (eventually) get the snr up from oversampled subs by taking more of them.

Paul

Very nice!

I was rather surprised to find out that there is a simple mathematical solution for a diffraction-limited system. I did simulations and came up with a very similar figure. I'm talking about the planetary critical sampling rate, or this part in the post you linked to:

Quote

8) The diameter of the Airy disk in the focal plane is 2.44*lambda*(F/#). So to achieve Nyquist optimum sampling relative to the Airy disk, you need 4.88 samples across the Airy diameter. Sampling at a higher rate than this cannot provide any additional information--you've reached the limit imposed by the optical system. Sampling at more than this rate is called "over-sampling." Sampling at less than this rate sacrifices high spatial frequency information and is called "under-sampling." Remember that sampling rate trades spatial information (up to the Nyquist limit) for SNR. Simply put: sampling at a higher rate with smaller pixels might provide higher spatial information but it always does so at the expense of SNR.

Results of my simulation gave 4.8 samples per Airy diameter - or 2.4 per Airy radius. Here is the thread I made about that some time ago:
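For reference, the arithmetic behind that figure is a one-liner. The wavelength and focal ratio below are example values (green light at f/10), not any particular scope in this thread:

```python
# Airy disk diameter in the focal plane: 2.44 * lambda * F#.
# Nyquist sampling wants ~4.88 samples across it, i.e. a pixel
# of lambda * F# / 2.
wavelength_um = 0.55      # ~green light, in micrometres
f_ratio = 10.0            # example focal ratio

airy_diameter_um = 2.44 * wavelength_um * f_ratio   # 13.42 um
critical_pixel_um = airy_diameter_um / 4.88         # = lambda * F# / 2
print(round(airy_diameter_um, 2), round(critical_pixel_um, 2))  # 13.42 2.75
```

Note the critical pixel size depends only on the focal ratio and wavelength, not on aperture - which is why the 4.88-samples figure is quoted as a universal diffraction limit.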

However, there is a problem with this statement:

Quote

Seeing will obviously affect the information content as well.  I had an opportunity to have dinner with Joe Goodman last week and we discussed this issue.  (If you don't know, Joe Goodman is a professor emeritus at Stanford who wrote one of THE books on Fourier optics.)  I proposed a fairly simply way to look at the atmospheric transfer function by realizing that the MTF is simply the inverse transform of the PSF.  Therefore all you have to do is to simply model the form of a long exposure star image using either a Moffat or Gaussian function, inverse transform it, and multiply by the diffraction limited MTF to clamp the result to the limits imposed by the perfect circular aperture.  Joe indicated that this idea turns out to be a common approach for systems with large amounts of optical aberration, which is exactly what the atmosphere generates.  This is a method that can be used to show practical sampling limits across the long exposure seeing blur function.  I'm still working on it but I believe that it should be straightforward to show that the sampling limit in that case will fall somewhere in the range of 2-3.5 samples across the blur diameter; though it may vary slightly with the conditions.

I don't really understand what the blur diameter is supposed to be. If you follow the above thread - and I think I've written about it elsewhere - I use the same approach to reach the FWHM/1.6 figure.

The only difference is that I don't clamp with the MTF - as it is significantly "smaller" compared to the seeing blur. Most telescopes have resolving power far greater than the atmosphere allows in long exposures - thus it is not necessary to take that resolution into account, as we will stop much sooner.
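A quick numeric sketch of how a FWHM/1.6 figure can fall out of that approach: model the long-exposure star profile as a Gaussian, take its (also Gaussian) MTF, pick the spatial frequency where contrast has dropped to ~10% (the 10% threshold is my assumption for "effectively noise-dominated"), and apply Nyquist:

```python
# Gaussian PSF with given FWHM -> its MTF is exp(-2*pi^2*sigma^2*f^2).
# Solve MTF(f) = 0.1 for the cutoff frequency, then take the Nyquist
# pixel size of 1/(2*f_cut) and compare it to the FWHM.
import numpy as np

fwhm = 2.0                                           # arcsec, example seeing
sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))    # ~fwhm / 2.355

f_cut = np.sqrt(-np.log(0.1) / (2.0 * np.pi**2 * sigma**2))
pixel = 1.0 / (2.0 * f_cut)                          # two samples per cycle
print(round(fwhm / pixel, 2))                        # ~1.6 samples per FWHM
```

With the 10% contrast cutoff the answer comes out at about 1.6 samples per FWHM regardless of the FWHM value, since everything scales linearly with the blur width.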

