

IMX571 on 8" EdgeHD



I'm trying to wrap my head around this oversampling thing. I really want to get an ASI2600MC Pro for that lovely large FOV and pair it with my 61EDPH. I'm undersampled here but I kind of get that - my images are a bit soft. At the same time I'd love to get an 8" EdgeHD for galaxy season to really start to get in closer.

I believe this is going to end up oversampled at 0.55"/px, but what's the deal here? Guiding on my EQ6-R is rarely above that even on the worst nights, so tracking errors shouldn't be an issue. Seeing? What about my sub frames - what will they look like? What's inherently bad about oversampling?


On 01/09/2023 at 22:08, OK Apricot said:

I'm trying to wrap my head around this oversampling thing. I really want to get an ASI2600MC Pro for that lovely large FOV and pair it with my 61EDPH. I'm undersampled here but I kind of get that - my images are a bit soft. At the same time I'd love to get an 8" EdgeHD for galaxy season to really start to get in closer.

I believe this is going to end up oversampled at 0.55"/px, but what's the deal here? Guiding on my EQ6-R is rarely above that even on the worst nights, so tracking errors shouldn't be an issue. Seeing? What about my sub frames - what will they look like? What's inherently bad about oversampling?

Oversampling means small pixels and, depending on focal length, a small plate scale.

The only issue with oversampling is that the light is spread over many pixels, which lowers the signal per pixel. This translates into more imaging time for the same desired SNR compared to optimal sampling - you are unnecessarily increasing exposure time.

As an example for my setup - a target with a surface brightness of 23 mag/arcsec² (M33), a camera with 6 µm pixels, a plate scale of 0.93"/px, and a desired SNR of 40:

  • total exposure time: 22 hours for Luminance, 97 hours B, 86 hours V, 80 hours R

Same setup, but with a sensor with 3.76 µm pixels and a plate scale of 0.58"/px:

  • total exposure time: 36 hours for Luminance, 134 hours B, 133 hours V, 120 hours R

So you see, simply changing the pixel size has a considerable impact on exposure time.
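The scaling behind this is easy to sketch: the flux a pixel collects goes with the sky area it covers, i.e. with the square of the plate scale, so in the background-limited regime the time needed to reach a fixed per-pixel SNR goes roughly with the inverse square. A minimal sketch of that relation (the hours quoted above come from a full exposure calculator that also accounts for read noise, QE and filters, so this rough model won't reproduce them exactly):

```python
def scale_ratio_squared(coarse_scale, fine_scale):
    """Flux per pixel scales with the sky area a pixel covers, (arcsec/px)^2.
    In the background-limited regime, the exposure time needed for a fixed
    per-pixel SNR scales with the inverse of the flux per pixel."""
    return (coarse_scale / fine_scale) ** 2

r = scale_ratio_squared(0.93, 0.58)  # the two plate scales from the example
print(f"{r:.2f}x more signal per pixel at 0.93\"/px")
print(f"up to {r:.2f}x longer exposure at 0.58\"/px for the same per-pixel SNR")
```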

Edited by dan_adi
  • Thanks 1

It should be added that you can always adjust your pixel scale by binning. With my RC8 I image at the native focal length but bin 2x or 3x to get about 1"/px or 1.5"/px. So oversampling can be rectified with the current small-pixel CMOS cameras.
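For what it's worth, software binning after capture is just a block sum (or average) over the raw frame. A minimal numpy sketch, assuming a mono frame (frames whose dimensions don't divide evenly are cropped):

```python
import numpy as np

def software_bin(img, n=2):
    """Sum n x n blocks of pixels; averaging instead differs only by a
    constant factor and gives the same SNR."""
    h, w = (s - s % n for s in img.shape)  # crop to a multiple of n
    return img[:h, :w].reshape(h // n, n, w // n, n).sum(axis=(1, 3))

# Example: a frame at 0.48"/px becomes ~0.95"/px after 2x2 binning
frame = np.random.poisson(100.0, size=(4176, 6248)).astype(np.float64)
print(software_bin(frame, 2).shape)  # (2088, 3124)
```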

  • Like 1

On 01/09/2023 at 21:08, OK Apricot said:

I believe this is going to end up oversampled at 0.55"/px, but what's the deal here? Guiding on my EQ6-R is rarely above that even on the worst nights, so tracking errors shouldn't be an issue. Seeing? What about my sub frames - what will they look like? What's inherently bad about oversampling?

There's nothing wrong with oversampling (within reason), as long as the read noise from the sensor is well below the photon noise from the sky background.

There is no benefit in binning with CMOS cameras, except that it demands fewer computer resources (storage and CPU), but those are cheap these days. If you want to adapt to the seeing conditions, rescaling the final processed image always gives better results.

Regarding the ideal sampling, you should first evaluate your seeing conditions - or, more accurately, what the combination of your seeing conditions and tracking accuracy allows.

My current setup (200/750 Newtonian with an ASI183MM) gives me 0.67"/px, which is a perfect match for my seeing conditions. Indeed, the average of the FWHM values that I got on my luminance stacks this year (24 imaging sessions) is 2", i.e. three times the sampling (ranging from 1.46" to 2.6"). On nights with good seeing my best subs are around 1.3", so I could sometimes benefit from a tighter sampling like 0.55".

My previous camera was an ASI1600MM, which gave me 1"/px. It gave clearly inferior results, resolution-wise.

Instead of an 8" EdgeHD, you could consider a 200/1000 Newtonian with a Paracorr (effective focal length 1150 mm), which gives an ideal sampling for 2" seeing. In those conditions it would give a larger FOV than the EdgeHD without sacrificing resolution.
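The numbers in this post follow from the standard plate-scale relation, scale ["/px] = 206.265 x pixel size [µm] / focal length [mm]. A small sketch (the FWHM-to-sampling factor is exactly the point debated later in this thread, so it is left as a parameter):

```python
def plate_scale(pixel_um, focal_mm):
    """Arcseconds per pixel for a given pixel size and focal length."""
    return 206.265 * pixel_um / focal_mm

def suggested_scale(fwhm_arcsec, factor=3.0):
    """Plate scale implied by a seeing FWHM and a chosen sampling factor
    (3 per this post; Nyquist-style arguments elsewhere use smaller values)."""
    return fwhm_arcsec / factor

print(plate_scale(2.4, 750))    # ASI183 on the 200/750: ~0.66"/px
print(plate_scale(3.76, 1150))  # IMX571 on the Newt + Paracorr: ~0.67"/px
print(suggested_scale(2.0, 3))  # 2" seeing at 3 px per FWHM: ~0.67"/px
```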

Edited by Dan_Paris

6 minutes ago, Clarkey said:

Is this not binning? It might not be on every frame or sub, but it has the same effect.

Rescaling is not the same as binning, and in principle it is worse.

It does not produce the same SNR improvement, and it introduces pixel-to-pixel correlation.
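Both points can be checked numerically. A sketch, using Pillow's Lanczos filter as a stand-in for generic rescaling: after 2x2 binning of pure noise, adjacent output pixels come from disjoint input blocks and stay uncorrelated, while a Lanczos downscale mixes overlapping input neighbourhoods into adjacent outputs:

```python
import numpy as np
from PIL import Image

rng = np.random.default_rng(1)
noise = rng.normal(0.0, 1.0, size=(1024, 1024)).astype(np.float32)

# (a) 2x2 average binning: each output pixel uses a disjoint input block
binned = noise.reshape(512, 2, 512, 2).mean(axis=(1, 3))

# (b) Lanczos rescale to the same size: adjacent outputs share input pixels
rescaled = np.asarray(
    Image.fromarray(noise, mode="F").resize((512, 512), Image.Resampling.LANCZOS)
)

def neighbour_corr(a):
    """Correlation coefficient between horizontally adjacent pixels."""
    return np.corrcoef(a[:, :-1].ravel(), a[:, 1:].ravel())[0, 1]

print("binned  :", binned.std(), neighbour_corr(binned))     # corr ~ 0
print("rescaled:", rescaled.std(), neighbour_corr(rescaled)) # corr clearly non-zero
```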

However, @Dan_Paris claims that there is no benefit to binning CMOS data and that optimum sampling is a third of the FWHM, so I'm not sure there is much point going into a discussion of all that, as I've encountered similar claims regardless of all the evidence and resources presented.

 

  • Like 1

35 minutes ago, vlaiv said:

Rescaling is not the same as binning, and in principle it is worse.

It does not produce the same SNR improvement, and it introduces pixel-to-pixel correlation.

Ok. I will accept your much better understanding of the subject.

At least I can still see a benefit in binning😄


5 minutes ago, Clarkey said:

Ok. I will accept your much better understanding of the subject.

At least I can still see a benefit in binning😄

Don't trust everything that you read, and try for yourself 😉

42 minutes ago, vlaiv said:

@Dan_Paris claims that there is no benefit to binning CMOS data and that optimum sampling is a third of the FWHM, so I'm not sure there is much point going into a discussion of all that, as I've encountered similar claims regardless of all the evidence and resources presented.

I don't make unsubstantiated claims, as you seem to suggest, but share my experience from several years of galaxy imaging. The resolution increase when I swapped from a camera at 1"/px to one at 0.66"/px was just plain obvious. And none of the serious imagers that I know personally shares your point of view.

Btw, I don't think that the link to a grossly overprocessed image is relevant to the debate 😉

 

 

Edited by Dan_Paris

3 minutes ago, Dan_Paris said:

Don't trust everything that you read, and try for yourself

Everything I say on this matter is backed by relevant sources and by examples.

Here is one - you don't think that software binning works to improve SNR, right?

[image: synthetic test frame used for the SNR measurement]

Here is a test image - it has a measured signal of ~1.01675 and noise of ~1.0171, so the SNR is

1.01675 / 1.0171 = ~1

Let's bin that data x2 and x3 - according to the mathematical theory, the SNR improvement will be x2 and x3, regardless of the fact that we are binning in software after acquisition.

[image: signal and noise measurements of the same data binned x2 and x3]

Interestingly enough, we now get 1.0144 / 0.504 = ~2 and 1.1044 / 0.3484 = ~3.17

Pretty much as the math predicted.
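This experiment is easy to reproduce on synthetic data. A numpy sketch (averaging rather than summing the blocks, which scales signal and noise by the same constant and leaves the SNR ratios unchanged):

```python
import numpy as np

rng = np.random.default_rng(0)

def bin_nxn(img, n):
    """Average n x n pixel blocks; for independent per-pixel noise this
    improves the per-pixel SNR by a factor of n."""
    h, w = (s - s % n for s in img.shape)  # crop to a multiple of n
    return img[:h, :w].reshape(h // n, n, w // n, n).mean(axis=(1, 3))

# Signal 1.0, noise sigma 1.0 -> SNR ~ 1, like the test image above
frame = rng.normal(loc=1.0, scale=1.0, size=(1024, 1024))

for n in (1, 2, 3):
    b = bin_nxn(frame, n)
    print(n, b.mean() / b.std())  # ~1, ~2, ~3
```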

We don't have to do this on synthetic data - we can use real data. Here is an example:

[image: part of M51, a single unbinned sub]

This is part of M51, a single sub. Look what happens when I bin it 8x8:

[image: the same sub binned 8x8]

That is a single sub! Suddenly I can show the tidal tails around the galaxy, while in the unbinned sub I was struggling to show the spiral arms.


16 minutes ago, Dan_Paris said:

Don't trust everything that you read, and try for yourself 😉

Believe me, you will not find a bigger cynic. However, I have tried processing binned and unbinned data, and I have seen a benefit in getting something nearer an optimal sampling rate. Admittedly the seeing here is normally far from good, so I probably have a fair bit of room for maneuver.


So here are some comparisons, using a set of 120 luminance frames (2" seeing, Bortle 7).

For the left panel, the subs were binned down 2x before registration (to 1.32"/px), while for the right panel the subs were kept at bin1 (0.66"/px). I did exactly the same basic processing on both: BlurX, GHS, a bit of HDR, no noise reduction. The bin1 image is obviously more detailed, as one might have anticipated, while the difference in SNR is not obvious.

 

[image: stretched comparison - bin2 stack (left) vs bin1 stack (right)]

 

To quantify the noise levels, the relevant comparison is between the stack of bin2 subs (left) and the stack of bin1 subs subsequently scaled down 2x (right), in the linear state, with no processing at all:

[image: linear, unprocessed stacks - bin2 (left) vs bin1 scaled down 2x (right)]

According to measurements (StdDev on the background), the right image has 7% more noise than the left image (but is smoother, in particular regarding star shapes).
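For reference, a measurement of this kind can be done on a star-free patch of the linear stacks. A sketch using astropy (the file names and crop coordinates below are placeholders, not the actual ones used here):

```python
import numpy as np
from astropy.io import fits

def background_stddev(path, y0, y1, x0, x1):
    """StdDev of a (hopefully star-free) background crop of a linear stack."""
    data = fits.getdata(path).astype(np.float64)
    return np.std(data[y0:y1, x0:x1])

# Hypothetical file names; pick a crop free of stars and nebulosity
noise_bin2 = background_stddev("stack_bin2.fits", 100, 300, 100, 300)
noise_bin1 = background_stddev("stack_bin1_scaled.fits", 100, 300, 100, 300)
print(noise_bin1 / noise_bin2)  # ~1.07 per the measurement quoted above
```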

 

So indeed, binning before stacking gives a slightly lower noise level on the unprocessed image - I must concede that!

 

An interesting question is whether the denoising algorithms work better with the bin1 images or the bin2 images (which have, in a sense, already been filtered for small-scale noise). Here is a completely unscientific comparison, with bin2 on the left and bin1 on the right, and no processing except NoiseX:

[image: NoiseX-only comparison - bin2 (left) vs bin1 (right)]

The difference in detectability of faint structure is rather small to my eyes.

 

In any case, for me the significant resolution increase of the bin1 image is much more important than its possible 7% noise penalty (which could easily be compensated by shooting more subs).

Edited by Dan_Paris

8 hours ago, Dan_Paris said:

The bin1 image is obviously more detailed,

Can you point to the detail that is present in the bin1 image and not in the bin2 image?

A pair of stars that is resolved in one vs the other, or a feature visible in bin1 but not in bin2?

 

The only thing that I can see in the bin2 image are signs that you did not handle it properly. It was scaled up to the size of the bin1 image using crude nearest-neighbour resampling instead of a sophisticated algorithm like Lanczos.


@Dan_Paris

Just in case you don't understand what I meant by proper handling - have a look at this:

[image: the same image enlarged with nearest neighbour (pixelated) vs Lanczos (smooth)]

The same image can be enlarged with a crude algorithm like nearest neighbour, where pixelation artifacts are visible, but it can also be enlarged with a sophisticated algorithm like Lanczos.

There is quite a striking difference between the two, right?

Your image of the bin2 data shows pixelation artifacts, which means you were not careful about how you enlarged it for comparison with the other image.
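This comparison is straightforward to reproduce. A sketch using Pillow 9.1+ (the file name is a placeholder):

```python
from PIL import Image

img = Image.open("bin2_stack.png")        # hypothetical file name
target = (img.width * 2, img.height * 2)  # enlarge 2x to match the bin1 image

# Crude: each output pixel copies the nearest input pixel (blocky result)
img.resize(target, Image.Resampling.NEAREST).save("bin2_nearest_2x.png")

# Sophisticated: windowed-sinc interpolation (smooth result)
img.resize(target, Image.Resampling.LANCZOS).save("bin2_lanczos_2x.png")
```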


14 minutes ago, vlaiv said:

Your image of bin2 data shows artifacts of pixelation - which means you were not careful of how you enlarge it for comparison with other image.

 

Well, here is a comparison with the bin2 image enlarged with Lanczos:

[image: bin1 vs bin2 comparison, with the bin2 image enlarged with Lanczos]

 

To my eyes the difference in resolution between bin1 and bin2 is still obvious.

 

Edited by Dan_Paris

7 minutes ago, vlaiv said:

Ones in the first post you made.

I suspect it is from the last one from yesterday, which is not relevant to the discussion about resolution (bin1 was scaled down to bin2 after noise reduction, for noise comparison purposes).

If you view today's image full screen, don't you agree that the difference in resolution is plainly obvious?

Edited by Dan_Paris

@Dan_Paris

This is why I asked you to show me a feature or star pair that is resolved in one image and not the other.

The level of processing and handling of the data will make the apparent sharpness differ. Even the presence of noise can make something look sharper - but that does not mean you actually have a distinction in the data.

 


@Dan_Paris

Have a look at this and tell me what you think:

[image: bin1 image and its Fourier transform, before and after removing the high frequencies]

The top row is the bin1 image and its Fourier transform.

The bottom right image is that same Fourier transform with all the values above half the sampling frequency (the frequency that corresponds to x2 coarser sampling) set to zero. I effectively removed all the higher frequencies that would not be captured if you sampled at half the current rate.

Then I did an inverse FT of that - which is the bottom left image. Can you tell the difference in resolution between the top and bottom images?

And mind you, this was even done on processed data, not on 32-bit floating point with much higher precision, yet the result is evident.
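The operation described here is simple to reproduce. A numpy sketch (using a square frequency cut; a circular one works just as well):

```python
import numpy as np

def lowpass_to_half_rate(img):
    """Zero every spatial frequency that x2 coarser sampling could not
    record, then invert the transform. If the result looks identical to
    the input, nothing real was captured in those frequencies."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    mask = np.zeros((h, w))
    mask[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4] = 1  # central half per axis
    return np.fft.ifft2(np.fft.ifftshift(f * mask)).real
```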


51 minutes ago, vlaiv said:

Just for reference - this happens to a properly sampled image when you do it:

[image: the same frequency cut applied to a properly sampled image]

Not sure exactly what you're trying to show, but on my device the top leaf is much sharper by eye than the lower one, which also shows artifacts. Regards, Andrew

Edited by andrew s
  • Like 1
