
Resolution ("dummy" level responses, please)



2 hours ago, drjolo said:

Here is 100 minutes of Ha made with the same camera, filter, mount, location and similar conditions (though not on the same night). In my notes I have written that the FWHM was in the range 2.3-2.4" in the long-exposure frames. The only difference is the scope used:

 - on the left, a 130mm APO triplet with a pixel scale of 1.04"/px
 - on the right, a Meade ACF 10" scope with a pixel scale of 0.44"/px

The ACF image is resized to 50% and the APO image is aligned to it.

[Image: side-by-side comparison - 130mm APO (left) vs 10" Meade ACF (right)]

What is immediately visible is, of course, the lower noise in the 10" scope image due to the larger aperture. But the stars are also smaller, and in my opinion the detail is better in the right image - the one from the 10" scope.

I agree with you that the right hand image is better, but is that down to better SNR or better resolution?

Cheers, Ian


17 minutes ago, kens said:

It depends on where the binning is happening. With a CCD camera it is on the sensor and before debayering. For the ASI1600 (ignoring its hardware binning option) it happens in the driver. I suspect/expect that is also before any debayering. In post-processing it is after debayering.

I'm going to have to go away and try to get my mind around that and its implications. My gut-reaction (always liable to be wrong) is that it shouldn't make any difference. The pixels are still 24-bit and hold the RGB data in their respective 8-bit sections. I will go away and create a hypothetical superpixel with some numbers, combining them at different stages and see if that makes it clearer to me.

I may be some time ...


13 minutes ago, Demonperformer said:

So there is no saving in time (whether 1/4 or 1/2 as discussed in the previous posts)?

Correct - for CMOS anyway. You could try increasing gain to lower the read noise and thereby shorten exposures.

13 minutes ago, Demonperformer said:

So this is not going to be a major contributing factor to my imaging - I would normally want to be exposing to a point where the histogram is further over to the right than that anyway (assuming it runs from 0 to 65535, as you have converted 12-bit to 16-bit in your calculation)?

Yes - I always work in 16 bits (0-65535) since that's what you measure on your images. Note that 10·RN² is the most you really need. Less can still give good results. By exposing further to the right you are unnecessarily increasing your sub-exposure time and decreasing dynamic range. I suspect that habit is a relic of high read noise cameras.
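To put a rough number on that 10·RN² target, here is a minimal sketch; the read noise, gain and 12-bit ADC below are assumed example values, not taken from this thread - substitute your own camera's figures:

```python
# Minimal sketch of the 10·RN² "swamp the read noise" target. The read noise,
# gain and 12-bit ADC below are assumed example values.

def target_background_adu16(read_noise_e, gain_e_per_adu, adc_bits=12):
    """Sky background level (16-bit ADU, above bias) where sky noise dominates read noise."""
    target_e = 10 * read_noise_e ** 2            # electrons of sky signal per pixel
    target_adu = target_e / gain_e_per_adu       # native ADU at the camera's bit depth
    return target_adu * 2 ** (16 - adc_bits)     # rescale to 16-bit ADU

# Example: read noise 2.9 e-, gain 1 e-/ADU, 12-bit converter
print(round(target_background_adu16(2.9, 1.0)))  # ~1346 ADU, about 2% of 65535
```

With these example numbers the target comes out close to the ~1350 (16-bit) figure mentioned later in the thread - in other words, the histogram peak only needs to sit a couple of percent of the way across.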


2 minutes ago, Demonperformer said:

I'm going to have to go away and try to get my mind around that and its implications. My gut-reaction (always liable to be wrong) is that it shouldn't make any difference. The pixels are still 24-bit and hold the RGB data in their respective 8-bit sections. I will go away and create a hypothetical superpixel with some numbers, combining them at different stages and see if that makes it clearer to me.

I may be some time ...

No. Look at it this way: first there is the Bayer matrix, and it does not hold 8-bit data - it holds 12- or 16-bit data.

So we have

R G

G B

for example.

There is no "color" at this stage, it is like looking at black and white image, so one 16bit (let's stick with 16bit vs 12bit or something else for the moment). Only way that you can think of color at this stage is like using normal R, G and B filters and shooting mono image, but R filter is applied to top left pixel, G filter to top right and bottom left, and B filter is applied to bottom right.

Now you want to create an RGB triplet of values, and you can do it in several ways. One way is to "make up" the missing values to create a triplet in each place (it is an educated guess - this is normal debayering, linear interpolation or something else).

So you start with the above and look at it like this:

( R , _ , _ ) ( _ , G , _ )

( _ , G , _ ) ( _ , _ , B )

Now the debayering algorithm calculates the missing values and makes each triplet complete, giving you the "normal" color pixels you are used to - but still not 24 bit (8, 8, 8), rather (16, 16, 16), making it 48 bits per pixel.

The other option is the "superpixel".

Instead of making up values, each group of 4 pixels is replaced with a single pixel, lowering the resolution by a factor of 2 (it looks like binning but it is not - there is, however, a similarity between the G component and software binning). How is this superpixel triplet formed?

Very simply:

( R, (G1+G2)/2, B ) - R and B are the 16-bit values from the matrix above, and G1 and G2 are the top right and bottom left 16-bit values. The resulting pixel will have (16, 17, 16) bits, or (16, 16, 16) if the software truncates the LSB of the resulting green value.
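For the record, a minimal numpy sketch of this superpixel approach, assuming an RGGB mosaic held as a 16-bit array (the function name and layout are my own illustration):

```python
import numpy as np

def superpixel_debayer(raw):
    """Superpixel debayer of an RGGB mosaic: each 2x2 Bayer cell becomes one
    RGB pixel, halving resolution. Nothing is interpolated; the two greens
    are simply averaged."""
    r  = raw[0::2, 0::2].astype(np.float32)   # top-left of each cell
    g1 = raw[0::2, 1::2].astype(np.float32)   # top-right
    g2 = raw[1::2, 0::2].astype(np.float32)   # bottom-left
    b  = raw[1::2, 1::2].astype(np.float32)   # bottom-right
    return np.dstack([r, (g1 + g2) / 2.0, b]) # H/2 x W/2 x 3 colour image

# Example: a fake 4x4 RGGB frame
mosaic = np.arange(16, dtype=np.uint16).reshape(4, 4)
print(superpixel_debayer(mosaic).shape)       # (2, 2, 3)
```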

 

 


8 minutes ago, Demonperformer said:

I'm going to have to go away and try to get my mind around that and its implications. My gut-reaction (always liable to be wrong) is that it shouldn't make any difference. The pixels are still 24-bit and hold the RGB data in their respective 8-bit sections. I will go away and create a hypothetical superpixel with some numbers, combining them at different stages and see if that makes it clearer to me.

I may be some time ...

The pixels on the sensor don't have any bits. Each is a "well" that holds electrons converted from photons. CCD binning dumps the electrons from the 4 pixels into one well, which is then converted to a voltage, and that in turn is converted to an ADU number. That readout, by the way, is the process that introduces read noise.

Luckily Vlad has described debayering better than I could.
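To put a rough number on the read-noise point above: combining the charge before a single readout pays off compared with reading each pixel and summing afterwards. A small sketch, with an assumed read noise value rather than a real camera figure:

```python
import math

# Hardware (CCD) binning: the 4 wells are summed in charge, then read out once,
# so the binned pixel carries a single dose of read noise.
# Software binning: each pixel is read out first, so 4 doses of read noise add
# in quadrature. The read noise value below is an illustrative assumption.
read_noise = 5.0                                 # electrons per readout

hardware_bin_noise = read_noise                  # one readout of the combined charge
software_bin_noise = math.sqrt(4) * read_noise   # four readouts, summed in quadrature

print(hardware_bin_noise, software_bin_noise)    # 5.0 vs 10.0 electrons
```

So a hardware-binned pixel carries one dose of read noise, while a software-binned one carries twice that.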


21 minutes ago, iansmith said:

I agree with you that the right hand image is better, but is that down to better SNR or better resolution?

Hard to say. I would need to stack a different number of subframes to get similar SNR and then compare. I can try to do it.


36 minutes ago, Demonperformer said:

either way, it would appear to be worth the effort of using the bigger scope.

I guess from a practical point of view it answers your original question. A bigger scope gets you a better picture :)

Cheers, Ian


17 minutes ago, drjolo said:

Hard to say. I would need to stack a different number of subframes to get similar SNR and then compare. I can try to do it.

If you have the time it would be interesting to see if there's still a difference with similar SNR.

Cheers, Ian


58 minutes ago, kens said:

Correct - for CMOS anyway. You could try increasing gain to lower the read noise and thereby shorten exposures.

Yes - I always work in 16 bits (0-65535) since that's what you measure on your images. Note that 10·RN² is the most you really need. Less can still give good results. By exposing further to the right you are unnecessarily increasing your sub-exposure time and decreasing dynamic range. I suspect that habit is a relic of high read noise cameras.

Thanks. Those are definitely two new things I have learned in this thread.


Another strange thing is that aperture has different consequences for point sources than for extended ones. I don't think the nebulosity is better resolved in the big scope image but the stars are smaller and the noise is lower.

When I image using the TEC140 and the Tak 106 pair I find the most striking difference is that the 140 produces much smaller stars and more numerous ones - considerably more numerous. 

Interesting that Drjolo is emphasizing the importance of aperture, not F ratio. I wonder why that gladdens my heart??? :evil4:

Olly

 


3 hours ago, vlaiv said:

I see that a sampling rate of seeing / 3 is mentioned as a good sampling rate.

I would disagree with this. Proper sampling, considering the star FWHM in the image, should be FWHM * 0.622, so roughly double the suggested arc seconds per pixel (meaning lower resolution). This level of sampling cuts off frequencies that are at 1% or less in the power spectrum, resulting in a contrast loss of only a few percent (the sum of all lost frequencies) - it will certainly be buried in the noise (we very rarely get SNR over 100).

While a higher resolution will capture those frequencies with power <= 1%, they will not show up in the image, because in post processing we tend to bring out the signal while keeping the noise down, so any difference in contrast that is at the level of the noise (or below it) will not be noticeable.

As a newbie imager, I wonder if you would mind clarifying whether you are saying that there is actually a disadvantage to oversampling or simply no advantage. 


2 hours ago, vlaiv said:

No. Look at it this way: first there is the Bayer matrix, and it does not hold 8-bit data - it holds 12- or 16-bit data.

So we have

R G

G B

for example.

There is no "color" at this stage, it is like looking at black and white image, so one 16bit (let's stick with 16bit vs 12bit or something else for the moment). Only way that you can think of color at this stage is like using normal R, G and B filters and shooting mono image, but R filter is applied to top left pixel, G filter to top right and bottom left, and B filter is applied to bottom right.

Now you want to create an RGB triplet of values, and you can do it in several ways. One way is to "make up" the missing values to create a triplet in each place (it is an educated guess - this is normal debayering, linear interpolation or something else).

So you start with the above and look at it like this:

( R , _ , _ ) ( _ , G , _ )

( _ , G , _ ) ( _ , _ , B )

Now the debayering algorithm calculates the missing values and makes each triplet complete, giving you the "normal" color pixels you are used to - but still not 24 bit (8, 8, 8), rather (16, 16, 16), making it 48 bits per pixel.

The other option is the "superpixel".

Instead of making up values, each group of 4 pixels is replaced with a single pixel, lowering the resolution by a factor of 2 (it looks like binning but it is not - there is, however, a similarity between the G component and software binning). How is this superpixel triplet formed?

Very simply:

( R, (G1+G2)/2, B ) - R and B are the 16-bit values from the matrix above, and G1 and G2 are the top right and bottom left 16-bit values. The resulting pixel will have (16, 17, 16) bits, or (16, 16, 16) if the software truncates the LSB of the resulting green value.

Thank you for this response.

Firstly, I totally accept what you say about not being 8-bit. I'm used to working with 24-bit rgb, so that is where I naturally drifted, but, of course, if each colour is 12 or 16 bit, the combined result has that much greater bit-width.

Your explanation of how the system puts the data together makes a lot of sense. Indeed, your "superpixel" explanation does a much better job of explaining what I was thinking of when I started this thread (how four pixels become one coloured pixel in the end result) than I did. Obviously I was thinking of binning!

I note that in the first option (normal pixel) 1/3 of the data is observed and 2/3 is calculated, whereas in the second option (superpixel) 2/3 of the data is observed and only 1/3 calculated. I guess that with a decent algorithm, that doesn't make a lot of difference. The method I thought was being applied (basically option 1 first, then option 2 on the result) would end up with no observed data and all of it calculated - not recommended for improving results!

It also explains why only 2×2 "binning" is offered for my colour camera, compared to up to 4×4 on my mono.

Thanks.


I would just like to take the opportunity of thanking everyone who has contributed to this thread. If I haven't responded individually to your comment, that does not mean that it has not been appreciated.

It has come quite a long way from my starting point and has provided information that (I hope) will gradually improve my imaging.

(1) There is a difference between "seeing" and "image scale" - it's always good to get one's terminology straight

(2) The Bayer matrix is a lot more complex than I thought. Although an understanding of how it works is not essential for using it, an improved understanding will no doubt help me avoid trying to do stupid things with it that it was not designed to do. [Don't you just admire optimism?!]

(3) The extra effort of loading the bigger scope onto the mount will be worth it in terms of better results.

(4) I need to be taking much shorter subs than I had previously realised, so that the peak of the histogram is only about 2% (1350/65536) of the way across. This is one of those things that is totally counter-intuitive to me (a victim of the "longer subs are better" propaganda!). Just how much shorter they need to be, I will have to find out by experiment. One major advantage of this may be that I can do away with guiding altogether (especially with the small frac). One major disadvantage will be the enormous number of 32MB subs I am going to accumulate during a session. Still, lots of subs = less noisy results (go on, someone, burst that bubble as well!).

(5) Because of the above, binning (as a general rule) is something I should avoid. But then, with such short subs being optimal anyway, why would I need it?

Anyway, thanks to everyone for all your help on this.


I'll probably need to revise that figure of 0.622 - it is based on 10% rather than 1% (an error in the calculation, I forgot one 0 and ... :D ), but the principle is the same: there is a formula behind it, one just plugs in the desired percentage and gets the sampling rate needed to record that percentage.

Here is the explanation. Mathematically, when we talk about seeing (and guiding error) blur, it is approximated with a Gaussian function - this is why we use FWHM and why programs that locate stars use Gaussian fitting (other functions have been proposed that may fit the data even better, like Moffat or Lorentzian, but for this purpose a Gaussian is good enough).

When you blur an image with a certain kernel (just an image of a function) you are in essence doing the mathematical operation of convolution. There is a theorem that says convolution of functions is equivalent to multiplication of their Fourier transforms. This really means that blurring is in fact multiplication of the different frequencies by some factor (less than 1) - so you are turning down some of the higher frequencies. It is a bit like an old stereo with an equalizer - where you could amplify or mute certain frequencies - and it would be very noticeable in the sound you were listening to. If you remember those, high frequencies in sound correspond to hiss and to the clarity of voices and instruments, for example, while in an image high frequencies represent smaller and smaller detail and contrast (again, clarity of sorts).
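A tiny numpy sketch of that equivalence (the array size, star positions and blur width are arbitrary illustrative choices):

```python
import numpy as np

# Sketch of the convolution theorem: blurring a 1-D "image" with a Gaussian
# (circular convolution) gives the same result as multiplying the Fourier
# transforms of the image and the kernel.
n = 256
signal = np.zeros(n); signal[100] = 1.0; signal[160] = 0.5        # two "stars"
dist = np.minimum(np.arange(n), n - np.arange(n))                 # circular distance from pixel 0
kernel = np.exp(-dist**2 / (2 * 3.0**2)); kernel /= kernel.sum()  # Gaussian blur, sigma = 3 px

# Blur directly in the spatial domain (circular convolution) ...
blurred = np.array([sum(signal[j] * kernel[(k - j) % n] for j in range(n)) for k in range(n)])
# ... and by multiplying Fourier transforms instead
via_fft = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(kernel)))

print(np.allclose(blurred, via_fft))                              # True
```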

Now there are two important things about imaging with a telescope. The first is that there is a cut-off frequency after which no detail is going to be visible no matter what you do. This is due to the physics of light and the size of the telescope's aperture. There is a graph that explains it:

[Graph: MTF of a telescope - theoretical maximum (blue) vs measured (red)]

This particular graph compares the theoretical maximum (blue line) and the measured curve (red line) - the latter depends on the quality of the optics, the central obstruction, etc.

Again you can see the resemblance to an equalizer:

[Image: graphic equalizer frequency bands, for comparison]

So this graph shows that certain frequencies are cut down by a certain percentage, but also that beyond the cut-off frequency there are no higher frequencies at all in the signal (image) that the telescope delivers.

Now, the Nyquist theorem says that if you want to record all frequencies up to a certain frequency, you need to take 2 samples per cycle of the highest frequency component. From this the critical resolution / sampling for planetary photography is derived. There is simply no point in going higher and oversampling, as no additional detail is going to be visible. It is also related to the maximum useful magnification of the telescope.
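As an aside, a sketch of what that critical sampling works out to for a diffraction-limited aperture; the aperture and wavelength below are assumptions chosen just for illustration:

```python
import math

# The telescope's cut-off frequency is D/lambda cycles per radian, and Nyquist
# asks for two samples per cycle, so the critical pixel scale is lambda/(2D).
def critical_sampling_arcsec_per_px(aperture_m, wavelength_m):
    rad_to_arcsec = 206265.0
    return rad_to_arcsec * wavelength_m / (2.0 * aperture_m)      # lambda / (2 D), in arcsec

print(round(critical_sampling_arcsec_per_px(0.25, 550e-9), 3))    # ~0.227 "/px for 250 mm at 550 nm
```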

Now back to long exposure astrophotography. The graph above shows the "equalizer" for the Airy function, but in long exposure astrophotography the main blur comes from seeing / guiding errors and behaves like a Gaussian curve / blur. What is interesting is that the Fourier transform of a Gaussian curve is again a Gaussian curve, so the "equalizer" above would look like this:

[Graph: Gaussian MTF - a smooth curve that falls off with frequency but never reaches zero]

So it is not a straight line like with the Airy pattern - it slopes a bit, and there is an important thing about a Gaussian: it is never 0, it just gets lower and lower as you go to the right, but it never reaches 0.

If we were talking about a pure Gaussian function, this would mean there is no limiting resolution, and using ever higher resolution would always show some additional detail. But the true PSF of a star is not a perfect Gaussian - it consists of seeing, guide error and the Airy disk. This means that we are only using the Gaussian function as an approximation. Now what do we do with it? It is very simple. We choose a small number like 1% or 5% and we decide that all frequencies attenuated more than this are not going to end up in the image, because of the noise and the fact that we are only using an approximation. This is our cut-off frequency for sampling. We then take 2 samples per cycle of the cut-off frequency, and we will be recording that frequency and all lower frequencies in the image. So we have got most (like 99%) of the detail that is theoretically possible (and in practice even 99% is questionable).
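Here is my own reconstruction of that calculation (not vlaiv's code): take a Gaussian PSF of a given FWHM, find the frequency where its MTF drops to the chosen threshold, and sample it at two samples per cycle. A 10% threshold reproduces the 0.622 figure; the 1% threshold mentioned in the correction above gives roughly 0.44:

```python
import math

# Pixel scale from star FWHM for a Gaussian PSF: find the frequency where the
# MTF falls to `mtf_threshold`, then apply Nyquist (two samples per cycle).
def pixel_scale_from_fwhm(fwhm_arcsec, mtf_threshold):
    sigma = fwhm_arcsec / (2.0 * math.sqrt(2.0 * math.log(2.0)))   # FWHM -> sigma
    # MTF of a Gaussian: exp(-2 * pi^2 * sigma^2 * f^2); solve for the cut-off
    f_cut = math.sqrt(math.log(1.0 / mtf_threshold) / (2.0 * math.pi**2 * sigma**2))
    return 1.0 / (2.0 * f_cut)                                     # arcsec per pixel at Nyquist

fwhm = 1.0
print(round(pixel_scale_from_fwhm(fwhm, 0.10), 3))   # ~0.622 * FWHM (the 10% figure)
print(round(pixel_scale_from_fwhm(fwhm, 0.01), 3))   # ~0.44 * FWHM (what a 1% threshold gives)
```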

Now, why is it a bad thing to oversample? It is not necessarily bad. I'm working on some techniques to actually try to recover all detail up to the limiting resolution of the telescope (the critical resolution mentioned earlier), and oversampling could prove to help there, but that is a very special case. In the general case, when imaging and trying to capture a nice and smooth image, one wants as much SNR as can be achieved in the given imaging time.

Oversampling will always lower SNR, and that is the main reason why we want to go for the optimum - not waste SNR, but still capture what can sensibly be captured in the image.
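A back-of-the-envelope illustration of that SNR cost (my own, shot noise only; read noise would widen the gap further, and the flux number is arbitrary):

```python
import math

# The same light spread over 2x finer sampling lands on 4x as many pixels,
# so per-pixel signal drops 4x and shot-noise-limited per-pixel SNR drops 2x.
flux_per_patch = 10000.0                          # photons falling on a patch of sky

for pixels_per_patch in (1, 4):                   # optimal vs 2x oversampled
    signal = flux_per_patch / pixels_per_patch    # photons per pixel
    snr = signal / math.sqrt(signal)              # shot-noise-limited SNR = sqrt(signal)
    print(pixels_per_patch, round(snr, 1))        # 1 -> 100.0, 4 -> 50.0
```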


12 minutes ago, Demonperformer said:

Thank you for this response.

Firstly, I totally accept what you say about not being 8-bit. I'm used to working with 24-bit rgb, so that is where I naturally drifted, but, of course, if each colour is 12 or 16 bit, the combined result has that much greater bit-width.

Your explanation of how the system puts the data together makes a lot of sense. Indeed, your "superpixel" explanation does a much better job of explaining what I was thinking of when I started this thread (how four pixels become one coloured pixel in the end result) than I did. Obviously I was thinking of binning!

I note that in the first option (normal pixel) 1/3 of the data is observed and 2/3 is calculated, whereas in the second option (superpixel) 2/3 of the data is observed and only 1/3 calculated. I guess that with a decent algorithm, that doesn't make a lot of difference. The method I thought was being applied (basically option 1 first, then option 2 on the result) would end up with no observed data and all of it calculated - not recommended for improving results!

It also explains why only 2×2 "binning" is offered for my colour camera, compared to up to 4×4 on my mono.

Thanks.

Actually in superpixel mode everything is observed rather than calculated. It is just that G is observed twice and the average of the two observations is taken, thus improving the result :D

I don't really understand how hardware binning works with a Bayer matrix (what the mechanics of it are). I suspect how it might work, but I'm not sure. Anyway, I think that binning of color data happens before debayering and works pretty much the same as regular binning - just summing up the respective adjacent values:

(R1 G1) (R2 G2)

(G1 B1) (G2 B2)

(R3 G3) (R4 G4)

(G3 B3) (G4 B4)

Becomes

(R1+R2+R3+R4 G1+G2+G3+G4)

(G1+G2+G3+G4 B1+B2+B3+B4)

So regular debayering of a binned image would lower the resolution by 2 (the binning does that, and the algorithm calculates the missing values), while superpixel debayering of a binned image would lower the resolution by 4 - first the binning lowers it by 2 and then superpixel does it again on the resulting picture (without even being aware it was binned in the first place).
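A small numpy sketch of the Bayer-aware binning described above, assuming an RGGB layout (the function name is my own):

```python
import numpy as np

# Same-colour pixels from a 2x2 block of Bayer cells (a 4x4 pixel region) are
# summed, so the output is still an RGGB mosaic at half the resolution.
def bin2x2_bayer(raw):
    h, w = raw.shape
    out = np.zeros((h // 2, w // 2), dtype=np.float32)
    for dy in (0, 1):               # position within a Bayer cell (row)
        for dx in (0, 1):           # position within a Bayer cell (column)
            plane = raw[dy::2, dx::2].astype(np.float32)       # one colour plane
            binned = plane[0::2, 0::2] + plane[0::2, 1::2] \
                   + plane[1::2, 0::2] + plane[1::2, 1::2]     # sum 2x2 same-colour pixels
            out[dy::2, dx::2] = binned                         # put back into a smaller mosaic
    return out

mosaic = np.arange(64, dtype=np.uint16).reshape(8, 8)
print(bin2x2_bayer(mosaic).shape)   # (4, 4): still a Bayer mosaic, half the resolution
```

The output is still a Bayer mosaic, so it can then be debayered normally or by superpixel, exactly as described above.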


4 minutes ago, vlaiv said:

Actually in superpixel mode everything is observed rather than calculated.

Ah, yes, the G is averaged, but that is still using observed data, whereas the other calculations create a new data value where none existed before.


8 minutes ago, dph1nm said:

I believe there are/were some colour CCDs which could perform true hardware colour binning (e.g. the KAI-10100 CCD) with some nifty clocking of the output.

NigelM

I guessed that it might be something like that - a double clock rate with gates that direct the electrons into separate measuring wells - but I suspect it has slightly higher read noise than a straight readout because of all that.

