
Does a longer exposure shot have more detail than multiple short ones?



For 2000x1 sec exp it's an excellent image. No doubt that 16 inch mirror helped :)

9 minutes ago, vlaiv said:

I guess that is a pretty good calculation?

Yes it is.

Did I fill in the data ok for a refractor?

[image: test4.png - the filled-in calculation for a refractor]


12 minutes ago, dan_adi said:

For 2000x1 sec exp it's an excellent image. No doubt that 16 inch mirror helped :)

Yes it is.

Did I fill in the data ok for a refractor?

[image: test4.png - the filled-in calculation for a refractor]

That is about right. In fact, if you want to be precise, you should take into account whether it is a doublet or a triplet and count each air/glass surface (4 for a doublet, 6 for a triplet - air spaced, of course) and put something like 99.7% for each. But using 0.99 twice is a good approximation. Those numbers are there because my RC8" has dielectric mirrors that allegedly have 99% reflectivity.


46 minutes ago, vlaiv said:

That is about right. In fact, if you want to be precise, you should take into account whether it is a doublet or a triplet and count each air/glass surface (4 for a doublet, 6 for a triplet - air spaced, of course) and put something like 99.7% for each. But using 0.99 twice is a good approximation. Those numbers are there because my RC8" has dielectric mirrors that allegedly have 99% reflectivity.

There are a lot more losses than just from the mirrors/lenses - don't forget to take into account the flattener/coma corrector, the filter and the cover glass on the sensor.


On 29/02/2020 at 13:36, vlaiv said:

Depends on what you mean by detail.

If you mean signal, or what we call depth - that is a rather controversial topic - I'll put forward my view on it:

- a short exposure has as much signal as a long exposure, even a single short exposure - this is the controversial part, as many people will disagree with me on this one

- a short exposure has much more noise.

- Signal-to-noise ratio is what matters for what we call the depth of the image. The difference between many short exposures and few long exposures - all adding up to the same total time - is in the read noise of the camera; everything else is the same. If one had a camera with zero read noise, there would be no difference between the two (many short vs few long).

Since there is no such thing as a camera without read noise, fewer long subs always win in terms of SNR over many short subs - but not always by the same "amount". If the read-noise term is small compared to other noise sources - and that happens when light pollution is very strong (LP noise), the target is very bright (target shot noise), or the camera is uncooled and running hot (thermal noise) - then there will be very little difference between many short subs and few long subs.

If you have a cooled camera, dark skies and a faint target, then read noise is not negligible and there will be quite a bit of difference between short subs and long subs.
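To make the read-noise argument concrete, here is a minimal Python sketch. All of the rates below (signal, sky, dark, read noise) are hypothetical, illustrative values rather than figures from any particular camera:

```python
import math

def stack_snr(signal_e, sky_e, dark_e, read_noise_e, sub_s, total_s):
    """SNR of a stack of identical subs; rates are electrons/second/pixel.

    Target, sky and dark shot noise depend only on total integration
    time; read noise is paid once per sub, so shorter subs add more of it.
    """
    n_subs = total_s / sub_s
    signal = signal_e * total_s
    variance = (signal_e + sky_e + dark_e) * total_s + n_subs * read_noise_e ** 2
    return signal / math.sqrt(variance)

# One hour total, split into many short vs few long subs
for sub_length in (10, 600):
    snr = stack_snr(0.2, 1.0, 0.01, 2.0, sub_length, 3600)
    print(f"{sub_length:>4} s subs: SNR = {snr:.2f}")
```

With the read noise set to zero the two results are identical, exactly as claimed above, and raising the sky term shrinks the gap - the strong light pollution case.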

But there is another meaning of detail - the actual resolution, or level of detail captured - and here we can argue that many short subs give better detail than few long subs.

Again, things are not quite that simple, but there are two important factors to consider when we talk about level of detail - the atmosphere and mount precision. Both reduce the level of detail (in terms of sharpness) in the image, and here shorter subs have the advantage because they don't accumulate as much blur as a long sub. Of course, that depends on the atmosphere, the mount and the respective exposures, but yes - there is a technique called lucky DSO imaging that exploits this: it uses many, many very short exposures - like 0.5 s, and tens of thousands of them - to produce very sharp images of DSO objects.

For example this:

[image: 20160505_M51_2000x1s_AutoStakkert_ASI160]

This image is made from 2000 one-second exposures with a dobsonian telescope mounted on an EQ platform.

Have a look at more images from Emil here:

http://www.astrokraai.nl/viewimages.php?t=y&category=7

(btw, that is the author of the planetary stacking software AS2/3!)

While this is obviously a very good image, it is not an exceptionally deep one. There is considerably more outer halo than we see here. The total exposure time is short, at 50 minutes, but the aperture is large. It would be interesting to see how much longer it would have taken to capture a fuller halo.

Olly


36 minutes ago, Xplode said:

There are a lot more losses than just from the mirrors/lenses - don't forget to take into account the flattener/coma corrector, the filter and the cover glass on the sensor.

It makes sense, obviously. But what is the percentage of light loss for each of those elements? Is it really a lot (10-20%) or more like 1-2%?


34 minutes ago, Xplode said:

There are a lot more losses than just from the mirrors/lenses - don't forget to take into account the flattener/coma corrector, the filter and the cover glass on the sensor.

Yes, it's just an approximation. We did not take into account atmospheric extinction, nor do we have exact light pollution info. We did not account for calibration frames either.

As an approximation I would say it's pretty good. Whether the actual SNR in that region is 2 or 3 does not matter - it is detectable, and indeed it shows in the image. A signal at mag 23.5 would have SNR of 1 or less, which is just at the noise level - hence it would not be seen in the image, and indeed it does not show.

8 minutes ago, ollypenrice said:

While this is obviously a very good image, it is not an exceptionally deep one. There is considerably more outer halo than we see here. The total exposure time is short, at 50 minutes, but the aperture is large. It would be interesting to see how much longer it would have taken to capture a fuller halo.

I think it only takes switching from 1 s exposures to something more meaningful for capturing deep faint stuff - like 2-3 minutes, when not going after resolution - to see a significant improvement in the outer regions.

Going with 10 x 200 seconds would capture a mag 24 signal at SNR 2.2 - that would start to show the outer tails (although at the threshold of detection).
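A back-of-envelope version of that kind of calculation, as a Python sketch. The mag 0 photon flux is a commonly quoted broadband approximation, and the sky rate and read noise are stand-in values, so the exact SNR will differ from the 2.2 quoted above:

```python
import math

# Rough broadband approximation: a mag 0 star delivers about
# 1000 photons/s/cm^2/Angstrom over a ~880 Angstrom visual band.
PHOTONS_MAG0 = 8.8e5  # photons / s / cm^2

def electrons_per_second(mag, aperture_cm, qe=0.5, optics=0.95):
    """Detected electron rate from a source of the given magnitude."""
    area = math.pi * (aperture_cm / 2) ** 2
    return PHOTONS_MAG0 * 10 ** (-0.4 * mag) * area * qe * optics

# 16" (40.6 cm) aperture, mag 24 signal on one pixel, 10 x 200 s stack,
# with 5 e/s sky and 2 e read noise as stand-ins
sig = electrons_per_second(24, 40.6)
total_s, n_subs, sky, rn = 2000, 10, 5.0, 2.0
snr = sig * total_s / math.sqrt((sig + sky) * total_s + n_subs * rn ** 2)
print(f"signal rate {sig:.3f} e/s, stack SNR = {snr:.1f}")
```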

 


2 minutes ago, dan_adi said:

It makes sense, obviously. But what is the percentage of light loss for each of those elements?

Uncoated glass loses around 4% per surface due to reflections - an uncoated camera lens could lose 50% of the light through reflections!

It's hard to say exactly because of the different coatings used by different manufacturers, but I don't think 1-2% per surface would be a bad number to use.
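As a quick sanity check on those figures, assuming roughly 17 air/glass surfaces for a complex camera lens (an illustrative count, not any specific lens):

```latex
T_{\text{uncoated}} = 0.96^{17} \approx 0.50, \qquad
T_{\text{coated}} = 0.99^{17} \approx 0.84
```

which matches the roughly 50% loss quoted above for an uncoated lens.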


15 minutes ago, Xplode said:

Uncoated glass loses around 4% per surface due to reflections - an uncoated camera lens could lose 50% of the light through reflections!

I guess that should already be accounted for in the measured QE of the sensor, since you can't measure sensor efficiency without the microlenses?

[image: _quantum_asi1600mm-808.png - measured absolute QE curve of the ASI1600MM]

Above, the black line is the measurement of absolute QE by Christian Buil. I doubt that he removed the sensor cover glass or the microlenses from the pixels. Peak QE is ~60% and I would say average QE is around 50% in the 400-700 nm range?

You'll see that I put 50% as the QE in the above calculations, so I guess it has been accounted for?


If you guys don't mind me asking something ... :) The past few days I've been reading about the influence of seeing on telescope resolution. As I understand the theory, in average seeing (2-4 arcseconds, as most of us have in our backyards) it makes little sense to observe/image with a telescope larger than 8-10 inch. If you want a large scope you must pair that instrument with very good seeing, so the scope can operate as close as possible to its theoretical resolution.

At the same time, a bigger scope will present a brighter image. So a 16 inch scope in average seeing will give a brighter image but no more detail/resolution than an 8 inch scope.

Did I understand the theory correctly?


20 minutes ago, Xplode said:

It's hard to say exactly because of the different coatings used by different manufacturers, but I don't think 1-2% per surface would be a bad number to use.

Most coatings bring that down to less than one percent. Here is a graph for one coating type:

[image: reflectivity vs wavelength for one coating type]

It also varies with angle - so slower scopes have an advantage there :D

I think it is safe to say that we have something like 99.5%, or 99.2% if you want to be conservative, per air/glass surface. For a coma corrector with three elements that is a total loss of something like 5%. I don't think that is significant in the above approximation. Just using 406 mm instead of 400 mm for the aperture (a true 16") changes the amount of light by about the same in the other direction - about 3% more light.
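The whole throughput budget is just a product of per-surface and per-mirror factors. A small Python sketch using the numbers from this thread (the element counts are examples, not a specific optical train):

```python
def throughput(n_surfaces, surface_t=0.995, mirrors=0, mirror_t=0.99):
    """Total optical transmission as a product of per-element factors."""
    return surface_t ** n_surfaces * mirror_t ** mirrors

# 4 air/glass surfaces for an air-spaced doublet
print(f"doublet refractor  : {throughput(4):.3f}")
# 6 surfaces for a triplet plus 6 more for a 3-element corrector
print(f"triplet + corrector: {throughput(12):.3f}")
# two dielectric mirrors plus a 3-element corrector
print(f"RC + corrector     : {throughput(6, mirrors=2):.3f}")
```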


1 hour ago, dan_adi said:

If you guys don't mind me asking something ... :) The past few days I've been reading about the influence of seeing on telescope resolution. As I understand the theory, in average seeing (2-4 arcseconds, as most of us have in our backyards) it makes little sense to observe/image with a telescope larger than 8-10 inch. If you want a large scope you must pair that instrument with very good seeing, so the scope can operate as close as possible to its theoretical resolution.

At the same time, a bigger scope will present a brighter image. So a 16 inch scope in average seeing will give a brighter image but no more detail/resolution than an 8 inch scope.

Did I understand the theory correctly?

Very complex topic.

I'll just outline some of the complexities and how it all fits together without going too much into detail - because I don't know that detail and, to be honest, I'm not sure we have good models that cover all cases.

Let's first assume that seeing effects don't depend on aperture size and can simply be modeled with a Gaussian distribution. This is not a bad first-order approximation. At each instant the actual seeing-induced aberration is random, but if you average over time then, due to the central limit theorem, the distribution will tend to Gaussian. If you take a star profile from a long exposure and fit a Gaussian to it, you'll get a very good fit.
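A toy Python demonstration of that last point: accumulate many small random displacements, histogram the result into a "star profile", and a Gaussian fits it well (all numbers are made up):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
# Each short instant contributes a small random displacement; their sum
# over the exposure tends to Gaussian by the central limit theorem.
offsets = rng.uniform(-0.5, 0.5, size=(5000, 12)).sum(axis=1)  # arcsec
counts, edges = np.histogram(offsets, bins=50, range=(-4, 4))
centers = 0.5 * (edges[:-1] + edges[1:])

def gaussian(x, amp, sigma):
    return amp * np.exp(-x ** 2 / (2 * sigma ** 2))

(amp, sigma), _ = curve_fit(gaussian, centers, counts, p0=(counts.max(), 1.0))
print(f"fitted sigma = {sigma:.2f} arcsec, FWHM = {2.355 * sigma:.2f} arcsec")
```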

Then there is the second part of the resolution equation - how good your tracking/guiding is. Again, these are random displacements from where the mount should be pointing and again, over time, they add up to a Gaussian distribution.

The third part of the resolution equation is aperture size - larger telescopes simply resolve more. There is a Gaussian approximation to the Airy disk profile.

The fourth part of the resolution equation is quite complex and relates to the image rather than to what can be captured - it has to do with pixel size, the so-called pixel blur (the fact that pixels are not point sampling devices but have surface area) - so we'll skip that.

We have three Gaussian distributions (approximations) that convolve together to produce the final blur in the image. Their standard deviations add in quadrature, giving the standard deviation of the "total" Gaussian blur.

If it were only that, larger scopes would always have an edge over smaller scopes in what they can record resolution-wise, because their Airy disks are simply smaller - they resolve more. And although the resolving capability of, say, a 16" scope is double that of an 8", once we add guiding/tracking and seeing it will no longer be twice as good - maybe 10% better or something like that (the contributions add in quadrature, like noise).
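In code, that quadrature addition looks like the sketch below. The Gaussian-equivalent FWHM of the Airy pattern (~1.02 λ/D) and the example seeing/guiding numbers are assumptions for illustration:

```python
import math

def total_fwhm(seeing_fwhm, guide_rms, aperture_mm, wavelength_nm=550):
    """Total blur FWHM in arcsec: the Gaussian sigmas add in quadrature."""
    # Gaussian approximation of the Airy pattern: FWHM ~ 1.02 * lambda / D
    airy_fwhm = 1.02 * wavelength_nm * 1e-9 / (aperture_mm * 1e-3) * 206265
    sigmas = (seeing_fwhm / 2.355, guide_rms, airy_fwhm / 2.355)
    return 2.355 * math.sqrt(sum(s * s for s in sigmas))

# 2 arcsec seeing, 0.5 arcsec RMS guiding: an 8 inch vs a 16 inch scope
for d_mm in (200, 400):
    print(f"{d_mm} mm: total FWHM = {total_fwhm(2.0, 0.5, d_mm):.2f} arcsec")
```

With these example numbers, doubling the aperture improves the total blur by only a few percent - which is exactly the point.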

Now we return to the beginning and examine what happens with an actual scope aperture in a given seeing. Seeing is a complex thing and is best modeled as a wavefront perturbation. For that you need to understand a bit about Zernike polynomials, which describe wavefront errors in general (or rather, describe curved surfaces as a sum of different basis shapes).

The telescope aberrations we talk about are tilt, piston, coma, spherical, ...

[image: table of Zernike polynomials with their mathematical forms and classical aberration names]

Above you can see the mathematical form of each polynomial and its classical name. This diagram, on the other hand, shows phase difference:

[image: Zernike modes shown as color phase maps]

Here the color spectrum is used to denote different phase errors (wavefront lagging or leading).

These polynomials make up an orthonormal basis, similar to 2D or 3D coordinates - you can't use the X coordinate to describe height or depth, only width, so X, Y and Z are linearly independent. The same holds for the above polynomials for phase difference - you can decompose any phase map into those component "vectors", i.e. the polynomials above.
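The same decomposition idea in one dimension, using Legendre polynomials as a stand-in for Zernike terms (Zernike polynomials play this role on a circular aperture). A sketch with a made-up wavefront, not a wavefront-analysis tool:

```python
import numpy as np

x = np.linspace(-1, 1, 1001)
# Made-up "wavefront": some tilt (P1), some defocus-like curvature (P2),
# plus a small high-order ripple
wavefront = 0.3 * x + 0.5 * (1.5 * x ** 2 - 0.5) + 0.05 * np.sin(9 * x)

for degree in range(4):
    basis = np.polynomial.legendre.Legendre.basis(degree)(x)
    # Projection onto the basis: <wavefront, basis> / <basis, basis>
    coeff = (wavefront * basis).sum() / (basis * basis).sum()
    print(f"P{degree} coefficient: {coeff:+.3f}")
```

The projections approximately recover the 0.3 tilt and 0.5 "defocus" terms because the basis functions are orthogonal - which is exactly why Zernike coefficients can be read straight off a measured wavefront.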

What does this have to do with seeing and telescope size? This diagram will explain:

[image: 1D sketch of an aberrated wavefront sampled by three different aperture sizes]

Here we have a 1D representation of an aberrated wavefront - for a perfect wavefront it would just be a straight line. We also have three "apertures" - a very small one, a medium one and a large one.

They "receive" different portions of the wavefront. The large aperture gets it all - 3 peaks, 2 troughs - and it will need a lot of those polynomials to get a good approximation of this wavefront.

The medium aperture sees just one valley - it looks like defocus only, maybe with some spherical (Z0,2 and Z0,4).

The smallest aperture sees an essentially flat wavefront - almost without any aberrations.

Now, it is important to realize that the wavefront error does not scale with aperture size - the peaks and troughs don't become smaller as the aperture increases - they are measured relative to the wavelength (the famous 1/4 wave or 1/6 wave).

We also need to realize that the wavefront deformation changes on a timescale of milliseconds.

In order to understand the effects of seeing on a certain aperture, we need to know how the wavefront changes from instant to instant - limited by the aperture - then "integrate" the effect decomposed into Zernike polynomials and transform it via Fourier transform into the PSF of the seeing over that particular aperture.
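Numerically that last step is simple - the hard part is knowing realistic phase statistics, as described next. A minimal numpy sketch with a crude stand-in phase screen:

```python
import numpy as np

N = 256
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
r = np.hypot(x, y)
pupil = r < N // 8  # circular aperture mask

# Crude stand-in phase screen: low-pass filtered random noise, in radians
rng = np.random.default_rng(1)
filt = np.fft.ifftshift(np.exp(-r ** 2 / (2 * 10.0 ** 2)))
phase = np.fft.ifft2(np.fft.fft2(rng.normal(0, 1, (N, N))) * filt).real
phase *= 2.0 / phase.std()  # scale to ~2 rad RMS wavefront error

# Instantaneous PSF = |Fourier transform of the aberrated pupil|^2
field = pupil * np.exp(1j * phase)
psf = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2
print(f"Strehl-like peak ratio: {psf.max() / pupil.sum() ** 2:.3f}")
```

Averaging many such frames (with fresh phase screens) gives the long-exposure seeing PSF for that aperture.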

It is a really complex thing, and the worst part is that we don't know how the atmosphere is changing - I'm not sure we have good models for different types of seeing. I would be happy if someone came up with the following:

For the major Zernike polynomials - the average value and standard deviation of the parameters over some aperture size. That is all you need to model (with a computer, of course) the difference in seeing effects between a small and a large aperture. But I'm not sure such research exists. Maybe it does - we do have adaptive optics on large instruments, and that has to rely in part on this theory.

And now for the grand finale - how much can we sharpen to recover blurred detail? It actually works - sharpening is not just about making the image prettier, it is about recovering true detail. It turns out that sharpening is the same thing as amplifying certain frequencies of the image (think Fourier analysis of a signal) - pretty much like using old audio equalizers to boost certain frequencies.

Blurring is attenuation of the high-frequency components; amplifying those components back restores the image. The problem is that we can't separate signal from noise, and when we amplify certain frequencies of the signal we amplify those frequencies of the noise as well - we make the result noisier.

In order to do frequency restoration (a fancy name for sharpening) you need good SNR, and large telescopes have the advantage here - they can reach a higher SNR in the same amount of time (if paired with a suitable camera, matched for resolution and all).
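For illustration, a minimal Wiener-style frequency restoration on synthetic data - the k term is what limits how much noise gets amplified (a sketch, not a production sharpening routine):

```python
import numpy as np

def wiener_sharpen(image, psf, k=0.01):
    """Boost the frequencies the blur attenuated; k limits the noise gain."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    restored = np.fft.ifft2(np.fft.fft2(image) * np.conj(H) /
                            (np.abs(H) ** 2 + k))
    return restored.real

# Toy data: a grid of point sources blurred by a Gaussian PSF, plus noise
img = np.zeros((128, 128))
img[::16, ::16] = 1.0
y, x = np.mgrid[-64:64, -64:64]
psf = np.exp(-(x ** 2 + y ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()
blurred = np.fft.ifft2(np.fft.fft2(img) *
                       np.fft.fft2(np.fft.ifftshift(psf))).real
noisy = blurred + np.random.default_rng(2).normal(0, 0.01, img.shape)
print(f"peak before: {noisy.max():.2f}, after: {wiener_sharpen(noisy, psf).max():.2f}")
```

Lowering k restores more detail but amplifies more noise - the same SNR trade-off described above.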

Hope this answers your question?


1 hour ago, vlaiv said:

Very complex topic. [...]

Thank you for the detailed answer. This is a really complex issue, more than I realized. So seeing, guiding and tracking accuracy all conspire to mess up the resolution of a given scope. If the above is true, then the bigger the scope, the better the tracking one will need - i.e. a bigger and more precise mount, better guiding and better seeing - in order to take advantage of the large aperture.


Just now, dan_adi said:

Thank you for the detailed answer. This is a really complex issue, more than I realized. So seeing, guiding and tracking accuracy all conspire to mess up the resolution of a given scope. If the above is true, then the bigger the scope, the better the tracking one will need - i.e. a bigger and more precise mount, better guiding and better seeing - in order to take advantage of the large aperture.

I would not generalize it like that. This is closer to the truth:

- you always want better seeing

- you always want better tracking/guiding

- you always want darker skies

- you don't necessarily always want a bigger scope (but many people do :D )

 


4 minutes ago, dan_adi said:

I guess the concept of ‘bigger is better’ is very relative in astronomy :)) 

You really always want a bigger scope - you just don't want to spend much on it, haul it around, store it, set it up and all that :D


2 hours ago, vlaiv said:

You really always want a bigger scope - you just don't want to spend much on it, haul it around, store it, set it up and all that :D

If somebody takes care of the first one, I'll gladly handle the rest. ☺️

