Reducer Myth: Some data.


Rodd


2 hours ago, Stub Mandrel said:

Quantum physics. The workings of the system rely on the photon giving an atom a big enough 'kick' to transfer one electron into a 'storage well'. Each photon increases the count by one, subject to (a) it getting through any filter and (b) the probability of the electron being kicked and captured, called the quantum efficiency.

So brightness is a count of the number of photons, not their energy.

The energy determines their wavelength (and therefore colour); our sensors typically work from IR through visible to UV, an energy range of about 1:3. The wavelength/colour determines which filters they will pass through.

Thanks Neil, you have provided a much better description than my somewhat brief and oversimplified one. It shows even more clearly that you cannot make the case that adding a FR to a system increases the brightness of an image if nothing else changes. In this thread the example uses different scopes, but adding a FR to "speed things up" is also part of the overall myth.

Just now, Rodd said:

To answer--the Televue.  But they were not processed at all.  They were given precisely the same alignment, stacking and stretch (pre-set STF). 

Rodd

Exactly - that is the whole point - signal levels can be "manipulated" and an image can be made to look brighter (not intentionally, just by using normal processing if care is not taken to preserve certain quantities), but SNR can't be cheated - otherwise we would all be producing great images in no time.

So if I finally get what this discussion is all about :D, in order to establish the difference in signal level between these two images, here is what you should really do:

1. Choose a single sub from each image - stacking can skew results. Take care that the subs were shot at roughly the same position in the sky (at least the same ALT) - just check the sub times against a planetarium program and choose subs where the target was at roughly the same altitude above the horizon.

2. Choose a uniform part of the nebula where there is not much variation in brightness - best to select a single point that you can identify in both images (say, midway between two features) and a circular area around that point (make the point the centre, and set the radius in each image according to its resolution so you don't have to resample the images).

3. Calculate the mean pixel value in the selection for both images.

4. Divide each mean pixel value by a scale coefficient representing the pixel area expressed in arcseconds squared.

Compare what you get and draw your conclusion from that. (A quick code sketch of these steps follows below.)
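For anyone who wants to try this, a minimal sketch of steps 1-4 in Python, assuming astropy and numpy are available. The file names, coordinates, pixel scales and aperture radius are placeholders, not the actual values from this thread:

```python
import numpy as np
from astropy.io import fits

def mean_in_circle(data, cx, cy, r_pix):
    """Mean pixel value inside a circle of radius r_pix (pixels) centred on (cx, cy)."""
    yy, xx = np.ogrid[:data.shape[0], :data.shape[1]]
    mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= r_pix ** 2
    return data[mask].mean()

# Placeholder file names - use one calibrated, unstretched sub from each rig (step 1).
tv = fits.getdata("televue_sub.fits").astype(float)
toa = fits.getdata("toa_sub.fits").astype(float)

# Placeholder pixel scales in arcsec/pixel for each rig.
tv_scale, toa_scale = 2.0, 1.2

# Same patch of sky in both subs: fix the radius in arcsec and convert it to
# pixels per image, so neither sub has to be resampled (step 2).
r_arcsec = 30.0
tv_mean = mean_in_circle(tv, 512, 512, r_arcsec / tv_scale)     # step 3
toa_mean = mean_in_circle(toa, 700, 640, r_arcsec / toa_scale)  # step 3

# Step 4: divide by the pixel area in arcsec^2 to get signal per arcsec^2
# of sky, which is directly comparable between the two rigs.
print(tv_mean / tv_scale ** 2, toa_mean / toa_scale ** 2)
```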


1 minute ago, Rodd said:

stretch (pre-set STF). 

I don't think you can stretch them with the same preset. It applies a non-linear stretch, so if a pixel value is higher in one image (because of the effects of the shorter focal length) it will appear disproportionately higher after the stretch, i.e. inflating the apparent brightness. If you go back to the linear images and crop to the same area, can you do a sum of the total pixel values in each image? I would then expect the TOA to show a higher count, by 1.66x (all things light pollution, atmosphere, etc. being equal).

The reason I ask about the 35% brighter figure is that the Televue image should appear 35% brighter: its shorter focal length has a bigger effect than the TOA's larger aperture, by that 35% amount.

15 minutes ago, Rodd said:

The only difference between this comparison and a straight focal reducer on 1 scope experiment is 1 inch of aperture.

This is what's breaking the whole myth. You cannot discount the change in aperture. If aperture had no effect on exposure, why would we desire larger scopes? Why are we building 20m+ scopes to "see" deeper into space? If the larger aperture didn't result in more photons being collected, then what would be the point? In your case, that 1" increased light collection by 66%. That's not insignificant. It's just not significant enough to overcome the other effect, which is the difference in focal lengths.


4 minutes ago, Filroden said:

I don't think you can stretch them with the same preset. It applies a non-linear stretch, so if a pixel value is higher in one image (because of the effects of the shorter focal length) it will appear disproportionately higher after the stretch, i.e. inflating the apparent brightness. If you go back to the linear images and crop to the same area, can you do a sum of the total pixel values in each image? I would then expect the TOA to show a higher count, by 1.66x (all things light pollution, atmosphere, etc. being equal).

The reason I ask about the 35% brighter figure is that the Televue image should appear 35% brighter: its shorter focal length has a bigger effect than the TOA's larger aperture, by that 35% amount.

This is what's breaking the whole myth. You cannot discount the change in aperture. If aperture had no effect on exposure, why would we desire larger scopes? Why are we building 20m+ scopes to "see" deeper into space? If the larger aperture didn't result in more photons being collected, then what would be the point? In your case, that 1" increased light collection by 66%. That's not insignificant. It's just not significant enough to overcome the other effect, which is the difference in focal lengths.

If the scopes were the same aperture my example would be even more extreme--that is my point.  The extra aperture only reinforces it.

Rodd


8 minutes ago, vlaiv said:

Exactly - that is the whole point - signal levels can be "manipulated" and an image can be made to look brighter (not intentionally, just by using normal processing if care is not taken to preserve certain quantities), but SNR can't be cheated - otherwise we would all be producing great images in no time.

So if I finally get what this discussion is all about :D, in order to establish the difference in signal level between these two images, here is what you should really do:

1. Choose a single sub from each image - stacking can skew results. Take care that the subs were shot at roughly the same position in the sky (at least the same ALT) - just check the sub times against a planetarium program and choose subs where the target was at roughly the same altitude above the horizon.

2. Choose a uniform part of the nebula where there is not much variation in brightness - best to select a single point that you can identify in both images (say, midway between two features) and a circular area around that point (make the point the centre, and set the radius in each image according to its resolution so you don't have to resample the images).

3. Calculate the mean pixel value in the selection for both images.

4. Divide each mean pixel value by a scale coefficient representing the pixel area expressed in arcseconds squared.

Compare what you get and draw your conclusion from that.

I will try--thanks,

Rodd  PS--Then, if I get the same results--what?


Just now, Rodd said:

I will try--thanks,

Rodd  PS--Then, if I get the same results--what?

Report your findings, and if you get the same results as with the eyeball method (the image looks brighter) - then we can have a discussion about what is going on, have a peer review of your work, and if your method checks out (no systematic error) and is reproducible by other members - we will need to change our understanding of maths / physics / optics :D


2 minutes ago, vlaiv said:

Report your findings, and if you get the same results as with the eyeball method (the image looks brighter) - then we can have a discussion about what is going on, have a peer review of your work, and if your method checks out (no systematic error) and is reproducible by other members - we will need to change our understanding of maths / physics / optics :D


Just to let you know--just checked, and the Televue image had an SNR of almost 4x the TOA's (7 and change compared to 2.4 or something).  ?????--original unresampled image.

Rodd


5 minutes ago, Rodd said:

If the scopes were the same aperture my example would be even more extreme--that is my point.

I agree. If they were the same aperture and no change to their focal lengths, the Televue would have appeared MUCH brighter (but would have been of a lower resolution).

6 minutes ago, Rodd said:

The extra aperture only reinforces it.

This is where I don't agree. The extra aperture has only started to close the gap towards equal apparent brightness. You would have to increase the TOA's aperture to 153mm (at the same focal length) to see it overtake the Televue in appearance.

3 minutes ago, Rodd said:

Just to let you know--just checked, and the Televue image had an SNR of almost 4x the TOA's (7 and change compared to 2.4 or something).  ?????--original unresampled image.

On the single, unstretched subs covering the same area of sky?


4 minutes ago, Filroden said:

I agree. If they were the same aperture and no change to their focal lengths, the Televue would have appeared MUCH brighter (but would have been of a lower resolution).

This is where I don't agree. The extra aperture has only started to close the gap towards equal apparent brightness. You would have to increase the TOA's aperture to 153mm (at the same focal length) to see it overtake the Televue in appearance.

On the single, unstretched subs covering the same area of sky?

But according to Olly--the images should be the same brightness when normalized by upsampling.  In this case--the TOA should be BRIGHTER.

No--I measured the SNR of the above images (the original ones).

Rodd


2 minutes ago, Rodd said:

But according to Olly--the images should be the same brightness when normalized by upsampling.  In this case--the TOA should be BRIGHTER.

This would only be true if both images had not been stretched with a non-linear function (not true of the original images) and the upsample method did not apply a scaling function (unknown, as I don't know what function PixInsight uses to upsample).

3 minutes ago, Rodd said:

No--I measured the SNR of the above images (the original ones).

It can only work by looking at the linear image. As soon as you stretch the image, you're affecting the pixel values in each image differently (even with the same pre-set STF function).
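A toy illustration of why the stretch spoils the comparison - a midtones-type transfer function of the kind STF applies (this is my own sketch with made-up numbers, not PixInsight's actual code):

```python
def mtf(x, m=0.25):
    """Midtones transfer function: maps the midtones balance m to 0.5, keeps 0 and 1 fixed."""
    return (m - 1) * x / ((2 * m - 1) * x - m)

a, b = 0.100, 0.135            # linear pixel values; b is 35% higher than a
print(b / a)                   # 1.35 in the linear data
print(mtf(b) / mtf(a))         # ~1.28 after the stretch - the ratio is no longer 1.35
print(mtf(b) - mtf(a), b - a)  # and the absolute difference is inflated roughly 2x
```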


Just now, Filroden said:

This would only be true if both images had not been stretched with a non-linear function (not true of the original images) and the upsample method did not apply a scaling function (unknown, as I don't know what function PixInsight uses to upsample).

It can only work by looking at the linear image. As soon as you stretch the image, you're affecting the pixel values in each image differently (even with the same pre-set STF function).

Didn't know the upsample had to occur in the linear state.  Makes sense I guess--should have known.  Thanks

Rodd


9 hours ago, ollypenrice said:

Anyone still addicted to the idea that F ratio is the be-all and end-all of exposure time should look at Wim and Gorann's Liverpool Telescope data processing jobs, noting the shortness of the exposures and the slowness of the telescope.

Actually, I did the maths on that too a while back, and given the spec of the sensor (mainly the huge pixels, low dark current and very, very high QE) - long story short, the length of the exposures is to be expected. I mean, 15µm pixels and 95% QE (red), cooled to -40C and binned 2x2 - if you switched that sensor out for a KAF8300, its exposures would need to be much, much longer.
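Roughly the sum, under the simple assumption that per-pixel signal scales with pixel area × QE (the KAF-8300 figures below are nominal datasheet values from memory, so treat them as approximate):

```python
# Per-pixel signal for the same scope and exposure ~ pixel_area * QE.
lt_pixel_um, lt_qe = 15.0, 0.95    # the sensor quoted above (unbinned)
kaf_pixel_um, kaf_qe = 5.4, 0.56   # nominal KAF-8300: 5.4um pixels, ~56% peak QE

ratio = (lt_pixel_um / kaf_pixel_um) ** 2 * (lt_qe / kaf_qe)
print(f"{ratio:.0f}x")  # ~13x: the KAF-8300 needs ~13x the exposure per pixel,
                        # and more still once the big sensor is binned 2x2
```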


My understanding is:

The image of a given extended object projected onto the camera chip by a telescope has the same brightness at the same focal ratio regardless of aperture, but the size of the projected image increases as aperture increases (keeping focal ratio constant), so there are more total photons from the object.

You can take advantage of the extra size/photons of the projected image from a larger scope by using a finer image scale to try to gain more detail compared with a smaller scope (if you use the same camera as with the smaller scope, the image from the larger scope will be the same brightness but contain finer detail, seeing permitting). If instead you used a similar image scale to that achieved with the smaller scope (by switching to a camera with appropriately larger pixels and a correspondingly larger chip area) you would get a brighter image with the same resolution as that obtained from the smaller scope.

For point sources, brightness increases with aperture regardless of image scale. :smile:
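Putting some (hypothetical) numbers on that, with per-pixel signal for an extended object taken as proportional to aperture² × pixel area on the sky:

```python
def per_pixel(aperture_mm, focal_mm, pixel_um):
    """Relative extended-object signal per pixel, and pixel scale in arcsec/pixel."""
    scale = 206.265 * pixel_um / focal_mm               # arcsec per pixel
    signal = (aperture_mm * pixel_um / focal_mm) ** 2   # ~ D^2 * pixel area on sky
    return signal, scale

print(per_pixel(100, 500, 5.4))    # small F5 scope
print(per_pixel(200, 1000, 5.4))   # big F5 scope, same camera: same per-pixel
                                   # signal, half the pixel scale (finer detail)
print(per_pixel(200, 1000, 10.8))  # big F5 scope, pixels 2x larger: same pixel
                                   # scale as the small scope, 4x the signal
```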

1 hour ago, Ikonnikov said:

My understanding is:

The image of a given extended object projected onto the camera chip by a telescope has the same brightness at the same focal ratio regardless of aperture, but the size of the projected image increases as aperture increases (keeping focal ratio constant), so there are more total photons from the object. You can take advantage of the extra size/photons of the projected image from a larger scope by using a finer image scale to try to gain more detail compared with a smaller scope (if you use the same camera as with the smaller scope, the image from the larger scope will be the same brightness but contain finer detail, seeing permitting). If instead you used a similar image scale to that achieved with the smaller scope (by switching to a camera with appropriately larger pixels and a correspondingly larger chip area) you would get a brighter image with the same resolution as that obtained from the smaller scope.

For point sources, brightness increases with aperture regardless of image scale. :smile:


Nope--not true.  Aperture has no impact on the size of an object, only on the number of photons collected.  An image of the Ring Nebula at 1 arcsec/pixel in a 4" scope and a 20" scope will be the same size.  Pixel scale affects object size.  It is true that larger apertures for given pixels will typically decrease pixel scale--but that is a function of focal length, not aperture.  Also--a 20" scope at F5 will produce a much, much, much brighter object than a 4" scope at F5 (a ridiculously long focal length on the 20" scope may affect this statement).

Rodd


8 hours ago, Ikonnikov said:

My understanding is:

The image of a given extended object projected onto the camera chip by a telescope has the same brightness at the same focal ratio regardless of aperture, but the size of the projected image increases as aperture increases (keeping focal ratio constant), so there are more total photons from the object.


...and if there are more total photons from the object, what happens to them? This may be the crux of the matter. They don't disappear, unless they fail to make it above the noise floor, which is a caveat well understood by all who argue that there is an F ratio fallacy.

In my earlier question I asked how a one hundredfold reduction in photons could divide the exposure time by four (comparing a 50mm F5 with a 500mm F10).

Olly


37 minutes ago, ollypenrice said:

...and if there are more total photons from the object, what happens to them? This may be the crux of the matter. They don't disappear, unless they fail to make it above the noise floor, which is a caveat well understood by all who argue that there is an F ratio fallacy.

In my earlier question I asked how a one hundredfold reduction in photons could divide the exposure time by four (comparing a 50mm F5 with a 500mm F10).

Olly

Olly, if you imaged a perfectly uniform sky onto a perfect detector with perfect scopes, then the images from the two scopes would be uniform and of the same average intensity if you exposed the F5 scope for 1/4 the time of the F10 scope.

What is different is the image scale. For a given finite detector size, the extra photons from the 500mm F10 miss the detector.

Regards Andrew 


1 hour ago, ollypenrice said:

...and if there are more total photons from the object, what happens to them? This may be the crux of the matter. They don't disappear, unless they fail to make it above the noise floor, which is a caveat well understood by all who argue that there is an F ratio fallacy.

In my earlier question I asked how a one hundredfold reduction in photons could divide the exposure time by four (comparing a 50mm F5 with a 500mm F10).

Olly

I answered that question with a mathematical proof two pages back. People keep saying "brightness", though, and that is wrong: it's signal-to-noise ratio you gain, not brightness; the object always has the same surface brightness. You gain signal-to-noise because you are concentrating the photons onto fewer pixels. But as I said, you lose information because of the reduction in resolution, so in terms of overall image quality there is a balance between a low-resolution, high-signal-to-noise image and a high-resolution, poor-signal-to-noise image.

The time taken for pixels to gather sufficient signal (photons) to push the pixel value above the noise floor is the crux of the matter. The photons don't vanish; they just get divided over more pixels, each of which has its own inherent noise level.
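A shot-noise-only sketch of that trade-off (hypothetical numbers; read noise and sky background ignored):

```python
import math

photons_per_sec = 10000.0  # light from a patch of nebula reaching the chip

def pixel_snr(n_pixels, t=60.0):
    """Shot-noise-limited SNR per pixel when the patch is spread over n_pixels."""
    signal = photons_per_sec * t / n_pixels
    return signal / math.sqrt(signal)  # SNR = N / sqrt(N) = sqrt(N)

print(pixel_snr(1000))  # native focal length
print(pixel_snr(490))   # with a 0.7x reducer: 0.7^2 as many pixels,
                        # ~2x the photons per pixel, ~1.43x the SNR -
                        # bought at the cost of resolution
```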


1 hour ago, andrew s said:

Olly, if you imaged a perfectly uniform sky onto a perfect detector with perfect scopes, then the images from the two scopes would be uniform and of the same average intensity if you exposed the F5 scope for 1/4 the time of the F10 scope.

What is different is the image scale. For a given finite detector size, the extra photons from the 500mm F10 miss the detector.

Regards Andrew 

Exactly. If you take a sky flat with two identical cameras, it will saturate faster on a 50mm objective F5 scope than it will on a 500mm objective F10 scope. If you use a focal reducer on the 500mm F10 scope to make it F5, it will saturate the camera in 1/4 of the time it took at F10 (and the same amount of time it took to saturate at 50mm F5). Both the sky and the nebula are extended objects, so the result will be the same.
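The arithmetic behind that 1/4, as a one-liner (extended-source flux per pixel goes as 1/f-ratio², so saturation time goes as f-ratio²; the 60 s is a made-up figure):

```python
t_f10 = 60.0                      # seconds to saturate at F10 (made-up figure)
t_f5 = t_f10 * (5.0 / 10.0) ** 2  # = 15.0 s: a quarter of the time at F5
print(t_f5)
```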


46 minutes ago, Adam J said:

I answered that question with a mathematical proof two pages back. People keep saying "brightness", though, and that is wrong: it's signal-to-noise ratio you gain, not brightness; the object always has the same surface brightness. You gain signal-to-noise because you are concentrating the photons onto fewer pixels. But as I said, you lose information because of the reduction in resolution, so in terms of overall image quality there is a balance between a low-resolution, high-signal-to-noise image and a high-resolution, poor-signal-to-noise image.

The time taken for pixels to gather sufficient signal (photons) to push the pixel value above the noise floor is the crux of the matter. The photons don't vanish; they just get divided over more pixels, each of which has its own inherent noise level.

Objects have the same surface brightness--but they do not always appear equally bright when imaged. 

Rodd


26 minutes ago, Adam J said:

Exactly. If you take a sky flat with two identical cameras, it will saturate faster on a 50mm objective F5 scope than it will on a 500mm objective F10 scope. If you use a focal reducer on the 500mm F10 scope to make it F5, it will saturate the camera in 1/4 of the time it took at F10 (and the same amount of time it took to saturate at 50mm F5). Both the sky and the nebula are extended objects, so the result will be the same.

I don't see why extended or not makes a difference.  If a region on a chip gets saturated faster, it does not matter what object is being imaged--the pixels either saturate faster or they don't.  The chip either fills with electrons faster or it does not--why does it matter what one is imaging?

Rodd


5 minutes ago, Rodd said:

Objects have the same surface brightness--but they do not always appear equally bright when imaged. 

Rodd

It's a relative digital pixel level as opposed to a true brightness; you can make something appear brighter just by adjusting the monitor backlight. I am just being pedantic about the use of the word "brightness".  I think the key is, as you say, that they 'appear' unequally bright.


29 minutes ago, Rodd said:

I don't see why extended or not makes a difference.  If a region on a chip gets saturated faster, it does not matter what object is being imaged--the pixels either saturate faster or they don't.  The chip either fills with electrons faster or it does not--why does it matter what one is imaging?

Rodd

It's to do with pixel sampling. The pixels cover a finite area, so you can get a situation where a star is only covered by, say, 4 pixels (above the noise floor), meaning that a small reduction in focal length will not change the number of pixels sampling the star. Normally this only applies to stars, and depending on how bloated the star is, sometimes not even then; it could also apply to a bright filament within a nebula.

Most of the time an object would be considered extended. It's more a counter to people saying they see more stars with a larger aperture at a lower F-ratio... which can happen.

Here is a diagram to explain:

[Attached diagram: StarExample.jpg - four squares (pixels) with a circle (the focused star).]

This is something of a large approximation / extreme example, however:

The four squares are pixels and the circle is the focused star. Either way, the star is still sampled by 4 pixels, so the photons collected from the star will be spread over those same 4 pixels. As such, the values of the 4 pixels stay the same and the apparent brightness in the image remains unaffected. But this is not always the case, and it is not the case for an extended object, where more photons are sampled per pixel if the F-ratio is reduced. You can also get the situation where you have more aperture and a slower f-ratio but the star is still sampled by the same number of pixels, in which case you will get more photons collected per pixel - which runs counter to the effect observed on an extended object.


2 hours ago, ollypenrice said:

They don't disappear, unless they fail to make it above the noise floor, ...

They don't really disappear then either; they can't be seen on a single sub because the SNR is too low, but stack enough of such exposures and they will become detectable once the SNR reaches a certain level.

18 minutes ago, Rodd said:

I dont see why extended or not extended makes a difference.

We make a couple of assumptions when we derive the maths that explains what happens. Extended sources are mentioned because their light is assumed to be spread pretty much evenly over multiple pixels at any resolution. With point sources it can more often be the case that the star covers just a few pixels, and then a change of resolution does not alter the number of pixels significantly - take for example the case where a star is located at the centre of four adjacent pixels and the resolution is such that 99.999% of the light falls on those 4 pixels (think Gaussian curve and 3-sigma radius) - reducing the resolution in this case will leave the same 4 pixels covering the star profile, so no change in brightness occurs because the light is again evenly spread over 4 pixels.
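A quick numerical check of that picture (my own sketch with illustrative numbers: a tight Gaussian "star" centred on a pixel corner, versus a flat extended source, both rendered at two pixel scales):

```python
import numpy as np

def star_peak(pixel_scale, fwhm=1.0, flux=1e5, n=16):
    """Photons in the brightest pixel for a tight star centred on a pixel corner."""
    sigma = fwhm / 2.355 / pixel_scale     # star sigma in pixels
    y, x = np.mgrid[:n, :n] - n / 2 + 0.5  # pixel centres; star sits at the corner
    psf = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return flux * (psf / psf.sum()).max()

# Undersampled star: ~flux/4 lands in each of the same 4 pixels at both scales,
# so the star's apparent per-pixel brightness doesn't change.
print(star_peak(2.0), star_peak(3.0))  # ~25000 and ~25000

# Extended source at 100 photons/arcsec^2: per-pixel signal follows pixel area.
print(100 * 2.0**2, 100 * 3.0**2)      # 400 vs 900 photons per pixel
```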


7 minutes ago, vlaiv said:

They don't really disappear then either; they can't be seen on a single sub because the SNR is too low, but stack enough of such exposures and they will become detectable once the SNR reaches a certain level.

We make a couple of assumptions when we derive the maths that explains what happens. Extended sources are mentioned because their light is assumed to be spread pretty much evenly over multiple pixels at any resolution. With point sources it can more often be the case that the star covers just a few pixels, and then a change of resolution does not alter the number of pixels significantly - take for example the case where a star is located at the centre of four adjacent pixels and the resolution is such that 99.999% of the light falls on those 4 pixels (think Gaussian curve and 3-sigma radius) - reducing the resolution in this case will leave the same 4 pixels covering the star profile, so no change in brightness occurs because the light is again evenly spread over 4 pixels.

Oh--I was equating "extended" with whether or not the object fits on the chip--the condition that is often attached to whether a focal reducer will reduce exposure time.  That is how this whole thing got started.  Those who believe in the myth claim that using a focal reducer will not reduce exposure times if one is only interested in, say, the Ring Nebula within the FOV (always smaller than the FOV).  The only time a reducer will lessen exposure time, they say, is if you want the additional FOV it affords; then it WILL lessen the time it takes to achieve a certain signal strength over the whole FOV.  Will a focal reducer lessen exposure time or not... that is the question I really want answered.  I usually use one to achieve a bigger FOV--so it's target-based.  But my exposure times are creeping up toward 30 hours and I really would like to finish images in, say, 6-8.  Hence my desire to get to the bottom of exposure times.

Rodd

