
Reducer Myth: Some data.


Rodd


20 minutes ago, Adam J said:

It's to do with pixel sampling. Pixels cover a finite area, so you can get a situation where a star is only covered by, say, 4 pixels (above the noise floor), meaning that a small reduction in focal length will not change the number of pixels sampling the star. Normally it only applies to stars, and depending on how bloated the star is, sometimes not even then; it could also apply to a bright filament within a nebula.

Most of the time an object would be considered extended. It's more a counter to people saying they see more stars with a larger aperture at a lower F-ratio... which can happen.

Here is a diagram to explain:

[Attached diagram: StarExample.jpg, showing a focused star covering four pixels at F/10 and F/5]

This is something of a large approximation / extreme example, however: the 4 squares are pixels and the circle is the focused star. Either way, the star is still sampled by 4 pixels, so the photons collected from the star are spread over those 4 pixels. As such, the values of the 4 pixels stay the same and the apparent brightness in the image remains unaffected. But this is not always the case, and it is not the case for an extended object, where more photons are sampled per pixel if the F-ratio is reduced. You can also get the situation where you have more aperture and a slower F-ratio but the star is still sampled by the same number of pixels, in which case you will get more photons collected per pixel, which runs counter to the effect observed on an extended object.
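The star-vs-extended distinction above can be sketched numerically (all photon counts here are made-up illustration values, not measurements):

```python
# A star delivers a fixed total photon count; an extended object
# delivers a fixed photon count per unit of sky area.

star_photons = 1000.0                 # assumed total photons from the star

# The star lands on the same 4 pixels before and after reduction,
# so its per-pixel signal is unchanged:
per_pixel_f10 = star_photons / 4
per_pixel_f5 = star_photons / 4

# For an extended object, halving the focal length (F/10 -> F/5 at the
# same aperture) halves the image scale, so each pixel covers 4x the
# sky area and collects 4x the photons:
sky_photons_per_pixel_f10 = 50.0      # assumed background/nebula rate
sky_photons_per_pixel_f5 = sky_photons_per_pixel_f10 * (10 / 5) ** 2

print(per_pixel_f10, per_pixel_f5)    # 250.0 250.0 -- star unchanged
print(sky_photons_per_pixel_f5)       # 200.0 -- extended object gains 4x
```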

I see.  I was equating extended with FOV coverage.  Nice diagram.

Rodd

  • Replies 133
4 minutes ago, Rodd said:

Oh--I was equating "extended" with whether or not the object fits on the chip--the statement that is often attached to whether a focal reducer will reduce exposure time. That is how this whole thing got started. Those that believe in the myth claim that using a focal reducer will not reduce exposure times if one is only interested in, say, the Ring Nebula in the FOV (always smaller than the FOV). The only time a reducer will lessen exposure time is if you want the additional FOV it affords; then it WILL lessen the time it takes to achieve a certain signal strength over the whole FOV. Will a focal reducer lessen exposure time or not... that is the question that I really want to know. I usually use one to achieve a bigger FOV--so it's target-based. But my exposure times are creeping up toward 30 hours and I really would like to finish images in, say, 6-8. Hence my desire to get to the bottom of exposure times.

Rodd

As I already commented above, using a focal reducer on a scope / camera combination (same scope, same camera, only the focal reducer added) will always reduce the time necessary to achieve a target SNR.

How much the time is reduced depends in a rather complex way on many things, including: scope aperture, reduction factor and imaging resolution (with vs without reducer), target brightness, LP, ....

So it is not straightforward like in normal photography, where there is a rule of thumb relating each reduction in F-ratio to so many stops and a halving of exposure time (I can't remember the exact figures, not being into photography that much). In normal photography such a rule holds because the light level is high enough that all other factors fade in comparison with it, and actual values are well approximated by the rule.

The general rule for using a focal reducer in astrophotography (it does not matter whether the target fits whole on the frame or not) is:

The time will be less, but never quite by as much as in normal photography. Also, the darker the skies you have, the more the time will be reduced, and the brighter the target is, the more the time will be reduced.

So the least effect is when imaging a faint target under LP skies, and the best effect is on a bright target under dark skies.
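A rough sketch of why the time saving has no single factor: a per-pixel SNR model for one sub, with shot noise and read noise. All rates, the read noise, and the 0.7x reduction factor are assumed illustration values, and the model ignores that the two setups sample the sky at different scales.

```python
import math

def snr(obj_rate, sky_rate, t, read_noise):
    """Per-pixel SNR of one sub: signal over shot noise plus read noise."""
    signal = obj_rate * t
    return signal / math.sqrt(signal + sky_rate * t + read_noise ** 2)

k = 1 / 0.7 ** 2          # pixel-area (photon-rate) gain of a 0.7x reducer
rn, t = 5.0, 300.0        # assumed read noise (e-) and sub length (s)

# Read-noise-limited case (very faint rates): improvement approaches k.
faint = snr(0.01 * k, 0.01 * k, t, rn) / snr(0.01, 0.01, t, rn)

# Shot-noise-limited case (strong signal/sky): improvement ~ sqrt(k).
bright = snr(10.0 * k, 50.0 * k, t, rn) / snr(10.0, 50.0, t, rn)

print(round(faint, 2), round(bright, 2))   # 1.86 1.43
```

The per-pixel SNR gain sits somewhere between sqrt(k) and k depending on which noise source dominates, which is one reason no single "stops" rule carries over from daytime photography.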


1 minute ago, Rodd said:

I see.  I was equating extended with FOV coverage.  Nice diagram.

Rodd

However, why is the star smaller at F5? The bigger the aperture, the smaller the star. Usually the bigger the aperture, the longer the focal length (forget Hyperstar). If I throw a reducer on my scope, will the stars get smaller? What happens when the FWHM is less than the pixel scale (say 1.5 arcsec when the pixel scale is 2 arcsec/pixel, or in the Atacama Desert, say 0.5 arcsec)?

Rodd 


1 minute ago, Rodd said:

However, why is the star smaller at F5? The bigger the aperture, the smaller the star. Usually the bigger the aperture, the longer the focal length (forget Hyperstar). If I throw a reducer on my scope, will the stars get smaller? What happens when the FWHM is less than the pixel scale (say 1.5 arcsec when the pixel scale is 2 arcsec/pixel, or in the Atacama Desert, say 0.5 arcsec)?

Rodd 

I think that diagram shows F/10 and F/5 for the same aperture. And yes, using a focal reducer will "tighten up" the stars in the same way it will "tighten up" the rest of the image (things will be smaller). To put it another way: if an object is 100" across, it will be 100 pixels at 1"/pixel and 50 pixels at 2"/pixel. The same goes for star diameter: if the FWHM is 3" and you image at 1"/pixel, the FWHM will be 3 pixels, but at 3"/pixel it will be one pixel.

A star is almost never a single pixel, because we never use such fast optics. A star profile is Gaussian in nature (a combination of the Airy disk, seeing, and tracking errors), and for a normal pixel size you can only go so far in decreasing resolution before you start needing ever shorter focal lengths. In doing so you end up using less aperture, until at some point you hit the Airy disk size for the given aperture (a smaller aperture increases the Airy disk size) and the star starts to expand regardless, again taking up more than one pixel. You can have a situation where a star is one pixel wide: just take any of your subs and bin it a lot (decreasing resolution without expanding the Airy disk) and you will end up with single-pixel stars at some point. A true Gaussian is never 0 and goes on to infinity, but you will hit the noise floor much sooner than that :D
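The scale arithmetic here is just a division (the 100" object and 3" FWHM figures are the examples from the post above):

```python
def size_in_pixels(angular_size_arcsec, pixel_scale_arcsec_per_px):
    """Map an angular size on the sky to a size on the sensor."""
    return angular_size_arcsec / pixel_scale_arcsec_per_px

print(size_in_pixels(100, 1))   # 100.0 -- 100" object at 1"/pixel
print(size_in_pixels(100, 2))   # 50.0  -- same object at 2"/pixel
print(size_in_pixels(3, 1))     # 3.0   -- 3" FWHM star at 1"/pixel
print(size_in_pixels(3, 3))     # 1.0   -- same star at 3"/pixel
```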

 


1 minute ago, vlaiv said:

As I already commented above, using a focal reducer on a scope / camera combination (same scope, same camera, only the focal reducer added) will always reduce the time necessary to achieve a target SNR.

How much the time is reduced depends in a rather complex way on many things, including: scope aperture, reduction factor and imaging resolution (with vs without reducer), target brightness, LP, ....

So it is not straightforward like in normal photography, where there is a rule of thumb relating each reduction in F-ratio to so many stops and a halving of exposure time (I can't remember the exact figures, not being into photography that much). In normal photography such a rule holds because the light level is high enough that all other factors fade in comparison with it, and actual values are well approximated by the rule.

The general rule for using a focal reducer in astrophotography (it does not matter whether the target fits whole on the frame or not) is:

The time will be less, but never quite by as much as in normal photography. Also, the darker the skies you have, the more the time will be reduced, and the brighter the target is, the more the time will be reduced.

So the least effect is when imaging a faint target under LP skies, and the best effect is on a bright target under dark skies.

So I am trying to balance what people who know much more than I do say. As far as I can tell, you et al. are in one camp, and Craig Stark et al. and Olly et al. are in another camp. One camp believes in the myth, and one does not. That is why I posted the images--trying to choose once and for all. I botched it by using different scopes, I suppose. I plan to reduce the TOA soon--after I finish with M33--so at that time I will take a few 30 min Ha subs of Gamma Cas and put this to bed..... or not :happy8:

Rodd


17 minutes ago, Rodd said:

However, why is the star smaller at F5? The bigger the aperture, the smaller the star. Usually the bigger the aperture, the longer the focal length (forget Hyperstar). If I throw a reducer on my scope, will the stars get smaller? What happens when the FWHM is less than the pixel scale (say 1.5 arcsec when the pixel scale is 2 arcsec/pixel, or in the Atacama Desert, say 0.5 arcsec)?

Rodd 

The diagram is a simplification, but in general the diffraction-limited size of the stars will reduce with a larger objective. I was still thinking in terms of Olly's example of 50mm vs 500mm objectives at different F-ratios. There is also bloat to consider, and the star is unlikely to actually fall perfectly into the center of 4 pixels.

For the same aperture at different F-ratios, the diffraction-limited size of the stars will be similar or smaller... I think.

But even if the size of the imaged star is not changing, the point remains the same: for stars the same rules do not apply, as the size of the sampled / projected image approaches the pixel scale.


5 minutes ago, vlaiv said:

I think that diagram shows F/10 and F/5 for the same aperture. And yes, using a focal reducer will "tighten up" the stars in the same way it will "tighten up" the rest of the image (things will be smaller). To put it another way: if an object is 100" across, it will be 100 pixels at 1"/pixel and 50 pixels at 2"/pixel. The same goes for star diameter: if the FWHM is 3" and you image at 1"/pixel, the FWHM will be 3 pixels, but at 3"/pixel it will be one pixel.

A star is almost never a single pixel, because we never use such fast optics. A star profile is Gaussian in nature (a combination of the Airy disk, seeing, and tracking errors), and for a normal pixel size you can only go so far in decreasing resolution before you start needing ever shorter focal lengths. In doing so you end up using less aperture, until at some point you hit the Airy disk size for the given aperture (a smaller aperture increases the Airy disk size) and the star starts to expand regardless, again taking up more than one pixel. You can have a situation where a star is one pixel wide: just take any of your subs and bin it a lot (decreasing resolution without expanding the Airy disk) and you will end up with single-pixel stars at some point. A true Gaussian is never 0 and goes on to infinity, but you will hit the noise floor much sooner than that :D

 

How about imaging with a pixel scale of 3.5 arcsec per pixel and achieving a FWHM value of 2"? To put it generally--what happens when the FWHM value of an image is less than the pixel scale of the scope/camera system?

Rodd


1 minute ago, Rodd said:

So I am trying to balance what people who know much more than I do say. As far as I can tell, you et al. are in one camp, and Craig Stark et al. and Olly et al. are in another camp. One camp believes in the myth, and one does not. That is why I posted the images--trying to choose once and for all. I botched it by using different scopes, I suppose. I plan to reduce the TOA soon--after I finish with M33--so at that time I will take a few 30 min Ha subs of Gamma Cas and put this to bed..... or not :happy8:

Rodd

I'm not sure that we are in different camps; at least I did not get the impression that Olly thinks differently last time we had a discussion on this or a similar topic.

At one time in the past I was trying to push out the whole F/number idea associated with the "speed" of imaging scopes, because it misleads people into thinking that an F/8 scope, for example, is not a "fast" scope and that they will need much more time to image an object with it. My idea was to ditch the F/number in that role and use an "aperture at resolution" rule for determining which scope is faster. But that idea is not really feasible as a general rule, because people very rarely consider imaging resolution when comparing scopes--it would require matching cameras to scopes, and people often have a single camera, or in general don't think much about the matching.

So for clarity, because I'm sided into a camp on this :D , "my camp" advocates the following statements:

"An F/8 scope can be faster than an F/5 scope"--in terms of the time needed to achieve a target SNR.

"Using a focal reducer on a single scope / camera combination will always yield a shorter time needed to achieve a target SNR, regardless of whether the target fits whole on the sensor or not"--at the expense of lower resolution, with the factor of time shortening determined by a myriad of factors (target brightness, LP levels, aperture, sub duration, resolution ....)


1 minute ago, Rodd said:

How about imaging with a pixel scale of 3.5 arcsec per pixel and achieving a FWHM value of 2"? To put it generally--what happens when the FWHM value of an image is less than the pixel scale of the scope/camera system?

Rodd

First you must be sure it says FWHM of 2" and not 2 pixels (to get FWHM in arcseconds you must enter the imaging resolution into the program that measures star FWHM). If you are sure that it says 2" FWHM and you are imaging at 3.5"/pixel, it is still possible; just look at the graph:

[Attached image: FWHM.svg, a Gaussian peak with the full width at half maximum marked]

The measuring program uses pixel values and tries to fit a Gaussian profile to them, and when it finds the best fit it uses a formula to express the FWHM of that Gaussian profile--the actual star size is bigger than that. Now, while the program can fit a Gaussian even when the resolution is coarser than the FWHM (huh, confusing term: the resolution is lower but the "/pixel value is bigger :D ), it does so with less accuracy than it would if the resolution were higher (smaller "/pixel). The program can fit the Gaussian because it can look at the neighboring pixels that hold "remnants" of the star profile. When you over-stretch an image in post-processing, stars tend to get larger and larger--this is because you are actually exposing those "wings" that are otherwise low in intensity.
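To see why the fit loses accuracy when undersampled, consider sampling a Gaussian star profile with 2" FWHM at 3.5"/pixel. This is a simplified sketch (point samples rather than pixel-area integrals, star centred on a pixel):

```python
import math

# FWHM = 2*sqrt(2*ln 2)*sigma, so sigma for a 2" FWHM Gaussian:
sigma = 2.0 / (2 * math.sqrt(2 * math.log(2)))

def profile(offset_arcsec):
    """Normalized Gaussian star profile, peak = 1 at the centre."""
    return math.exp(-offset_arcsec ** 2 / (2 * sigma ** 2))

# Sample at 3.5"/pixel with the star centred on the middle pixel:
samples = [profile(x * 3.5) for x in (-1, 0, 1)]
print([round(s, 4) for s in samples])   # [0.0002, 1.0, 0.0002]
```

With almost nothing left in the neighbouring pixels, the fit has very little to constrain the width, which is why a 2" FWHM reading at 3.5"/pixel should be treated with caution.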


2 minutes ago, vlaiv said:

I'm not sure that we are in different camps; at least I did not get the impression that Olly thinks differently last time we had a discussion on this or a similar topic.

At one time in the past I was trying to push out the whole F/number idea associated with the "speed" of imaging scopes, because it misleads people into thinking that an F/8 scope, for example, is not a "fast" scope and that they will need much more time to image an object with it. My idea was to ditch the F/number in that role and use an "aperture at resolution" rule for determining which scope is faster. But that idea is not really feasible as a general rule, because people very rarely consider imaging resolution when comparing scopes--it would require matching cameras to scopes, and people often have a single camera, or in general don't think much about the matching.

So for clarity, because I'm sided into a camp on this :D , "my camp" advocates the following statements:

"An F/8 scope can be faster than an F/5 scope"--in terms of the time needed to achieve a target SNR.

"Using a focal reducer on a single scope / camera combination will always yield a shorter time needed to achieve a target SNR, regardless of whether the target fits whole on the sensor or not"--at the expense of lower resolution, with the factor of time shortening determined by a myriad of factors (target brightness, LP levels, aperture, sub duration, resolution ....)

OK--one more statement...would you agree with this

" If a focal reducer is used on a scope/camera system, and the image obtained is equalized with an image of equal exposure time obtained from the same scope/camera system without the reducer (ie one is upsampled/downsampled so both are the same), a selected object in the image (say the ring nebula) will have the same signal strength in both images?" 

Rodd


4 minutes ago, Rodd said:

OK--one more statement...would you agree with this

" If a focal reducer is used on a scope/camera system, and the image obtained is equalized with an image of equal exposure time obtained from the same scope/camera system without the reducer (ie one is upsampled/downsampled so both are the same), a selected object in the image (say the ring nebula) will have the same signal strength in both images?" 

Rodd

I disagree, not because it is a wrong statement, but because I think it is an incomplete statement (and thus I can't agree with it). The proper / complete statement that I would agree with would go something like this:

If two subs (meaning raw images from the sensor, without any stacking / processing) were brought to the same resolution by means of binning (or an algorithm with similar properties) using the average method, then the signal value in the sub obtained without the focal reducer would be lower than the signal value obtained with the focal reducer (on the same scope / camera system). If the two subs were brought to the same resolution by binning, but this time using the add method instead of average, then at the same resolution the signal value would be the same in both images.

To add to the above: if using different scope and camera with / without reducer combinations and using average resampling (instead of additive), you can end up with any of the combinations: first "stronger", second "stronger", or both equally "strong". For additive resampling there is always a simple rule to tell you which sub will be "stronger" at the same resolution: the one with the greater aperture.
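The add vs average distinction can be checked with a toy 2x2 bin. The uniform 10-photon pixel value is an assumed illustration figure; a hypothetical 0.5x reducer on the same scope would concentrate the light of 4 pixels onto 1, giving a reduced-sub pixel value of 40:

```python
# Toy 4x4 "sub" with a uniform signal of 10 photons per pixel (assumed).
sub = [[10.0] * 4 for _ in range(4)]

def bin2x2(img, mode):
    """2x2 software binning, either 'add' (sum) or 'average' (mean)."""
    out = []
    for r in range(0, len(img), 2):
        row = []
        for c in range(0, len(img[0]), 2):
            block = [img[r][c], img[r][c + 1],
                     img[r + 1][c], img[r + 1][c + 1]]
            row.append(sum(block) if mode == "add" else sum(block) / 4)
        out.append(row)
    return out

print(bin2x2(sub, "add")[0][0])       # 40.0 -- matches the reduced sub
print(bin2x2(sub, "average")[0][0])   # 10.0 -- keeps original brightness
```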


6 minutes ago, vlaiv said:

I'm not sure that we are in different camps; at least I did not get the impression that Olly thinks differently last time we had a discussion on this or a similar topic.

At one time in the past I was trying to push out the whole F/number idea associated with the "speed" of imaging scopes, because it misleads people into thinking that an F/8 scope, for example, is not a "fast" scope and that they will need much more time to image an object with it. My idea was to ditch the F/number in that role and use an "aperture at resolution" rule for determining which scope is faster. But that idea is not really feasible as a general rule, because people very rarely consider imaging resolution when comparing scopes--it would require matching cameras to scopes, and people often have a single camera, or in general don't think much about the matching.

So for clarity, because I'm sided into a camp on this :D , "my camp" advocates the following statements:

"An F/8 scope can be faster than an F/5 scope"--in terms of the time needed to achieve a target SNR.

"Using a focal reducer on a single scope / camera combination will always yield a shorter time needed to achieve a target SNR, regardless of whether the target fits whole on the sensor or not"--at the expense of lower resolution, with the factor of time shortening determined by a myriad of factors (target brightness, LP levels, aperture, sub duration, resolution ....)

And that is the 'camp' that I am in. There are a number of interacting factors at play here, but all things being equal, using a focal reducer will always yield a shorter time to achieve a given SNR, but at the expense of image resolution.... though I would note that if you are oversampling with a small-pixel sensor then you may not actually lose resolution, as the pixel scale would not be the limiting factor.

3 minutes ago, Rodd said:

OK--one more statement...would you agree with this

" If a focal reducer is used on a scope/camera system, and the image obtained is equalized with an image of equal exposure time obtained from the same scope/camera system without the reducer (ie one is upsampled/downsampled so both are the same), a selected object in the image (say the ring nebula) will have the same signal strength in both images?" 

Rodd

If I follow you, then no, I don't agree with that statement; it would have a higher signal strength in the reduced image, but at lower resolution (in most cases).


4 minutes ago, vlaiv said:

If two subs (meaning raw images from the sensor, without any stacking / processing) were brought to the same resolution by means of binning (or an algorithm with similar properties) using the average method, then the signal value in the sub obtained without the focal reducer would be lower than the signal value obtained with the focal reducer (on the same scope / camera system). If the two subs were brought to the same resolution by binning, but this time using the add method instead of average, then at the same resolution the signal value would be the same in both images.

Yes, the whole thing is almost exactly analogous to the increase in signal to noise gained by binning in a CCD.


2 minutes ago, vlaiv said:

I disagree, not because it is a wrong statement, but because I think it is an incomplete statement (and thus I can't agree with it). The proper / complete statement that I would agree with would go something like this:

If two subs (meaning raw images from the sensor, without any stacking / processing) were brought to the same resolution by means of binning (or an algorithm with similar properties) using the average method, then the signal value in the sub obtained without the focal reducer would be lower than the signal value obtained with the focal reducer (on the same scope / camera system). If the two subs were brought to the same resolution by binning, but this time using the add method instead of average, then at the same resolution the signal value would be the same in both images.

To add to the above: if using different scope and camera with / without reducer combinations and using average resampling (instead of additive), you can end up with any of the combinations: first "stronger", second "stronger", or both equally "strong". For additive resampling there is always a simple rule to tell you which sub will be "stronger" at the same resolution: the one with the greater aperture.

I don't know what PixInsight does when I resample. But this is the crux of why I said you were in a different "camp" from Olly. He was not as specific as you in his statement about "equalizing" (his term). My statement was meant to approximate his.

Rodd 


4 minutes ago, Adam J said:

And that is the 'camp' that I am in. There are a number of interacting factors at play here, but all things being equal, using a focal reducer will always yield a shorter time to achieve a given SNR, but at the expense of image resolution.... though I would note that if you are oversampling with a small-pixel sensor then you may not actually lose resolution, as the pixel scale would not be the limiting factor.

Resolution may not change past what you like if you use a reducer with small pixels, but it will change if the focal length is changed. If this statement is not true, please explain.

Rodd


6 minutes ago, Rodd said:

Resolution may not change past what you like if you use a reducer with small pixels, but it will change if the focal length is changed. If this statement is not true, please explain.

Rodd

The problem here is that we are using the term resolution in two different contexts.

The first context is the mapping between angular size and size on the sensor: arcseconds per pixel.

The second context for resolution is the level of recorded detail--high resolution meaning small features recorded, low resolution meaning only large-scale features recorded. I think that @Adam J meant that you might not even lose resolution in the sense that all the small-scale features you recorded would still be recognizable if you oversampled when recording and then downsampled the image--so resolution in terms of arcseconds per pixel would change, but the resolution of features might not change (the same size features would be identifiable in both images).
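For the first sense of resolution, the pixel scale follows directly from pixel size and focal length. Here is the standard formula with hypothetical numbers (5.4 µm pixels, 1000 mm focal length, 0.7x reducer; none of these figures come from the thread):

```python
def pixel_scale(pixel_size_um, focal_length_mm):
    """Arcseconds per pixel: 206.265 * pixel size (um) / focal length (mm)."""
    return 206.265 * pixel_size_um / focal_length_mm

native = pixel_scale(5.4, 1000)         # ~1.11 "/pixel without reducer
reduced = pixel_scale(5.4, 1000 * 0.7)  # ~1.59 "/pixel -- coarser sampling
print(round(native, 2), round(reduced, 2))   # 1.11 1.59
```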


Vlaiv, I think it does matter, when deciding whether or not to use a reducer, whether or not you want the whole field of view. If you do want the whole field you'll be happy to accept the reduced resolution in order to widen the field and you will get to an acceptable SN ratio at that resolution in less time. I take 'acceptable' to mean the noise level when the image is presented at 100%. It will never have been your intention to resample the reduced image upwards by the amount the reducer reduced its size. And you couldn't usefully do so anyway.

However, if you only want an 'area of interest' which will fit on the chip without reducer then you can make the image acceptable in noise terms by resampling it downwards to the size of the same area of interest as seen in the reduced one. I don't claim they will be precisely equivalent but they have had the same number of 'area of interest' photons in the same time and will not be substantially different.

Olly


8 minutes ago, vlaiv said:

Problem here is because we are using term resolution in two different contexts.

First context is: mapping between angular size and size on sensor - arc seconds per pixel

Second context for resolution is: level of recorded detail - meaning high resolution small features recorded, low resolution - only large scale features recorded. I think that @Adam J meant that you might even not loose resolution in sense that all small scale features that you recorded would be still recognizable if you oversampled when recording and then downsampled image - so resolution in terms of arc seconds per pixel would change, but resolution of features might not change (same size features identifiable in both images).

Ah, the "layman's" meaning of "resolution". Mixing these two variations causes much confusion in conversations, not to mention the other variations (number of pixels, size of image, clarity of image, etc.). I always preferred "recorded detail" combined with image clarity (in typical conversation).

Rodd


15 minutes ago, ollypenrice said:

However, if you only want an 'area of interest' which will fit on the chip without reducer then you can make the image acceptable in noise terms by resampling it downwards to the size of the same area of interest as seen in the reduced one. I don't claim they will be precisely equivalent but they have had the same number of 'area of interest' photons in the same time and will not be substantially different.

Olly

This is where the subtle disagreement comes in: downward sampling will not bring out data that is hidden under the noise floor. Yes, they do have exactly the same number of photons within the area of interest in a purely optical sense, but once you bring the camera into the picture, the fact remains that those photons are spread over more pixels, and so the signal to noise is worse. It's exactly the inverse of the change in signal to noise ratio you would get by hardware binning. You don't get that same gain by software binning (reducing the resolution) as described; although it may help make data that is marginal appear better, it's not the same thing.

Image quality in this context is a balance between signal to noise ratio and resolution.

If you have a great signal to noise ratio and really horrible resolution within the area of interest, then that's no good.

If you have a horrible signal to noise ratio and fantastic resolution within the area of interest, then that's no good either.

All the reducer is doing is allowing you to exchange resolution for additional signal to noise, and some extra field of view if you need it.

So in these terms the reducer is not always improving image quality; it's just a tool for exchanging one parameter for another.


8 minutes ago, ollypenrice said:

Vlaiv, I think it does matter, when deciding whether or not to use a reducer, whether or not you want the whole field of view. If you do want the whole field you'll be happy to accept the reduced resolution in order to widen the field and you will get to an acceptable SN ratio at that resolution in less time. I take 'acceptable' to mean the noise level when the image is presented at 100%. It will never have been your intention to resample the reduced image upwards by the amount the reducer reduced its size. And you couldn't usefully do so anyway.

However, if you only want an 'area of interest' which will fit on the chip without reducer then you can make the image acceptable in noise terms by resampling it downwards to the size of the same area of interest as seen in the reduced one. I don't claim they will be precisely equivalent but they have had the same number of 'area of interest' photons in the same time and will not be substantially different.

Olly

Yes, I agree with this: a focal reducer matters both for framing and because it lowers the time to a target SNR. I also agree that a lower integration time for a target SNR can be achieved without a focal reducer if one bins down a high-resolution image, and this process can produce two results: increased brightness and increased SNR, or original brightness kept but SNR increased just the same (just the difference between additive and averaging resampling). And yes, using a focal reducer has a small "edge" over binning, in the same way hardware binning has an edge over software binning (1x read noise per binned pixel vs 4x read noise per binned pixel).


7 minutes ago, vlaiv said:

Yes, I agree with this: a focal reducer matters both for framing and because it lowers the time to a target SNR. I also agree that a lower integration time for a target SNR can be achieved without a focal reducer if one bins down a high-resolution image, and this process can produce two results: increased brightness and increased SNR, or original brightness kept but SNR increased just the same (just the difference between additive and averaging resampling). And yes, using a focal reducer has a small "edge" over binning, in the same way hardware binning has an edge over software binning (1x read noise per binned pixel vs 4x read noise per binned pixel).

Yes, broadly I did agree with what Olly said too, but you can also use a reducer in addition to binning, as opposed to instead of binning. Also, software and hardware binning are not the same thing in terms of noise reduction. In the case of software binning you average the noise out, in a similar way to stacking; in hardware binning you are actually increasing the raw signal to noise level. As you say, it's 1x read noise vs 4x read noise; software binning will just smooth the noise out, it won't actually bring out detail that is below the noise floor. So very faint signal that is close to the noise floor will be revealed one way but not the other.
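The 1x vs 4x read-noise point can be sketched for a 2x2 bin. The 20 e- per-pixel signal and 5 e- read noise are assumed illustration values:

```python
import math

signal_per_px, rn = 20.0, 5.0   # assumed per-pixel signal and read noise (e-)

# Hardware 2x2 bin: charge is summed on-chip and read once -> one dose of RN.
hw_snr = (4 * signal_per_px) / math.sqrt(4 * signal_per_px + rn ** 2)

# Software 2x2 bin: four separate reads are summed -> four doses of RN.
sw_snr = (4 * signal_per_px) / math.sqrt(4 * signal_per_px + 4 * rn ** 2)

print(round(hw_snr, 2), round(sw_snr, 2))   # 7.81 5.96
```

The gap between the two shrinks as the signal (shot-noise) term grows, which is why the hardware-binning edge matters most for signal near the noise floor.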


6 minutes ago, Adam J said:

This is where the subtle disagreement comes in: downward sampling will not bring out data that is hidden under the noise floor. Yes, they do have exactly the same number of photons within the area of interest in a purely optical sense, but once you bring the camera into the picture, the fact remains that those photons are spread over more pixels, and so the signal to noise is worse. It's exactly the inverse of the change in signal to noise ratio you would get by hardware binning. You don't get that same gain by software binning (reducing the resolution) as described; although it may help make data that is marginal appear better, it's not the same thing.

Image quality in this context is a balance between signal to noise ratio and resolution.

If you have a great signal to noise ratio and really horrible resolution within the area of interest, then that's no good.

If you have a horrible signal to noise ratio and fantastic resolution within the area of interest, then that's no good either.

All the reducer is doing is allowing you to exchange resolution for additional signal to noise, and some extra field of view if you need it.

So in these terms the reducer is not always improving image quality; it's just a tool for exchanging one parameter for another.

But will it allow me to complete an image faster? Yes, I want the added FOV (otherwise I would not use the reducer). When viewing an image at 1:1, the eye can only pick out details of a certain smallness anyway--unless you zoom, but one can't zoom in a print. So, am I correct in assuming that, regardless of everything said, if I use a focal reducer and go from F7.7 to F5.4, I can reduce exposure time by almost 1/2 and get the same quality image?

Rodd


3 minutes ago, Adam J said:

Yes, broadly I did agree with what Olly said too, but you can also use a reducer in addition to binning, as opposed to instead of binning. Also, software and hardware binning are not the same thing in terms of noise reduction. In the case of software binning you average the noise out, in a similar way to stacking; in hardware binning you are actually increasing the raw signal to noise level.

But--when you bin 2x2, you lose resolution without gaining FOV, so it is a pure loss. With a reducer, you lose resolution but gain FOV, so it is an exchange.

Rodd


2 minutes ago, Rodd said:

But will it allow me to complete an image faster? Yes, I want the added FOV (otherwise I would not use the reducer). When viewing an image at 1:1, the eye can only pick out details of a certain smallness anyway--unless you zoom, but one can't zoom in a print. So, am I correct in assuming that, regardless of everything said, if I use a focal reducer and go from F7.7 to F5.4, I can reduce exposure time by almost 1/2 and get the same quality image?

Rodd

Forget that the brighter areas look even brighter in the reduced image; those areas are going to have good signal to noise with or without the reducer. The place where the reducer will make the difference is at the margins of detectability. If you want to bring those areas above the noise floor, then yes, a focal reducer will allow you to do it in less time than the same setup without one.
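Rodd's "almost 1/2" figure matches the daytime F-ratio rule of thumb, which, per the discussion above, is best read as an upper bound on the benefit rather than a guarantee:

```python
# Exposure-time ratio from the F-ratio rule: for extended-object surface
# brightness per pixel, t scales with (F-ratio)^2 at fixed aperture/camera.
t_ratio = (5.4 / 7.7) ** 2
print(round(t_ratio, 2))   # 0.49 -- roughly half the exposure time
```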


Archived

This topic is now archived and is closed to further replies.
