
Reducer Myth revisited



42 minutes ago, ollypenrice said:

Adam and Vlaiv in particular, how about this caveat for the diagrams I posted?

These diagrams assume workable sampling rates for the camera in all cases. If the longer focal length introduces over-sampling, it will add no resolution, and the imager would benefit from the reducer since it adds speed without reducing resolution. An alternative would be to bin the data when oversampled.

I'm left wondering if we don't need a new unit. Arcseconds per pixel is fine for resolution but don't we really need something indicating flux per pixel? This might be indicated by something like square mm of aperture per pixel, no? 

Olly

 

I'm not sure people will understand what a "workable sampling rate" is. An imager might benefit from a reducer if they are properly sampled at the native focal length, or even undersampled. In fact, they will benefit from a focal reducer in terms of speed if they leave the sampling rate as is and are happy with it - the image will reach the target SNR faster.

Some time ago, when I first took an interest in the whole F/ratio vs speed business, I also tried to come up with a meaningful measure of how fast a system is - something that could be used to compare two setups in terms of speed. It is possible to do, but it defeats the point: too much math is involved for it to be done quickly in one's head (things need to be squared and divided). The only decent thing I got out of it is "aperture at resolution" - but again, that does not help much in this case, or in the general case, because resolution changes.
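
If anyone wants to play with the idea, here is a minimal sketch in Python of such a figure, along the lines of Olly's "square mm of aperture per pixel" - aperture area multiplied by the sky area each pixel covers. The function name and numbers are illustrative, not a standard metric:

```python
import math

def flux_per_pixel(aperture_mm, focal_length_mm, pixel_um):
    """Hypothetical 'speed' figure: collecting area (mm^2) times the sky
    area each pixel sees (arcsec^2). Photon flux per pixel scales with it."""
    sampling = 206.265 * pixel_um / focal_length_mm  # arcsec per pixel
    area = math.pi * (aperture_mm / 2.0) ** 2        # aperture area, mm^2
    return area * sampling ** 2

# 8" f/10 (203 mm, 2032 mm FL) with 3.8 um pixels, native vs x0.63 reducer:
print(flux_per_pixel(203, 2032, 3.8))         # native
print(flux_per_pixel(203, 2032 * 0.63, 3.8))  # ~2.5x larger, i.e. (1/0.63)^2
```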

Even if we define some measure, it will be more of a "what the setup is capable of delivering" type of thing, but it will not be a relative measure in all scenarios (like saying setup A is twice as fast as setup B). Whenever the signal is weak, other things start to matter more than aperture and sampling rate. The processing workflow can have an effect as well - the number of calibration frames and the algorithms used, for example.


4 minutes ago, Rodd said:

I think the question was more about how to do it in PI, but great info. The question remains: how does this translate to PI?

Well, the PI documentation does offer some insight into how it's all done - and what options are available:

https://pixinsight.com/doc/tools/Resample/Resample.html

https://pixinsight.com/doc/docs/InterpolationAlgorithms/InterpolationAlgorithms.html

In fact, in this last link you will see the math behind it and the effects it has on the image and on noise in some scenarios.


1 hour ago, ollypenrice said:

Adam and Vlaiv in particular, how about this caveat for the diagrams I posted?

These diagrams assume workable sampling rates for the camera in all cases. If the longer focal length introduces over-sampling, it will add no resolution, and the imager would benefit from the reducer since it adds speed without reducing resolution. An alternative would be to bin the data when oversampled.

I'm left wondering if we don't need a new unit. Arcseconds per pixel is fine for resolution but don't we really need something indicating flux per pixel? This might be indicated by something like square mm of aperture per pixel, no? 

Olly

 

This does not really get to the point of using a reducer to decrease imaging time for smaller targets within a larger FOV. My original post tried to ask (it's proving to be a difficult thing to articulate)... well, how about this: you have an 8-inch scope at f/10 and you want to image M1. So you say, wait, I will throw on the reducer, image M1 and its surrounding environs, then crop out M1, and I will have my picture of M1 in half the time. A very specific use of reducers. I understand all other aspects of the devices. You are interested in the interlacing details in the core, naturally.

Rodd


39 minutes ago, vlaiv said:

Well, PI documentation does offer some insight in how it's all done - and what options are available:

https://pixinsight.com/doc/tools/Resample/Resample.html

https://pixinsight.com/doc/docs/InterpolationAlgorithms/InterpolationAlgorithms.html

In fact in this last link you will see math behind it and effects it has on image and noise in some scenarios

Thanks. You'd think the PI folks would have offered this. Like pulling teeth.
 

Question: is there any difference between software binning and using a camera with bigger pixels that is better suited to the scope? (In my case I am way oversampled with the TOA at native focal length and the ASI1600. I can bin to bring the resolution better in line with the FWHM - closer to 2x as opposed to 1.6 - or I can switch to a larger-pixel camera.) Any major advantage to either? Assume software binning.


54 minutes ago, Rodd said:

This does not really get to the point of using a reducer to decrease imaging time for smaller targets within a larger FOV. My original post tried to ask (it's proving to be a difficult thing to articulate)... well, how about this: you have an 8-inch scope at f/10 and you want to image M1. So you say, wait, I will throw on the reducer, image M1 and its surrounding environs, then crop out M1, and I will have my picture of M1 in half the time. A very specific use of reducers. I understand all other aspects of the devices. You are interested in the interlacing details in the core, naturally.

Rodd

I think it does address the point because the reducer applies the light from the same aperture to fewer pixels, an increase in flux per pixel.

Regarding your M1 example, this is pure 'F ratio myth' territory whether you subscribe to the myth or reject it. My contention is that you don't need the reducer. You could just work in Bin 2 or resample the native image downwards for a comparable result without the expense of the reducer, its more critical focus and its tendency, in some cases, to plague you with internal reflections, spacer distances, tilt, etc.

The extreme example would be the Hyperstar. Some of the wording on their website implies that exposure times go from hours to seconds. So they might, but not of the same target! You can hardly compare M51 at F10 and F2 from the same aperture because at F2 it would be tiny and could clearly not be resampled upwards to the size of the F10 image.*

Olly

*Edit: Cannot be usefully resampled upwards, as Vlaiv says below.
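
As a rough check of that scale argument, assuming an 8-inch SCT (2032 mm at f/10; the numbers are only illustrative):

```python
fl_f10 = 2032.0              # 8" SCT at f/10, focal length in mm
fl_f2 = fl_f10 * 2 / 10      # same aperture at f/2 (Hyperstar)
print(fl_f10 / fl_f2)        # 5.0: M51 is 5x smaller across at f/2
print((fl_f10 / fl_f2) ** 2) # 25.0: it falls on 25x fewer pixels
```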


39 minutes ago, Rodd said:

This does not really get to the point of using a reducer to decrease imaging time for smaller targets within a larger FOV. My original post tried to ask (it's proving to be a difficult thing to articulate)... well, how about this: you have an 8-inch scope at f/10 and you want to image M1. So you say, wait, I will throw on the reducer, image M1 and its surrounding environs, then crop out M1, and I will have my picture of M1 in half the time. A very specific use of reducers. I understand all other aspects of the devices. You are interested in the interlacing details in the core, naturally.

Rodd

In principle you can do that, and it will lead to faster imaging (or a better image in the same time). The same thing can be achieved without a focal reducer by binning your data (almost the same thing - there are very minute differences related to pixel blur and read noise, but you will be hard pressed to see them).

You can even do what you proposed at one point - if your native FL is oversampling, use a focal reducer to record a smaller image of the object more quickly, then resample it back to the wanted size. However, I don't really see the point in resizing back to the original size - no detail will be gained that way (and that is fine, because no detail was present in the first place when oversampling) and the resulting image will look less sharp. You can leave it at the lower resolution / smaller object scale - it will be aesthetically more pleasing to look at, and if someone wants to see it enlarged (to the same effect as if you had upsampled it during processing), they can just hit the magnify button (Ctrl + in their browser) and the image will be enlarged. Whoever chooses to do this will be aware that the blur is a consequence of the enlargement and will not think your image is at fault (but they can object to it if you do it in processing).

28 minutes ago, Rodd said:

Thanks. You'd think the PI folks would have offered this. Like pulling teeth.
 

Question: is there any difference between software binning and using a camera with bigger pixels that is better suited to the scope? (In my case I am way oversampled with the TOA at native focal length and the ASI1600. I can bin to bring the resolution better in line with the FWHM - closer to 2x as opposed to 1.6 - or I can switch to a larger-pixel camera.) Any major advantage to either? Assume software binning.

In practice there will be no difference, assuming you compare bin x2 against pixels twice as large (x4 in surface area - x2 width, x2 height). In theory there are subtle differences, especially if you use software binning.

There will be a difference in read noise for the binned camera - it doubles "per pixel" - so instead of comparing the smaller-pixel camera's base read noise to the larger-pixel camera's, compare double that value. For the ASI1600, bin x2 gives 7.6um pixels, so when comparing to a camera that has 7.6um pixels, use 3.4e read noise instead of 1.7e.
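
To make that arithmetic explicit (a minimal sketch: binning sums bin^2 pixels, and their independent read noises add in quadrature):

```python
import math

def binned_read_noise(read_noise_e, bin_factor):
    # bin_factor^2 pixels are summed; their read noises add in quadrature
    return math.sqrt(bin_factor ** 2 * read_noise_e ** 2)

print(binned_read_noise(1.7, 2))  # 3.4 e-, the figure quoted above
```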

If you do the binning a certain way, you will get a slightly sharper image for the same conditions - this is due to pixel blur. The surface of the pixel causes some blur on top of all the other things that blur the image (atmosphere, aperture, tracking/guiding). It is a very small difference, but it is there. The larger the surface of the pixel, the larger the pixel blur. Regular binning does the same to pixel blur as using a larger pixel - so no advantage there. But other types of binning can circumvent the pixel blur issue to some degree.

There are a few more differences worth mentioning in an "academic" sense. One is hot pixels: when you software bin, a hot pixel costs you only a quarter of the binned pixel's value; with a larger-pixel camera, the whole pixel is lost. Similarly, you can bin in a way that accounts for the fact that not all pixels have the same read noise (some are noisier than others), doing a weighted average to improve the read noise characteristics.
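
A small numpy sketch of the hot pixel point, assuming simple average binning (the values are made up): the hot pixel is averaged down with three good neighbours, whereas a physically larger hot pixel would be a total loss.

```python
import numpy as np

def software_bin2(img):
    """Average 2x2 blocks (simple software binning)."""
    h, w = img.shape
    return img[:h//2*2, :w//2*2].reshape(h//2, 2, w//2, 2).mean(axis=(1, 3))

frame = np.full((4, 4), 100.0)
frame[0, 0] = 10000.0        # one hot pixel in an otherwise flat frame
print(software_bin2(frame))  # its block reads 2575.0: diluted, not lost
```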

I listed these subtle differences just so people are informed about them, but in practice you will see almost no difference between a camera with larger pixels and a binned camera with smaller pixels.


2 minutes ago, ollypenrice said:

I think it does address the point because the reducer applies the light from the same aperture to fewer pixels, an increase in flux per pixel.

Olly

So, to be clear, in hopes of putting this little rascal to bed: have you changed your position from some time ago regarding using reducers to increase imaging speed for objects within a FOV?

 


5 minutes ago, vlaiv said:

In principle you can do that, and it will lead to faster imaging (or a better image in the same time). The same thing can be achieved without a focal reducer by binning your data (almost the same thing - there are very minute differences related to pixel blur and read noise, but you will be hard pressed to see them).

You can even do what you proposed at one point - if your native FL is oversampling, use a focal reducer to record a smaller image of the object more quickly, then resample it back to the wanted size. However, I don't really see the point in resizing back to the original size - no detail will be gained that way (and that is fine, because no detail was present in the first place when oversampling) and the resulting image will look less sharp. You can leave it at the lower resolution / smaller object scale - it will be aesthetically more pleasing to look at, and if someone wants to see it enlarged (to the same effect as if you had upsampled it during processing), they can just hit the magnify button (Ctrl + in their browser) and the image will be enlarged. Whoever chooses to do this will be aware that the blur is a consequence of the enlargement and will not think your image is at fault (but they can object to it if you do it in processing).

In practice there will be no difference, assuming you compare bin x2 against pixels twice as large (x4 in surface area - x2 width, x2 height). In theory there are subtle differences, especially if you use software binning.

There will be a difference in read noise for the binned camera - it doubles "per pixel" - so instead of comparing the smaller-pixel camera's base read noise to the larger-pixel camera's, compare double that value. For the ASI1600, bin x2 gives 7.6um pixels, so when comparing to a camera that has 7.6um pixels, use 3.4e read noise instead of 1.7e.

If you do the binning a certain way, you will get a slightly sharper image for the same conditions - this is due to pixel blur. The surface of the pixel causes some blur on top of all the other things that blur the image (atmosphere, aperture, tracking/guiding). It is a very small difference, but it is there. The larger the surface of the pixel, the larger the pixel blur. Regular binning does the same to pixel blur as using a larger pixel - so no advantage there. But other types of binning can circumvent the pixel blur issue to some degree.

There are a few more differences worth mentioning in an "academic" sense. One is hot pixels: when you software bin, a hot pixel costs you only a quarter of the binned pixel's value; with a larger-pixel camera, the whole pixel is lost. Similarly, you can bin in a way that accounts for the fact that not all pixels have the same read noise (some are noisier than others), doing a weighted average to improve the read noise characteristics.

I listed these subtle differences just so people are informed about them, but in practice you will see almost no difference between a camera with larger pixels and a binned camera with smaller pixels.

So it makes more sense not to match pixel size to the scope. It makes more sense to use pixels that are too small, because then you can have both binned images and, when the seeing is great, higher-resolution images.


13 minutes ago, Rodd said:

So it makes more sense not to match pixel size to the scope. It makes more sense to use pixels that are too small, because then you can have both binned images and, when the seeing is great, higher-resolution images.

Well, yes, if you know what you are doing :D

I'm going to quote the wiki article on the Optical transfer function (https://en.wikipedia.org/wiki/Optical_transfer_function):

Quote

Just as standard definition video with a high contrast MTF is only possible with oversampling, so HD television with full theoretical sharpness is only possible by starting with a camera that has a significantly higher resolution, followed by digitally filtering. With movies now being shot in 4k and even 8k video for the cinema, we can expect to see the best pictures on HDTV only from movies or material shot at the higher standard. However much we raise the number of pixels used in cameras, this will always remain true in absence of a perfect optical spatial filter. Similarly, a 5-megapixel image obtained from a 5-megapixel still camera can never be sharper than a 5-megapixel image obtained after down-conversion from an equal quality 10-megapixel still camera. Because of the problem of maintaining a high contrast MTF, broadcasters like the BBC did for a long time consider maintaining standard definition television, but improving its quality by shooting and viewing with many more pixels (though as previously mentioned, such a system, though impressive, does ultimately lack the very fine detail which, though attenuated, enhances the effect of true HD viewing).

Similarly, you will get a somewhat sharper image if you oversample and then do some tricks to restore the lost SNR. This is one of the reasons for the recent trend of high-megapixel sensors with ever smaller pixels; coupled with the low read noise of CMOS sensors, it enables you to get sharper images than with larger pixels (the other reason being that it simply sells better when people hear it has "more" of something - more megapixels :D ).

In theory, an ideal sensor would have extremely small pixels and zero read noise. Such a sensor would provide the sharpest image and would offer fractional binning at any level without introducing artifacts. Due to the nature of the world, it is simply not possible to make pixels smaller than the wavelength of light (for example, a 2.4um pixel is only about x4 larger than a red-light wavelength, so you can't get much smaller than that and still record photons), and there will always be some read noise.
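
The "restore lost SNR" part is easy to demonstrate on pure noise - a minimal sketch, assuming plain 2x2 average downsampling:

```python
import numpy as np

rng = np.random.default_rng(1)
oversampled = rng.normal(0.0, 1.0, (1000, 1000))  # unit-sigma noise frame
binned = oversampled.reshape(500, 2, 500, 2).mean(axis=(1, 3))
print(oversampled.std(), binned.std())  # ~1.0 vs ~0.5: x2 downsample doubles SNR
```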


22 minutes ago, ollypenrice said:

I think it does address the point because the reducer applies the light from the same aperture to fewer pixels, an increase in flux per pixel.

Regarding your M1 example, this is pure 'F ratio myth' territory whether you subscribe to the myth or reject it. My contention is that you don't need the reducer. You could just work in Bin 2 or resample the native image downwards for a comparable result without the expense of the reducer, its more critical focus and its tendency, in some cases, to plague you with internal reflections, spacer distances, tilt, etc.

The extreme example would be the Hyperstar. Some of the wording on their website implies that exposure times go from hours to seconds. So they might, but not of the same target! You can hardly compare M51 at F10 and F2 from the same aperture because at F2 it would be tiny and could clearly not be resampled upwards to the size of the F10 image.*

Olly

*Edit: Cannot be usefully resampled upwards, as Vlaiv says below.

Olly - I understand. However, if one happens to have a reducer, then in principle, seeing as it's the same as binning (or close enough), one CAN use it to achieve the results to which I refer... correct? In more succinct form: yes it can be used for that purpose, or no it can't - whether it's a good idea, practical, or economical is another question. Last year, binning never entered the discussion. So, to avoid having to pound through this battlefield again: have you changed your position? (Legitimately so, if that is the case - I make no judgments - I just want to know once and for all.) I own a few reducers. I like using them as opposed to binning - it gives me a wider FOV when I want it. I do not like to change configurations much - it ruins all my flats.


5 minutes ago, Rodd said:

So... it CAN'T be done!!! I get conflicting reports!

I think that you misinterpreted what Olly said there.

34 minutes ago, ollypenrice said:

*Edit: Cannot be usefully resampled upwards, as Vlaiv says below.

What I'm reading (and I'm happy to be corrected) is that Olly is in fact saying:

Yes, you can upsample it to the same size, but as I pointed out, there is no useful detail to be gained by doing so. That does not conflict with what I said: no detail will be gained; in the case of oversampling the image will be the same, because there was no detail there in the first place; and I don't really see the point in doing it, as the image will be more pleasing at the lower resolution, and if people want to enlarge it for a better view they can easily do so themselves by hitting the magnify/zoom button.


2 minutes ago, vlaiv said:

I think that you misinterpreted what Olly said there.

What I'm reading (and I'm happy to be corrected) is that Olly is in fact saying:

Yes, you can upsample it to the same size, but as I pointed out, there is no useful detail to be gained by doing so. That does not conflict with what I said: no detail will be gained; in the case of oversampling the image will be the same, because there was no detail there in the first place; and I don't really see the point in doing it, as the image will be more pleasing at the lower resolution, and if people want to enlarge it for a better view they can easily do so themselves by hitting the magnify/zoom button.

I know you can upsample it. That is not what I mean by "can". I am frustrated. I will beg off and have to bring this all up again.


6 minutes ago, vlaiv said:

I think that you misinterpreted what Olly said there.

What I'm reading (and I'm happy to be corrected) is that Olly is in fact saying:

Yes, you can upsample it to the same size, but as I pointed out, there is no useful detail to be gained by doing so. That does not conflict with what I said: no detail will be gained; in the case of oversampling the image will be the same, because there was no detail there in the first place; and I don't really see the point in doing it, as the image will be more pleasing at the lower resolution, and if people want to enlarge it for a better view they can easily do so themselves by hitting the magnify/zoom button.

The question is whether you can do it and save imaging time. That is the whole point, not whether you CAN do it.


17 minutes ago, vlaiv said:

I think that you misinterpreted what Olly said there.

What I'm reading (and I'm happy to be corrected) is that Olly is in fact saying:

Yes, you can upsample it to the same size, but as I pointed out, there is no useful detail to be gained by doing so. That does not conflict with what I said: no detail will be gained; in the case of oversampling the image will be the same, because there was no detail there in the first place; and I don't really see the point in doing it, as the image will be more pleasing at the lower resolution, and if people want to enlarge it for a better view they can easily do so themselves by hitting the magnify/zoom button.

Yes, that's what I'm saying. In fact, I said it already way back in the thread!

For me, "100%", meaning one camera pixel for one screen pixel, is as large as I will ever present an image, and even getting to that level is time-consuming.

Olly

 


2 minutes ago, Rodd said:

The question is whether you can do it and save imaging time. That is the whole point, not whether you CAN do it.

I think the answer to that question has been given, but to reiterate:

In my words - yes.

Olly's position is clear from his last post on the subject, and I'll quote:

48 minutes ago, ollypenrice said:

Regarding your M1 example, this is pure 'F ratio myth' territory whether you subscribe to the myth or reject it. My contention is that you don't need the reducer. You could just work in Bin 2 or resample the native image downwards for a comparable result without the expense of the reducer, its more critical focus and its tendency, in some cases, to plague you with internal reflections, spacer distances, tilt, etc.

Here I'll point you to the part: "you don't need the reducer". In fact, all of the rest said above is true and has also been demonstrated in this thread:

- Bin x2 will in practice give you the same results

- Downsampling the image will give you comparable results (and I demonstrated how different resampling methods fare against binning).

However, just as bin x2 works, so does a reducer.

In fact, in some cases a reducer will have an edge over binning. For example, if you are sampling at 1"/px and the ideal sampling rate is 1.3"/px, and you happen to have a reducer that reduces your focal length to give you that sampling rate, it will be easier to use it than to bin - simply because suitable fractional binning is not implemented in software yet. The reducer will provide a better SNR improvement than other forms of resampling.

There is another benefit of a reducer vs binning: exposure time. With a reducer you get a stronger signal per pixel, so it is easier to beat the read noise - you need shorter subs. If the sharpness of your image depends on sub duration (shorter subs are sharper than longer subs due to guiding issues, or perhaps wind, or whatever), then the reducer will perform better because you will be able to use shorter subs.
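
To put rough numbers on the shorter-subs point, a sketch using the common "swamp the read noise with sky noise" rule of thumb (the sky rate and swamp factor below are assumptions, not measurements):

```python
def min_sub_seconds(read_noise_e, sky_e_per_px_per_s, swamp=5.0):
    # sub length where sky shot noise = swamp * read noise:
    # sqrt(sky_rate * t) = swamp * RN  =>  t = (swamp * RN)^2 / sky_rate
    return (swamp * read_noise_e) ** 2 / sky_e_per_px_per_s

sky_native = 0.5                      # e-/px/s at native FL (assumed)
sky_reduced = sky_native / 0.63 ** 2  # x0.63 reducer: (1/0.63)^2 more per pixel
print(min_sub_seconds(1.7, sky_native))   # ~145 s
print(min_sub_seconds(1.7, sky_reduced))  # ~57 s: shorter subs suffice
```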


One interesting test any of us could do would be to pre-process and post-process an image we already have, but using only half the data. My prediction would be this: the half-data image, when shown at some fraction of full size, will look pretty similar to the full-data image. It will break down into noise as we take it closer to full size. This would give us a practical feel for the role of resizing in influencing perceived image quality.

Olly


3 minutes ago, ollypenrice said:

One interesting test any of us could do would be to pre-process and post-process an image we already have, but using only half the data. My prediction would be this: the half-data image, when shown at some fraction of full size, will look pretty similar to the full-data image. It will break down into noise as we take it closer to full size. This would give us a practical feel for the role of resizing in influencing perceived image quality.

Olly

I don't really follow what you propose be done.

By "half the data", do you mean half of the subs, or "every other pixel"?

I'm not getting the parts "some fraction of the full size" and "we take it closer to full size".

Once I understand your proposal, I'd be happy to give it a go and show the differences.


1 hour ago, vlaiv said:

I don't really follow what you propose be done.

By "half the data", do you mean half of the subs, or "every other pixel"?

I'm not getting the parts "some fraction of the full size" and "we take it closer to full size".

Once I understand your proposal, I'd be happy to give it a go and show the differences.

Half the subs, Vlaiv. My whole question revolves around imaging time.


2 minutes ago, Rodd said:

Half the subs, Vlaiv. My whole question revolves around imaging time.

I think I get it now.

The idea is to simulate the reducer scenario?

I can do the following: take a data set and stack it at "normal" resolution. Then take half of that data set (half the subs), bin it 2x2 to simulate a reducer, and stack that as well. Create a comparison between the images at large scale and small scale.

Enlarge the small image to match the scale of the larger one and do another comparison.

I can do the same with a quarter of the subs (that would be the equivalent time when using a x0.5 reducer or 2x2 binning - it should give the same SNR as native resolution).

Will do that and post results sometime today.
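
For anyone who wants to preview the outcome on synthetic data first, here is a minimal simulation of that comparison (flat faint target, Poisson shot noise plus Gaussian read noise; all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_sub(signal, read_noise=1.7):
    """One simulated sub: shot noise plus read noise."""
    return rng.poisson(signal).astype(float) + rng.normal(0.0, read_noise, signal.shape)

def bin2(img):
    """Sum 2x2 blocks, as a stand-in for a x0.5 reducer."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

target = np.full((200, 200), 5.0)  # faint flat target, e-/pixel/sub

native  = np.mean([make_sub(target) for _ in range(40)], axis=0)
half    = np.mean([bin2(make_sub(target)) for _ in range(20)], axis=0)
quarter = np.mean([bin2(make_sub(target)) for _ in range(10)], axis=0)

# Relative noise (lower means better SNR):
print(native.std() / native.mean())    # native resolution, all 40 subs
print(half.std() / half.mean())        # half the subs, binned: better than native
print(quarter.std() / quarter.mean())  # quarter of the subs, binned: matches native
```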

 


5 minutes ago, vlaiv said:

I think I get it now.

The idea is to simulate the reducer scenario?

I can do the following: take a data set and stack it at "normal" resolution. Then take half of that data set (half the subs), bin it 2x2 to simulate a reducer, and stack that as well. Create a comparison between the images at large scale and small scale.

Enlarge the small image to match the scale of the larger one and do another comparison.

I can do the same with a quarter of the subs (that would be the equivalent time when using a x0.5 reducer or 2x2 binning - it should give the same SNR as native resolution).

Will do that and post results sometime today.

 

I just collected a bit over 8 hours on the Horsehead. I will post binned and unbinned. I am having trouble seeing the difference between binning and just making the image smaller. Images look better smaller - that's why they look so good on my iPhone. Why not just make them smaller?


3 minutes ago, Rodd said:

I just collected a bit over 8 hours on the Horsehead. I will post binned and unbinned. I am having trouble seeing the difference between binning and just making the image smaller. Images look better smaller - that's why they look so good on my iPhone. Why not just make them smaller?

I will do the experiment with the data I have, but if you want, I can do it with the data you've gathered. No need to bin it - just post the aligned and cropped (to remove any alignment/registration artifacts) set of subs and I'll do all the processing.

Once I do the experiment, you will be able to tell the difference between all the versions - binned, native, reduced, enlarged ....


46 minutes ago, vlaiv said:

I will do the experiment with the data I have, but if you want, I can do it with the data you've gathered. No need to bin it - just post the aligned and cropped (to remove any alignment/registration artifacts) set of subs and I'll do all the processing.

Once I do the experiment, you will be able to tell the difference between all the versions - binned, native, reduced, enlarged ....

Well, not reduced - I did not use a reducer. Also, this is not the same as a galaxy. The Horsehead nebula is extended and there is not a lot of detail.


Just now, Rodd said:

Well, not reduced. I did not use a reducer

Indeed - neither my data nor your data was gathered with a reducer. I plan to use binning, as it has the same effect as a reducer for this purpose: it provides the same aperture at resolution.

