
Reducer Myth revisited


Rodd


3 hours ago, vlaiv said:

I'm not sure why this is causing so much headache for people - it is in fact simple.

Using a focal reducer will in fact increase the speed at which you reach a target SNR, regardless of the fact that:

1. the whole target fits in both the unreduced and reduced FOV

2. the same aperture is used, so the total number of photons captured from the target remains the same.

When we talk about SNR in imaging it is rather simple - what counts is the number of photons per pixel. It is the signal level per pixel that goes into the SNR equation. Using a focal reducer means (from the two points above) that the same amount of light from the target is divided among a smaller number of pixels (coarser sampling rate). Each pixel therefore receives more signal, and that means better SNR for the same total imaging time - or a system that is faster to reach a target SNR.

Although the common belief is that this holds only for extended targets, it is actually true for stars as well, to some extent. The star profile is about the same expressed in arcseconds (FWHM or another measure), and when we decrease the sampling rate the star profile is again spread over fewer pixels. Stars are almost never a single pixel unless the image is hugely undersampled - so they are in fact spread over a number of pixels. The relation between resolution and spread is not as straightforward as with extended targets, but in principle that is what happens: increase the sampling rate and you decrease SNR; decrease the sampling rate and you increase SNR.

Using a focal reducer is not part of the F/ratio myth. It works and speeds things up. The F/ratio myth is about something else - the claim that a fast scope will always be faster than a slow scope. That is not true because it does not take into account pixel size (again, how much light is reaching individual pixels). A slow scope with large pixels can be faster than a fast scope with smaller pixels. That is the F/ratio myth. When using the same camera / same pixels, a reducer does raise the speed of reaching a target SNR.

 

I have to disagree. One simple statement is true and can't be refuted. That is, if you image M82 with a 5 inch scope at F10 (native), then switch to a reducer at F5, crop M82 out of the image and enlarge it so the F10 galaxy and the F5 galaxy are equal sizes, you will not get the F5 galaxy faster. That is the myth. People throw on reducers to speed things up, then crop out targets thinking they will save time... not true. Because you lose resolution, to get an equivalent image you will have to actually expose longer.


52 minutes ago, Rodd said:

By the way--I gave it a bit of a stretch. I think I was being too conservative. It still needs several hours of data--but I see some promise now. In the end it's not the camera--it's the photographer. As far as quality goes--FOV and pixel size are camera. Hosey is a bit dark--just a quicky to see how the data is. Also--I will definitely take a closer look at the subs--I haven't even looked at them yet! I am sure there are some nasties in there.

[attached image]

 

 

I wish I could see that in RGB!


4 hours ago, Rodd said:

Because you lose resolution, to get an equivalent image you will have to actually expose longer.

You can't improve resolution by taking longer exposures (nor by taking more exposures of the same duration).


7 hours ago, Rodd said:

I have to disagree. One simple statement is true and can't be refuted. That is, if you image M82 with a 5 inch scope at F10 (native), then switch to a reducer at F5, crop M82 out of the image and enlarge it so the F10 galaxy and the F5 galaxy are equal sizes, you will not get the F5 galaxy faster. That is the myth. People throw on reducers to speed things up, then crop out targets thinking they will save time... not true. Because you lose resolution, to get an equivalent image you will have to actually expose longer.

I can easily refute that :D. It involves understanding how telescopes work, a bit about the nature of light, and some mathematics.

If you wish, we can go into the details, but here is a very simplified version. When using the same camera (same pixel size, same QE, ...) and the same telescope at reduced and native focal lengths, you get different pixel scales.

Let's say that at native focal length your pixel scale is 1"/px, and reduced it is 2"/px. In the first instance, a pixel gathers all the light from a 1"x1" area of the target. In the second, reduced instance, a pixel gathers photons from 2"x2". Because of the way a telescope works, all the photons originating from that area that fall on the aperture will be focused onto the given pixel.

In 2"x2" case, we simply have more signal collected in the same amount of time in comparison to 1"x1" case. Because of the nature of the noise sources present - more signal in given amount of time means better SNR (some sources stay the same per pixel - like read noise, dark current noise, while some have square root dependence on the level of signal - like shot noise and LP noise). Just to make it clear why is this observe following:

1, sqrt(1) = 1, ratio of those is 1/1 = 1

4, sqrt(4) = 2, ratio of those is 4/2 = 2

9, sqrt(9) = 3, ratio of those is 9/3 = 3

....

In other words - you increase the signal and the ratio of that signal to its associated noise (the SNR) increases, while other noise sources remain the same per pixel. I'm just showing here that more signal per pixel will always produce better SNR even if we account for all the noise sources (I can do the precise math formula for SNR if you want - it will still show the same). Btw, shot noise is equal to the square root of the signal because light comes in photons - Poisson distribution.

For the same aperture, a pixel covering more sky will have better SNR than a pixel covering less sky.
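Here is a minimal Python sketch of that precise SNR formula - the standard per-pixel form. All the noise figures (read noise, dark current, sky level) are invented, just to show the scaling:

```python
import math

def snr(signal_e, sky_e, dark_e, read_noise_e):
    # Per-pixel SNR: signal over the quadrature sum of shot noise,
    # sky noise, dark current noise and read noise (all in electrons).
    return signal_e / math.sqrt(signal_e + sky_e + dark_e + read_noise_e ** 2)

read_noise, dark, sky = 5.0, 2.0, 20.0     # assumed values for one exposure

print(snr(100.0, sky, dark, read_noise))           # 1"/px pixel:  ~8.2
print(snr(400.0, 4 * sky, 4 * dark, read_noise))   # 2"/px pixel: ~17.7
# The coarser pixel gets 4x the target signal (and 4x the sky),
# yet still comes out with more than twice the SNR.
```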

You have mixed resolution into all of that, and yes, you are right - in some cases the system with a reducer will lose some of the high frequency components compared to the unreduced system - there will be some detail loss. But let's look at what happens to SNR if you upsample the image - do you lose it?

Here is a little test: we start with an empty image and add Gaussian noise of magnitude 1.

[screenshots: the generated noise image and its measured statistics]

Here it is - we measure it, and it is indeed 1.

Now I will upsample the image by x2, and again measure its noise to see what we get:

[screenshot: noise measurement of the x2 upsampled image]

Ok, so the noise did not increase - in fact it decreased somewhat (I'll explain why) - and we know that the signal remains the same if we upsample the image (resizing does not make an image brighter or fainter). The SNR of the upsampled image remained the same, or to be precise, it improved a bit.
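If you want to repeat the test yourself, here is a minimal sketch assuming numpy and scipy (the exact measured value depends on the interpolation order you pick):

```python
import numpy as np
from scipy.ndimage import zoom

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, size=(512, 512))   # empty image + Gaussian noise, sigma 1
print(noise.std())                              # ~1.0

upsampled = zoom(noise, 2, order=1)             # x2 upsample, bilinear interpolation
print(upsampled.std())                          # below 1.0 - interpolation averages
                                                # neighbours and suppresses the noise
```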

There is no other conclusion but to say that in the workflow you mention, for the same total time, the system with the reducer will provide better SNR, even if you end up upsampling the resulting image (which I would not personally do - nothing is gained by it except a larger scale but somewhat blurry image - I prefer a smaller scale sharp image instead).

Just a final note - an upsampled pure noise image has less noise because noise is random and is distributed over frequencies. When you upsample something you are in fact missing those highest frequencies - their value is 0 - and that means the noise component at those highest frequencies is also 0. But pure noise should be distributed over all frequencies, so it should also have a component at the highest frequencies. By upsampling we effectively removed the noise from the highest frequencies, and the total noise has to go down because of this (it is equal to the sum of noise over all frequencies).
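You can see this in the spectrum too - a rough sketch (how much power survives above the original Nyquist frequency depends on the interpolation used, but it is always far less than a flat noise spectrum would have):

```python
import numpy as np
from scipy.ndimage import zoom

rng = np.random.default_rng(2)
big = zoom(rng.normal(0.0, 1.0, (256, 256)), 2, order=1)  # x2 upsampled pure noise

power = np.abs(np.fft.fftshift(np.fft.fft2(big))) ** 2    # 512x512 power spectrum
c = 256                                                   # spectrum centre (DC)
y, x = np.ogrid[:512, :512]
beyond_nyquist = np.maximum(abs(y - c), abs(x - c)) >= 128  # frequencies the 256 px
                                                            # original could not hold
print(power[beyond_nyquist].mean() / power[~beyond_nyquist].mean())  # << 1
```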


8 hours ago, Thalestris24 said:

Um, dare I ask about the converse i.e. adding a Barlow vs imaging small targets with an equivalent longer focal length scope? I have a feeling that using even a high quality Barlow for imaging is never a good idea...

Louise

ps Think I've raised this before but I've forgotten the answer, d'uh

It works both ways - for the same camera and same scope (aperture), adding a Barlow "makes the pixels smaller" - they end up collecting less light, and SNR will be lower for the same imaging time compared to the same scope without the Barlow. It does not matter whether you use a longer focal length scope with the same aperture or a Barlow (provided the Barlow is perfect and has no light loss - but even if it has some, with modern coatings the difference will be minimal).

The F/ratio "myth" is not a myth if you keep pixel size the same. It only becomes a myth if you assume it to be true always, regardless of pixel size. A statement like "an F/5 scope will always be faster than an F/10 scope" is incorrect because it does not consider pixel size. It will be true if you use the same camera, but if you put a camera with pixels twice as large on the F/10 scope (an otherwise equal camera) they will have the same speed. If you put a camera with 3x larger pixels on the F/10 scope, it will in fact be faster than the F/5 scope with the 3x smaller pixels.
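A two-line sketch of that bookkeeping - the per-pixel light rate goes as (pixel size / F-ratio) squared, so these made-up pixel sizes come out exactly as described:

```python
# Photons per pixel per second ~ aperture_area * pixel_scale^2
#                              ~ D^2 * (pixel / focal_length)^2 = (pixel / f_ratio)^2
def relative_speed(pixel_um, f_ratio):
    return (pixel_um / f_ratio) ** 2

print(relative_speed(3.75, 5))     # F/5 with 3.75 um pixels    -> 0.5625
print(relative_speed(7.50, 10))    # F/10 with 2x larger pixels -> 0.5625 (same speed)
print(relative_speed(11.25, 10))   # F/10 with 3x larger pixels -> 1.27 (faster!)
```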


1 minute ago, kirkster501 said:

.....  so why isn't binning 2x2 the same thing as using a FR?  Because it definitely is not....?????

I find all this a bit mind boggling...🙈

There are only two differences between 2x2 binning and using a focal reducer (a x0.5 one, of course):

- with software binning you will have slightly higher read noise. Using a focal reducer is like hardware binning - the read noise per pixel stays the same. So the difference here applies only to software binning / CMOS sensors.

- with 2x2 binning the FOV remains the same (you can't change the size of the chip by binning), while with a FR it increases (it still does not change the size of the chip, but it does change the geometry of the light, making the "relative" chip size larger).

Otherwise they are the same.
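Here is a small numpy sketch of that read noise difference (the 5 e- read noise is just an assumed figure):

```python
import numpy as np

rng = np.random.default_rng(1)
read_noise, n = 5.0, 100_000

# Software 2x2 bin: four independent reads are summed,
# so read noise adds in quadrature -> 5 * sqrt(4) = 10 e-.
binned = rng.normal(0.0, read_noise, (n, 4)).sum(axis=1)
print(binned.std())    # ~10

# x0.5 focal reducer: the "big" effective pixel is still one single read -> 5 e-.
single = rng.normal(0.0, read_noise, n)
print(single.std())    # ~5
```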


54 minutes ago, kirkster501 said:

.....  so why isn't binning 2x2 the same thing as using a FR?  Because it definitely is not....?????

I find all this a bit mind boggling...🙈

I think Vlaiv's point is that it's the same in terms of light collected per 'effective' pixel (an effective pixel being, say, a two by two binned array of four working as one).

Olly


1 hour ago, vlaiv said:

I can easily refute that :D. It involves understanding how telescopes work, a bit about the nature of light, and some mathematics. […]

Hey, it’s not me you have to convince. It’s Olly.  


1 hour ago, Rodd said:

Ok, so the noise did not increase - in fact it decreased somewhat (I'll explain why) - and we know that the signal remains the same if we upsample the image (resizing does not make an image brighter or fainter). The SNR of the upsampled image remained the same, or to be precise, it improved a bit.

How about a star--Sirius is very bright--won't an image of Sirius where the star takes up most of the FOV (say a 20 pixel square sensor) be reduced in brightness if you expand the image to a 3 foot by 3 foot area?

And you say you lose some resolution--well, at a dark sky site where the seeing is .5 arcsec, if you reduce the resolution from .7 to 1.5 arcsec/px you will lose quite a bit of resolution, no? So in this case, the two images are not the same. The reduced galaxy will not look as sharp as the unreduced galaxy. So even if you collect the data in 1/2 the time--it's not the same image.

 

Lastly (I really hope), enlarging an image ALWAYS reduces its quality. An image is NEVER as good at 1:1 as it is at 1:2 or even 1:3. That's why when you go to full resolution on the forum the images tend to degrade. Even the very best lose a bit. My contention is that cropping a galaxy after collecting with a reducer is never going to be as clear and decent an image as at normal size. I know--I have cropped the FSQ and ASI 1600 reduced by .6x, and while the images are decent, they show defects and they are not as "good" as native sized images. Zooming in reduces image quality. That is why many images are downsampled before posting (kind of annoys me).


1 hour ago, Rodd said:

Hey, it’s not me you have to convince. It’s Olly.  

I'm not sure it is my place to convince anyone of anything. It is in fact the math and theory that offer the explanation - the only thing I can do is help understand it.


5 hours ago, pete_l said:

You can't improve resolution by taking longer exposures (nor by taking more exposures of the same duration).

I know... but that just supports my contention. However, a minor point would be that if you image for long enough you will be able to assemble only the best subs (pristine seeing) and in so doing get a better image.


2 minutes ago, vlaiv said:

I'm not sure it is my place to convince anyone of anything. It is in fact the math and theory that offer the explanation - the only thing I can do is help understand it.

😐

Seriously, even Craig Stark discussed using a reducer to capture data, cropping out small targets and increasing their size to match native. I know you do not think that is "the myth" but it is a myth. I did not make up the statement above (I may have butchered it, but it's still alive!). I am simply a regurgitator. But if you were right, people would never image distant galaxies with 24" scopes at native resolution--say at 5,500 mm FL--they would reduce to 2,500 mm. But people don't do that. Why? Because you do lose resolution at a time when resolution is key.


2 minutes ago, Rodd said:

How about a star--Sirius is very bright--won't an image of Sirius where the star takes up most of the FOV (say a 20 pixel square sensor) be reduced in brightness if you expand the image to a 3 foot by 3 foot area?

I'm assuming you in fact wanted to quote me, although you are quoted as saying that (maybe you quoted your quote of my text - irrelevant :D).

This in fact has to do with SNR and brightness. You can increase "brightness" of something as you please. You can't do that with SNR - it remains the same. That is why we always talk about SNR rather than brightness.

If you have a pixel with a value of, let's say, 10 and you multiply it by some number - let's say 5 - the brightness will now be 50. You changed the "brightness" simply by multiplying by a number.

Now here is another case. Again the pixel has the value of 10, but you know that it is a wrong value. The right value is 12. This means there is an error in the pixel value - or noise. The pixel value deviates from the true value by 2 (12 - 10 = 2). And the ratio of the two is 12/2 = 6 (true signal divided by noise).

Now let's multiply that pixel by 5 again and see what we get. The recorded pixel value will be 50 (5*10), but the true value of the pixel should be 60 (12*5). So the error is now 10 instead of 2 (it also got multiplied by 5). The SNR of this new image is 60/(60-50) = 6. It has not changed.

Multiplying an image by some number alters its brightness, but the SNR remains the same.

Now, you are right that changing the sampling rate will affect brightness - it is in fact recorded signal - but you can equalize the signal, as above, with a simple multiplication. You can't do that with SNR - that is the important bit.
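The same example as a couple of lines of Python, just to make the bookkeeping explicit (the values are the ones from the example above):

```python
true_value, recorded = 12.0, 10.0                    # error of 2 -> SNR = 12/2 = 6
print(true_value / (true_value - recorded))          # 6.0

scaled_true, scaled_recorded = true_value * 5, recorded * 5   # 60 and 50
# "Brightness" went up x5, but the error scaled with it (now 10), so:
print(scaled_true / (scaled_true - scaled_recorded))          # still 6.0
```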

11 minutes ago, Rodd said:

And you say you lose some resolution--well, at a dark sky site where the seeing is .5 arcsec, if you reduce the resolution from .7 to 1.5 arcsec/px you will lose quite a bit of resolution, no? So in this case, the two images are not the same. The reduced galaxy will not look as sharp as the unreduced galaxy. So even if you collect the data in 1/2 the time--it's not the same image.

You can lose some resolution only if there is detail there in the first place. If there is no detail, you won't lose anything - the two images will in fact be the same. In fact, most people don't have a sense of how much sharpness/detail you actually lose when changing pixel scale by a factor of two. And the answer is - in most cases that we face when imaging - not a lot. It is quite a subtle effect.

15 minutes ago, Rodd said:

Lastly (I really hope), enlarging an image ALWAYS reduces its quality. An image is NEVER as good at 1:1 as it is at 1:2 or even 1:3. That's why when you go to full resolution on the forum the images tend to degrade. Even the very best lose a bit. My contention is that cropping a galaxy after collecting with a reducer is never going to be as clear and decent an image as at normal size. I know--I have cropped the FSQ and ASI 1600 reduced by .6x, and while the images are decent, they show defects and they are not as "good" as native sized images. Zooming in reduces image quality. That is why many images are downsampled before posting (kind of annoys me).

You are looking at this the wrong way around. Images look better when reduced because reduction causes an improvement in SNR. Almost always (it depends on the way you reduce - there is only one way to reduce that does not affect SNR).

If your image at 1:1 is good enough and has no perceivable noise, then reduction will reduce the noise further, but it will still not be perceivable - and the images will look the same in quality.

Zooming in, or enlarging an image, does not change the quality of the image (or the signal, to be precise) - it adds nothing to it (depending on the way it is enlarged) - it is only our perception of the image that changes. Imagine the following scenario: instead of enlarging the image, you observe the same image on a very large display with very large pixels, up close. It will not look as good as on a regular monitor - your perception of the recorded data will change, not the image.

10 minutes ago, Rodd said:

😐

Seriously, even Craig Stark discussed using a reducer to capture data, cropping out small targets and increasing their size to match native. I know you do not think that is "the myth" but it is a myth. I did not make up the statement above (I may have butchered it, but it's still alive!). I am simply a regurgitator. But if you were right, people would never image distant galaxies with 24" scopes at native resolution--say at 5,500 mm FL--they would reduce to 2,500 mm. But people don't do that. Why? Because you do lose resolution at a time when resolution is key.

Ok, I can't be responsible for what other people say. What I say should not be taken at face value either. That is why we have mathematics and science - they are unambiguous and can be verified for correctness and consistency.

If what someone says can be verified against science / math facts, then yes, what that person says points toward what science and math say - and that is what you should believe. And if you don't believe it, there is another way to go about it: test it and verify whether it is indeed so.

On the subject of focal length and using a reducer - it depends on a couple of factors:

1. the size of the pixels used

2. the effective resolution the system can provide (scope / mount / atmosphere)

3. interest in recording at the limit of resolution (for some images / measurements you are not interested in the finest detail, for example)

Also bear in mind that not many focal reducers operate at x0.5 without severely degrading image quality (optical aberrations).


3 hours ago, vlaiv said:

It works both ways - for the same camera and same scope (aperture), adding a Barlow "makes the pixels smaller" - they end up collecting less light, and SNR will be lower for the same imaging time compared to the same scope without the Barlow. […]

Thanks, Vlaiv

I'll have to try this out for myself one of these days. It would be great to be able to image smaller targets sometimes. Guiding is more critical, of course. One thing though - it will obviously depend on the camera. For instance, my Atik 383L+ has 5.4um pixels whereas my QHY183M only has 2.4um pixels. However, the 183M has a much higher QE - 84% vs a mere 40% (peak at 540nm). I guess if I were to attempt imaging with a x2 Barlow I'd still be better off with the smaller pixel 183M (but binned 2x2). I wish I had loads more imaging opportunities than I do so that I could try these things out. The proof of the pudding (is in the eating) as they say! :)

Louise


12 minutes ago, vlaiv said:

On the subject of focal length and using a reducer - it depends on a couple of factors: […]

If you image a galaxy and that galaxy fills the FOV, all photons collected by the scope will go into the galaxy. If you throw on a reducer and widen the FOV, all photons collected by the scope will not go into the galaxy - only 50% will. How can this be faster?


8 minutes ago, Thalestris24 said:

Thanks, Vlaiv

I'll have to try this out for myself one of these days. It would be great to be able to image smaller targets sometimes. Guiding is more critical, of course. One thing though - it will obviously depend on the camera. For instance, my Atik 383L+ has 5.4um pixels whereas my QHY183M only has 2.4um pixels. However, the 183M has a much higher QE - 84% vs a mere 40% (peak at 540nm). I guess if I were to attempt imaging with a x2 Barlow I'd still be better off with the smaller pixel 183M (but binned 2x2). I wish I had loads more imaging opportunities than I do so that I could try these things out. The proof of the pudding (is in the eating) as they say! :)

Louise

Yes, it looks like bin x2 on the 183 sensor would give an edge over the KAF-8300 due to lower read noise and better QE - but the question is: why would you use a x2 Barlow and then bin x2 if you get the same sampling rate with the regular pixel size and without the Barlow?
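Rough numbers behind that, using the QE and pixel figures quoted above (speed taken simply as QE times pixel area - read noise is ignored here for simplicity):

```python
atik_8300 = 0.40 * 5.4 ** 2           # KAF-8300: 5.4 um pixels, ~40% peak QE
qhy_183_bin2 = 0.84 * (2 * 2.4) ** 2  # 183M binned 2x2: 4.8 um effective, ~84% QE

print(qhy_183_bin2 / atik_8300)       # ~1.66 - the binned 183M gathers signal faster
```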

Btw, here are some guidelines on maximum achievable resolution: the expected FWHM of the star divided by 1.6-1.8.

So if your star has a FWHM of about 3", the max useful resolution to record such an image would be 3"/1.6 = 1.875"/px. To get to 1"/px you need to have very tight stars - around 1.6" FWHM. That means excellent tracking, largish aperture and steady skies.
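As a quick helper (the 1.6-1.8 divisor is just the rule of thumb above, nothing more):

```python
def max_useful_sampling(fwhm_arcsec, factor=1.6):
    # Coarsest pixel scale ("/px) that still captures all the detail in a star
    # of the given FWHM, per the FWHM / 1.6-1.8 guideline.
    return fwhm_arcsec / factor

print(max_useful_sampling(3.0))   # 1.875 "/px for 3" FWHM stars
print(max_useful_sampling(1.6))   # 1.0 "/px needs ~1.6" FWHM stars
```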

You don't have to sample at 1"/px to get data that is sampled at 1"/px :D - as you already know, you can sample at 0.5"/px, for example, and bin later in software (for CMOS sensors). In fact, there is a better way to bin than regular binning. See this thread for some details:

 


30 minutes ago, vlaiv said:

Ok, I can't be responsible for what other people say. What I say should not be taken at face value either. That is why we have mathematics and science - they are unambiguous and can be verified for correctness and consistency.

It appears I was wrong... the following screenshot of Craig Stark's article seems to reiterate your position. Hard to see here--but it is quite clear on the web page where the images are zoomed in.

 

FRC 300 at f/7.8 and f/5.9. I ran a DDP on the data and used Photoshop to match black and white points and to crop the two frames. Click on the thumbnail for a bigger view and/or just look at the crop.
[thumbnail image: keitel_m1]

Here is a crop around the red and yellow circled areas. In each of these, the left image is the one at f/7.8 and the right at f/5.9 (as you might guess from the difference in scale). Now, look carefully at the circled areas. You can see there is more detail recorded at the lower f-ratio. We can see the noise here in the image and that these bits are closer to the noise floor. Again, the point is that it’s incorrect to say that the f-ratio rules all and that a 1” scope at f/5 is equal to a 100” scope at f/5, but it’s also wrong to say that under real-world conditions, it’s entirely irrelevant.


5 minutes ago, Rodd said:

If you image a galaxy and that galaxy fills the FOV, all photons collected by the scope will go into the galaxy. If you throw on a reducer and widen the FOV, all photons collected by the scope will not go into the galaxy - only 50% will. How can this be faster?

All the photons collected by the scope that originate from the galaxy will go into the "image" of that galaxy, regardless of how large it is in the FOV (except when it extends beyond the FOV).

That is how a telescope works - it collects photons and forms an image out of all of those photons.

There is a slight distinction between a large galaxy that fills the FOV and a smaller one that is only half the size (or rather a quarter of the size - half the width and half the height): the distribution of the light. All the light collected will be the same, but it will be spread over more pixels when the galaxy fills the FOV. It's like taking a canister of water and filling glasses - with a smaller number of glasses, each will contain more water once you divide all the water between them (if the water does not spill or overflow).

It is this level in each glass / pixel that gives rise to SNR. The more water / photons per pixel you have, the better the SNR.

Better SNR in the same time means the same SNR in less time (a faster system).
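A little Poisson simulation of the glasses analogy (every number here is invented - only the ratio matters):

```python
import numpy as np

rng = np.random.default_rng(3)
total_rate = 40_000.0                 # photons/s from the whole galaxy (made up)

for n_pixels in (10_000, 2_500):      # galaxy filling the FOV vs a quarter of it
    per_pixel_rate = total_rate / n_pixels               # same water, fewer glasses
    counts = rng.poisson(per_pixel_rate * 60, n_pixels)  # one 60 s exposure
    print(n_pixels, counts.mean() / counts.std())        # per-pixel SNR (shot noise only)
# 2,500 pixels -> 4x the signal per pixel -> twice the SNR in the same time.
```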


3 minutes ago, vlaiv said:

Yes, it looks like bin x2 on the 183 sensor would give an edge over the KAF-8300 due to lower read noise and better QE - but the question is: why would you use a x2 Barlow and then bin x2 if you get the same sampling rate with the regular pixel size and without the Barlow? […]

 

"why would you use x2 barlow and then bin x2 if you get the same sampling rate with regular pixel size and without barlow?" 

That's true! Ha ha - silly me! These things always catch me out! I tend to be driven by what I might want to achieve rather than how - probably why I make silly errors sometimes!

Louise


1 minute ago, vlaiv said:

All the photons collected by the scope that originate from the galaxy will go into the "image" of that galaxy, regardless of how large it is in the FOV (except when it extends beyond the FOV).

That is how a telescope works - it collects photons and forms an image out of all of those photons.

No way.... if there is a star in the upper left hand corner, some photons collected by the scope will go into that star - or another galaxy, or nebula. If the galaxy only fills 1/4 of the FOV, and there are other things in the FOV, most certainly all the photons collected by the scope WILL NOT go into that galaxy!


2 minutes ago, Rodd said:

Again, the point is that it’s incorrect to say that the f-ratio rules all and that a 1” scope at f/5 is equal to a 100” scope at f/5, but it’s also wrong to say that under real-world conditions, it’s entirely irrelevant.

That is absolutely correct - and it is the origin of the F/ratio myth.

There is in fact one thing that is correct in the statement that a 1" scope at F/5 is equal to a 100" scope at F/5, and that is: if you use the same camera and don't care about resolution loss, then in principle they will be "equally" fast on the signal that both can record.

That is true (or nearly true) in daytime / regular photography, because it works in a light dominated regime (plenty of light, so no one cares about all the noise sources for the most part) and resolution is not severely impacted. It is also often used to characterize different lenses on the same camera - so pixel size does not change. In that world, it holds. It is only problematic when you try to extrapolate the reasoning to cases where other factors start to dominate - like in astro imaging.

