Reducer Myth: Some data.


Rodd


1 minute ago, Rodd said:

But--when you bin 2x2--you lose resolution without gaining FOV.  So it is a pure loss.  With a reducer, you lose resolution but you gain FOV--so it is an exchange.

Rodd

Yes that is another way of looking at it, but we were thinking of the situation where the object of interest is covered by only a small portion of the FOV and you don't lose anything of interest by cropping the image. 

2 minutes ago, Adam J said:

Forget that the brighter areas look even brighter in the reduced image. Those areas are going to have good signal to noise with or without the reducer. The place where the reducer will make the difference is at the margins of detectability. If you want to bring those areas above the noise floor then yes, a focal reducer will allow you to do it in less time than the same setup without one.

How about the situation where I am not interested in bringing anything more out--I am concerned with bringing what I have already brought out faster.

Rodd


6 minutes ago, Adam J said:

Yes, broadly I did agree with what Olly said too, but you can also use a reducer in addition to binning as opposed to instead of binning. Also, software and hardware binning are not the same thing in terms of noise reduction. In the case of software binning you average the noise out in a similar way to stacking; in hardware binning you are actually increasing the raw signal to noise level. As you say, it's 1x read noise vs 4x read noise; software binning will just level the noise out, it won't actually bring out detail that is below the noise floor. So for very faint signal that is close to the noise floor, you will reveal it one way but not the other.

I don't understand this idea that if the target signal is below the read noise it is not recorded at all.

Why isn't it recorded? As far as I understand, read noise is additive noise (not multiplicative or modulatory in some way) - so a photon that comes off a target is still there.

Take for example a target that gives off on average 1 photon per pixel per 100 minutes for a given setup, and a noise floor of 3e. If you stack a couple of hundred subs you will still be able to isolate that 0.01e signal in that pixel. I have many examples of subs where I can't identify faint galaxies (I shoot with CMOS, short subs in high LP) - but those galaxies clearly show when I stack ~200 such subs.


2 minutes ago, Rodd said:

How about the situation where I am not interested in bringing anything more out--I am concerned with bringing what I have already brought out faster.

Rodd

Same thing. Given certain parameters, you can actually calculate how much exposure time you need (divided into subs of a certain length) to achieve a target SNR - target SNR is "hitting what you already have and not bringing out more", meaning showing the object to a certain level of fidelity. Unfortunately the calculation is rather complex - not because the math is complex but because there are a number of things that you have to "guesstimate" - like sky transparency, target brightness in the band that you are recording, losses in the optical train, LP levels ....


5 minutes ago, vlaiv said:

I don't understand this idea that if the target signal is below the read noise it is not recorded at all.

Why isn't it recorded? As far as I understand, read noise is additive noise (not multiplicative or modulatory in some way) - so a photon that comes off a target is still there.

Take for example a target that gives off on average 1 photon per pixel per 100 minutes for a given setup, and a noise floor of 3e. If you stack a couple of hundred subs you will still be able to isolate that 0.01e signal in that pixel. I have many examples of subs where I can't identify faint galaxies (I shoot with CMOS, short subs in high LP) - but those galaxies clearly show when I stack ~200 such subs.

But there is signal that can be detected in a long sub (or multiple long subs) that can't be detected in multiple short subs (even a million), because the object is so dim, for instance.  Could it be related to this?

Rodd


1 minute ago, Rodd said:

But there is signal that can be detected in a long sub (or multiple long subs) that can't be detected in multiple short subs (even a million), because the object is so dim, for instance.  Could it be related to this?

Rodd

Yes, precisely related to this. But I don't understand why signal should be detected in a long sub and not in a stack of many (or, more precisely, enough) short subs.

Again, let's assume that on average we get 1 photon per 100 minutes. We will ignore noise for now, just looking at the signal and whether it is detectable. So I shoot 1 sub that is 100 minutes long, I look at the pixel and I find a value of 1. There is my photon!

Now I take 100 subs of length 1 minute. I look at each of them, and in 99 I don't find anything - the value is 0. But there is one sub that caught the photon; it has a value of 1. There is my photon. For both approaches I can calculate that the rate of photons is 0.01 photons per minute, or 1 photon per 100 minutes on average.

Adding noise into the mix does not change whether this one photon is recorded or not. It just means that I need good SNR in order to say "There is my photon!" with good confidence. And by stacking enough subs (either short or long) I will achieve the needed SNR for a given confidence level.
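A quick numerical check of this argument (a minimal Python sketch, using the toy numbers from the post - 0.01 photons per minute and a 3e noise floor, both illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

RATE_PER_MIN = 0.01   # assumed: 1 photon per 100 minutes, from the post
READ_NOISE_E = 3.0    # assumed: 3e noise floor, from the post

def estimated_rate(sub_minutes, n_subs):
    """Stack n_subs subs; each is Poisson photons plus Gaussian read noise."""
    photons = rng.poisson(RATE_PER_MIN * sub_minutes, size=n_subs)
    read = rng.normal(0.0, READ_NOISE_E, size=n_subs)
    return (photons + read).mean() / sub_minutes  # e/minute estimate

# Long subs and short subs both converge on the true 0.01 e/minute;
# the short subs just need far more of them, because each one carries
# a full dose of read noise on almost no signal.
print(estimated_rate(100, 1_000))    # ~0.01
print(estimated_rate(1, 1_000_000))  # ~0.01, but with a noisier estimate
```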


8 minutes ago, vlaiv said:

Yes, precisely related to this. But I don't understand why signal should be detected in a long sub and not in a stack of many (or, more precisely, enough) short subs.

Again, let's assume that on average we get 1 photon per 100 minutes. We will ignore noise for now, just looking at the signal and whether it is detectable. So I shoot 1 sub that is 100 minutes long, I look at the pixel and I find a value of 1. There is my photon!

Now I take 100 subs of length 1 minute. I look at each of them, and in 99 I don't find anything - the value is 0. But there is one sub that caught the photon; it has a value of 1. There is my photon. For both approaches I can calculate that the rate of photons is 0.01 photons per minute, or 1 photon per 100 minutes on average.

Adding noise into the mix does not change whether this one photon is recorded or not. It just means that I need good SNR in order to say "There is my photon!" with good confidence. And by stacking enough subs (either short or long) I will achieve the needed SNR for a given confidence level.

But there are objects so faint that they can't be detected at all in 1 min subs (I was thinking of 20 sec subs--or really short ones--same principle).  If not enough photons hit the sensor to register, it does not matter how many subs you take--it will never register.  Or, in your way of thinking, maybe you would need to take 1,000,000 subs.  But that really means it can't be detected with shorter subs.

Rodd


2 minutes ago, Rodd said:

But there are objects so faint that they can't be detected at all in 1 min subs (I was thinking of 20 sec subs--or really short ones--same principle).  If not enough photons hit the sensor to register, it does not matter how many subs you take--it will never register.  Or, in your way of thinking, maybe you would need to take 1,000,000 subs.  But that really means it can't be detected with shorter subs.

Rodd

Yes, we'll take 1,000,000 subs if we need to :D

There is a thread in the imaging section that discusses the possibility of recording the Crab nebula pulsar in the visible spectrum - I proposed a method of imaging that involves shooting 3ms subs to record a ~15mag star. There we are talking about 2,500,000 - 3,000,000 subs in order to capture a 10 frame "video" of one cycle of the pulsar (one cycle is about 30ms). So yes, while it "can't be detected" with shorter subs - it is possible to image a ~15mag star with 3ms subs :D

 


1 minute ago, vlaiv said:

Yes, we'll take 1,000,000 subs if we need to :D

There is a thread in the imaging section that discusses the possibility of recording the Crab nebula pulsar in the visible spectrum - I proposed a method of imaging that involves shooting 3ms subs to record a ~15mag star. There we are talking about 2,500,000 - 3,000,000 subs in order to capture a 10 frame "video" of one cycle of the pulsar (one cycle is about 30ms). So yes, while it "can't be detected" with shorter subs - it is possible to image a ~15mag star with 3ms subs :D

 

Ahhh... but stars are different, no?  They are in all other respects in imaging, it seems.  Maybe they are different with respect to this too.  Not sure.  Just think how many 10 sec Ha subs it would take to depict the druid above (way, way above) at the same level of signal it is depicted at in either image posted.  (With the same equipment!)

Rodd


40 minutes ago, Rodd said:

How about the situation where I am not interested in bringing anything more out--I am concerned with bringing what I have already brought out faster.

Rodd

You are still better off with hardware binning, or with larger pixels, or with a reducer than you are with software binning. But software binning will help.


1 minute ago, Adam J said:

You are still better off with hardware binning, or with larger pixels, or with a reducer than you are with software binning. But software binning will help.

So here is a practical question I need to answer.  I am thinking of getting an FSQ 106 to use with a 5.4um pixel camera and a .6x reducer for F3 imaging.  In your opinion, would this system be appreciably faster than an F4.3 system with the same camera (Televue NP101is with .8x reducer)?

Rodd


26 minutes ago, vlaiv said:

Yes, precisely related to this. But I don't understand why signal should be detected in a long sub and not in a stack of many (or, more precisely, enough) short subs.

Again, let's assume that on average we get 1 photon per 100 minutes. We will ignore noise for now, just looking at the signal and whether it is detectable. So I shoot 1 sub that is 100 minutes long, I look at the pixel and I find a value of 1. There is my photon!

Now I take 100 subs of length 1 minute. I look at each of them, and in 99 I don't find anything - the value is 0. But there is one sub that caught the photon; it has a value of 1. There is my photon. For both approaches I can calculate that the rate of photons is 0.01 photons per minute, or 1 photon per 100 minutes on average.

Adding noise into the mix does not change whether this one photon is recorded or not. It just means that I need good SNR in order to say "There is my photon!" with good confidence. And by stacking enough subs (either short or long) I will achieve the needed SNR for a given confidence level.

You are always better off with a few long subs than with many, many short subs, because you are also accumulating read noise in addition to signal, and so the delta between the noise + signal and the noise alone is very small. The problem is that in that one sub that has a value of 1... was that your photon or was it read noise? Once you get more 1s generated by read noise than you do by photons, you are really, really up against it. Nothing is impossible with enough subs, but once you have more noise than signal you are talking an unachievable number of subs in a practical sense.

 


Just now, Adam J said:

You are always better off with a few long subs than with many, many short subs, because you are also accumulating read noise in addition to signal, and so the delta between the noise + signal and the noise alone is very small. The problem is that in that one sub that has a value of 1... was that your photon or was it read noise? Once you get more 1s generated by read noise than you do by photons, you are really, really up against it. Nothing is impossible with enough subs, but once you have more noise than signal you are talking an unachievable number of subs in a practical sense.

 

OK, so we do agree that for any SNR there is a number of subs of a given duration that will achieve that SNR, and you were all only describing "not recording signal" as hard to attain, not impossible?

Btw, with CMOS cameras where read noise is of the order of 1.5e, short subs don't lose out that much compared to long ones. Here is an example for one of my setups, under average conditions:

Parameters for calculation:

RC8" - 1624mm focal length, 3.8um pixel camera, QE ~50%, target brightness mag22, sky brightness mag18, read noise 1.7e, dark noise set to 0 (it is really low when cooled at -20C), total integration time 4h, software binned for resolution ~1"/pixel

20 minute subs: estimated SNR ~7.512

4 minute subs: estimated SNR ~7.496

20 second subs: estimated SNR ~7.215

4 second subs: estimated SNR ~6.276
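For anyone who wants to play with these numbers, here is a minimal sketch of the kind of calculation involved (not vlaiv's actual tool). The per-second rates are back-derived from the electron counts quoted later in the thread (0.16e target and 6.45e sky per 4 s sub), and the software bin to ~1"/pixel is approximated as a flat 2x SNR gain:

```python
import math

# Assumed rates, back-derived from the figures quoted in this thread:
TARGET_E_PER_S = 0.04   # 0.16 e per 4 s sub, per pixel
SKY_E_PER_S = 1.61      # 6.45 e per 4 s sub, per pixel
READ_NOISE_E = 1.7
BIN_SNR_GAIN = 2.0      # rough stand-in for the software bin to ~1"/pixel

def stacked_snr(sub_s, total_s):
    """SNR of a stack: shot noise grows with total time, read noise once per sub."""
    n_subs = total_s / sub_s
    signal = TARGET_E_PER_S * total_s
    variance = signal + SKY_E_PER_S * total_s + n_subs * READ_NOISE_E ** 2
    return BIN_SNR_GAIN * signal / math.sqrt(variance)

total = 4 * 3600  # 4 h total integration
for sub in (20 * 60, 4 * 60, 20, 4):
    print(f"{sub:5d} s subs: SNR ~ {stacked_snr(sub, total):.2f}")
```

This lands within about 1% of the figures above, which suggests the whole effect is just the stacked-SNR formula: shot noise scales with total integration, while read noise is paid once per sub.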


9 minutes ago, Rodd said:

So here is a practical question I need to answer.  I am thinking of getting an FSQ 106 to use with a 5.4um pixel camera and a .6x reducer for F3 imaging.  In your opinion, would this system be appreciably faster than an F4.3 system with the same camera (Televue NP101is with .8x reducer)?

Rodd

So the FSQ 106 is native F5.0 and F3 reduced. 

 

Televue NP101 is native F5.4 and F4.3 reduced.

 

So for a similar aperture you are going from F4.3 to F3. 

 

I would say that you are going to end up under-sampling with that pixel size with the FSQ 106 @ F3, due to it being 3.71 arc seconds per pixel, so I would not say that the camera is well matched to the focal length; you would want smaller pixels, and so any gains from the faster focal ratio would be eliminated by the need for smaller pixels. The NP101 at F4.3 is better suited to the pixel size of your camera.

So yes, you might image slightly faster at F3, but as I have previously stated there is more to image quality than signal to noise; resolution is also important, and at F3 with that camera you may be on the wrong side of that balance.
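As an aside, the two numbers in play here (pixel scale and relative speed) are one-liners to check; a small sketch, assuming 530 mm native focal length for the FSQ 106 (f/5) and 540 mm for the NP101is (f/5.4):

```python
# Standard formulas:
#   pixel scale ["/px] = 206.265 * pixel_size[um] / focal_length[mm]
#   relative photon rate per pixel, same camera: (f1 / f2)^2
def pixel_scale(pixel_um, focal_mm):
    return 206.265 * pixel_um / focal_mm

print(pixel_scale(5.4, 530 * 0.6))  # FSQ 106 with 0.6x reducer -> ~3.50 "/px
print(pixel_scale(5.4, 540 * 0.8))  # NP101is with 0.8x reducer -> ~2.58 "/px
print((4.3 / 3.0) ** 2)             # ~2.05x faster per pixel at F3 vs F4.3
```

Under those assumptions the reduced FSQ comes out at ~3.5"/px rather than 3.71, matching the FOV-calculator figure Rodd quotes below.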

 


22 minutes ago, vlaiv said:

OK, so we do agree that for any SNR there is a number of subs of a given duration that will achieve that SNR, and you were all only describing "not recording signal" as hard to attain, not impossible?

Btw, with CMOS cameras where read noise is of the order of 1.5e, short subs don't lose out that much compared to long ones. Here is an example for one of my setups, under average conditions:

Parameters for calculation:

RC8" - 1624mm focal length, 3.8um pixel camera, QE ~50%, target brightness mag22, sky brightness mag18, read noise 1.7e, dark noise set to 0 (it is really low when cooled at -20C), total integration time 4h, software binned for resolution ~1"/pixel

20 minute subs: estimated SNR ~7.512

4 minute subs: estimated SNR ~7.496

20 second subs: estimated SNR ~7.215

4 second subs: estimated SNR ~6.276

Yes, we agree that with a ridiculous number of subs it's possible, but the total number required once below the noise floor depends on how far below, and it's not a linear relationship. In your 1-in-100 example you literally would need millions.

As for the example you give, it's worth noting that as the subject gets dimmer, the difference between the SNR of the 20 min subs and the 4 second subs will expand greatly. A mag22 object is not marginal in this sense. The other thing is that, if I remember how this works correctly, if you halved the integration time from 4 hours to say 2 hours, then again the difference between your 20 min subs and the 4 second subs would expand.

It might be more interesting to look at a mag23 object with 1 hour of integration and see how that works out. My hunch is that those values will diverge significantly.


1 minute ago, Adam J said:

Yes, we agree that with a ridiculous number of subs it's possible, but the total number required once below the noise floor depends on how far below, and it's not a linear relationship. In your 1-in-100 example you literally would need millions.

As for the example you give, it's worth noting that as the subject gets dimmer, the difference between the SNR of the 20 min subs and the 4 second subs will expand greatly. A mag22 object is not marginal in this sense. The other thing is that, if I remember how this works correctly, if you halved the integration time from 4 hours to say 2 hours, then again the difference between your 20 min subs and the 4 second subs would expand.

Agree on all of the above, except for signal level vs read noise level in a single sub.

In the above calculation, the 4 second sub signal level from the mag22 target is on average 0.16e, while the read noise level is 1.7e and the sky level is 6.45e. So even though the average electron count per pixel is 10 times smaller than the read noise (and about 40 times smaller than the sky level), we still have a decent SNR, comparable to 20 minute exposures - the difference being ~17% (83% of the SNR of the 20 minute subs).


2 hours ago, Adam J said:

This is where the subtle disagreement comes in: downward sampling will not bring out data that is hidden under the noise floor.

This is true, but whenever I mention the F ratio myth I acknowledge that the fallacy in question is only a fallacy in respect of signal which is above the noise floor. I think this means we agree! Craig Stark has a good video talk on this caveat, though he seems to agree with Stan Moore with regard to signal which is above the noise floor.

We may not be meaning quite the same thing by 'noise,' however. In practical imaging terms, and I always speak as a practical imager, you need a far better S/N ratio to present an image at 100% (1 camera pixel for 1 screen pixel) than you do to present the same image at 66%. The inherent S/N ratio hasn't changed because it's the same image but what looks acceptable changes enormously. So in this discussion the noise I'm talking about is mostly the residual grainy small scale pattern and rather larger scale speckle. The percentage of full size at which the image is shown is enormously important in this regard, so resampling an unreduced image downwards does, in practical terms, diminish its visible noise.

This is why I can take a short dataset in the TEC140, far too short to make an acceptable image in its own right, and still use it, resampled downwards 4x by area, to enhance the resolution in an area of interest in the widefield.

Olly


24 minutes ago, ollypenrice said:

This is true, but whenever I mention the F ratio myth I acknowledge that the fallacy in question is only a fallacy in respect of signal which is above the noise floor. I think this means we agree! Craig Stark has a good video talk on this caveat, though he seems to agree with Stan Moore with regard to signal which is above the noise floor.

We may not be meaning quite the same thing by 'noise,' however. In practical imaging terms, and I always speak as a practical imager, you need a far better S/N ratio to present an image at 100% (1 camera pixel for 1 screen pixel) than you do to present the same image at 66%. The inherent S/N ratio hasn't changed because it's the same image but what looks acceptable changes enormously. So in this discussion the noise I'm talking about is mostly the residual grainy small scale pattern and rather larger scale speckle. The percentage of full size at which the image is shown is enormously important in this regard, so resampling an unreduced image downwards does, in practical terms, diminish its visible noise.

This is why I can take a short dataset in the TEC140, far too short to make an acceptable image in its own right, and still use it, resampled downwards 4x by area, to enhance the resolution in an area of interest in the widefield.

Olly

What you are effectively doing when presenting an image at a smaller size is similar to binning. It increases SNR by means of introducing correlation between pixel values. The exact level of SNR increase is not as easy to calculate as it is in the case of integer binning, but any type of downsampling, whichever method it uses (bilinear, bicubic, ... anything except nearest neighbour), effectively enhances SNR (by an unknown factor, smaller than equivalent binning, but still present).

This even happens when you view an image in any image viewer that supports some kind of filtering, even on 8bit images, especially if some noise is visible at the original size. Just try it - take an image that has some fine-grained noise present, convert it to 8 bit so that some of the noise is noticeable at native scale, and view it zoomed out - the more you zoom out, the less noise is visible.


44 minutes ago, ollypenrice said:

This is true, but whenever I mention the F ratio myth I acknowledge that the fallacy in question is only a fallacy in respect of signal which is above the noise floor. I think this means we agree! Craig Stark has a good video talk on this caveat, though he seems to agree with Stan Moore with regard to signal which is above the noise floor.

We may not be meaning quite the same thing by 'noise,' however. In practical imaging terms, and I always speak as a practical imager, you need a far better S/N ratio to present an image at 100% (1 camera pixel for 1 screen pixel) than you do to present the same image at 66%. The inherent S/N ratio hasn't changed because it's the same image but what looks acceptable changes enormously. So in this discussion the noise I'm talking about is mostly the residual grainy small scale pattern and rather larger scale speckle. The percentage of full size at which the image is shown is enormously important in this regard, so resampling an unreduced image downwards does, in practical terms, diminish its visible noise.

This is why I can take a short dataset in the TEC140, far too short to make an acceptable image in its own right, and still use it, resampled downwards 4x by area, to enhance the resolution in an area of interest in the widefield.

Olly

Yes, I understand the point you are making and mostly agree, but I would also point out that the resolution of the monitor has a lot to do with making something look presentable or not; it's those people who zoom into your image and point out the flaws at the pixel level that I worry about. With 4k monitors the bar for what looks acceptable may change, as the monitor will not be doing a default downsample for you and the image will be displayed at native resolution.

In the end you are software binning to increase signal to noise ratio. It would just be more effective to reduce the focal length, do hardware binning, or use a sensor with bigger pixels, as these methods result in a larger increase in signal to noise and will bring out detail in the image that would have been below the noise floor.


50 minutes ago, ollypenrice said:

This is true, but whenever I mention the F ratio myth I acknowledge that the fallacy in question is only a fallacy in respect of signal which is above the noise floor. I think this means we agree! Craig Stark has a good video talk on this caveat, though he seems to agree with Stan Moore with regard to signal which is above the noise floor.

We may not be meaning quite the same thing by 'noise,' however. In practical imaging terms, and I always speak as a practical imager, you need a far better S/N ratio to present an image at 100% (1 camera pixel for 1 screen pixel) than you do to present the same image at 66%. The inherent S/N ratio hasn't changed because it's the same image but what looks acceptable changes enormously. So in this discussion the noise I'm talking about is mostly the residual grainy small scale pattern and rather larger scale speckle. The percentage of full size at which the image is shown is enormously important in this regard, so resampling an unreduced image downwards does, in practical terms, diminish its visible noise.

This is why I can take a short dataset in the TEC140, far too short to make an acceptable image in its own right, and still use it, resampled downwards 4x by area, to enhance the resolution in an area of interest in the widefield.

Olly

A 50% linear downsample will average four pixels into one, so it should double the signal to noise ratio; a 4x linear downsample will reduce the area by 16x and quadruple the S/N ratio.
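That factor of two is easy to verify on pure noise (a minimal numpy sketch; the 2x2 block average below is the plain "fuse four pixels" case, not any particular program's resample):

```python
import numpy as np

rng = np.random.default_rng(1)

# Pure Gaussian noise: averaging 2x2 blocks halves the standard deviation,
# i.e. doubles SNR for a fixed signal level.
img = rng.normal(0.0, 1.0, size=(1024, 1024))
binned = img.reshape(512, 2, 512, 2).mean(axis=(1, 3))

print(img.std())     # ~1.0
print(binned.std())  # ~0.5
```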


16 minutes ago, Stub Mandrel said:

A 50% linear downsample will average four pixels into one, so it should double the signal to noise ratio; a 4x linear downsample will reduce the area by 16x and quadruple the S/N ratio.

Not quite, if using for example bicubic resampling - the algorithm uses more than 4 adjacent pixels to form the result pixel value. A regular cubic function needs 4 values to interpolate (quadratic needs 3, linear needs 2), so bicubic uses 16 values I believe (not entirely sure, have to check that) - so it will introduce correlation between more pixels than simply summing 4 pixels into one.

Yes, here is a diagram for it:

https://en.wikipedia.org/wiki/Bicubic_interpolation#/media/File:Comparison_of_1D_and_2D_interpolation.svg
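The extra correlation is easy to see empirically; a rough sketch using Pillow's BICUBIC resize as the bicubic downsample (assuming Pillow's convolution-based resampling here; the exact kernel details vary by library):

```python
import numpy as np
from PIL import Image

rng = np.random.default_rng(2)
img = rng.normal(0.0, 1.0, size=(1024, 1024)).astype(np.float32)

def neighbour_corr(a):
    """Correlation between horizontally adjacent pixels."""
    return np.corrcoef(a[:, :-1].ravel(), a[:, 1:].ravel())[0, 1]

block = img.reshape(512, 2, 512, 2).mean(axis=(1, 3))  # plain 2x2 average
cubic = np.asarray(Image.fromarray(img, mode="F").resize((512, 512), Image.BICUBIC))

print(neighbour_corr(block))  # ~0: each output pixel uses its own 2x2 block
print(neighbour_corr(cubic))  # clearly non-zero: adjacent outputs share input pixels
```

The block average leaves neighbouring output pixels independent, while the bicubic kernel spans input pixels shared by neighbouring outputs, which is exactly the correlation being described above.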

 


48 minutes ago, vlaiv said:

Not quite, if using for example bicubic resampling - the algorithm uses more than 4 adjacent pixels to form the result pixel value. A regular cubic function needs 4 values to interpolate (quadratic needs 3, linear needs 2), so bicubic uses 16 values I believe (not entirely sure, have to check that) - so it will introduce correlation between more pixels than simply summing 4 pixels into one.

Yes, here is a diagram for it:

https://en.wikipedia.org/wiki/Bicubic_interpolation#/media/File:Comparison_of_1D_and_2D_interpolation.svg

 

It does depend on the exact operation. If there is no shift in alignment then fewer pixels will be involved.

I think in most imaging programs a straight integer:1 resample will just fuse the appropriate pixels unless you have anti-aliasing switched on.

Anti-aliasing will give a smoother result by using more pixels, but it will also smooth the noise, so I suspect (but can't prove) the S:N ratio may not be affected.


3 hours ago, Adam J said:

So the FSQ 106 is native F5.0 and F3 reduced. 

 

Televue NP101 is native F5.4 and F4.3 reduced.

 

So for a similar aperture you are going from F4.3 to F3. 

 

I would say that you are going to end up under-sampling with that pixel size with the FSQ 106 @ F3, due to it being 3.71 arc seconds per pixel, so I would not say that the camera is well matched to the focal length; you would want smaller pixels, and so any gains from the faster focal ratio would be eliminated by the need for smaller pixels. The NP101 at F4.3 is better suited to the pixel size of your camera.

So yes, you might image slightly faster at F3, but as I have previously stated there is more to image quality than signal to noise; resolution is also important, and at F3 with that camera you may be on the wrong side of that balance.

 

According to the FOV calculator, the pixel scale with the FSQ at F3 would be 3.5.  For widefield shots I thought this would be fine.  I have seen widefield shots at this scale that are amazing (Olly's come to mind).  A smaller pixel camera would be welcome--but then I would lose FOV (unless I went with the ASI 1600MM Cool--but I am not interested in the CMOS imaging style at this point (a plethora of subs); I find 100 or so too many as it is).  But--here's the thing--unless it will be SIGNIFICANTLY faster, it's not worth the $7K it will cost to set it up.

Rodd


54 minutes ago, Rodd said:

According to the FOV calculator, the pixel scale with the FSQ at F3 would be 3.5.  For widefield shots I thought this would be fine.  I have seen widefield shots at this scale that are amazing (Olly's come to mind).  A smaller pixel camera would be welcome--but then I would lose FOV (unless I went with the ASI 1600MM Cool--but I am not interested in the CMOS imaging style at this point (a plethora of subs); I find 100 or so too many as it is).  But--here's the thing--unless it will be SIGNIFICANTLY faster, it's not worth the $7K it will cost to set it up.

Rodd

Depends on what you call significant; personally I would not find it significant enough to warrant 7000 dollars, but that's a personal choice. If you are expecting "WOW OMG IT'S SO MUCH FASTER" out of your 7k... then you are going to have some buyer's regret. The 1600MM Cool is exactly what I would do, but I am not a fan of the number of subs required either....


29 minutes ago, Adam J said:

Depends on what you call significant; personally I would not find it significant enough to warrant 7000 dollars, but that's a personal choice. If you are expecting "WOW OMG IT'S SO MUCH FASTER" out of your 7k... then you are going to have some buyer's regret. The 1600MM Cool is exactly what I would do, but I am not a fan of the number of subs required either....

Well--my goal is to finish an image (LRGB or HaSHO) in 1 night--say 6-8 hours.  Currently I do that amount of time per filter, and it takes weeks because of weather conditions.  I see images from Epsilons that have 3-4 hours and are amazing.  My only real choices with my mount would be the Epsilon 180 at F2.8, OS RH 200 at F/3, or Tak FSQ 106 (or 130) at F/3.  I could do Hyperstar with the C11 Edge--but that does not really interest me.

Rodd

