Adam J

  1. Reducer Myth: Some data.

    Depends on what you call significant; personally I would not find it significant enough to warrant 7000 dollars, but that's a personal choice. If you are expecting "WOW OMG IT'S SO MUCH FASTER" out of your 7k... then you are going to have some buyer's regret. The 1600MM Cool is exactly what I would do, but I am not a fan of the number of subs required either...
  2. Reducer Myth: Some data.

    Yes, I understand the point you are making and mostly agree, but I would also point out that the resolution of the monitor has a lot to do with making something look presentable or not; it's those people who zoom into your image and point out the flaws at the pixel level that I worry about. With 4K monitors the bar may change for what looks acceptable, as the monitor will no longer be doing a default down-sample for you and the image will be displayed at native resolution. In the end you are software binning to increase signal to noise ratio. It would just be more effective to reduce the focal length, do hardware binning or use a sensor with bigger pixels, as these methods result in a larger increase in signal to noise and will bring out detail in the image that would have been below the noise floor.
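A rough numerical sketch of the software-binning trade described above (all numbers invented for illustration, not taken from the thread): averaging 2x2 pixel blocks halves the per-pixel random noise, at the cost of half the linear resolution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical faint extended source: constant signal of 5 e-/pixel
# sitting on Gaussian noise with sigma = 10 e- (illustrative values).
signal, sigma = 5.0, 10.0
img = signal + rng.normal(0.0, sigma, size=(512, 512))

def software_bin2x2(a):
    """Average each non-overlapping 2x2 block (software binning)."""
    return a.reshape(a.shape[0] // 2, 2, a.shape[1] // 2, 2).mean(axis=(1, 3))

binned = software_bin2x2(img)

# Averaging 4 samples cuts the random noise by sqrt(4) = 2, so the
# per-pixel SNR roughly doubles while the resolution halves.
print(signal / img.std())      # ~0.5
print(signal / binned.std())   # ~1.0
```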
  3. Reducer Myth: Some data.

    Yes, we agree that with a ridiculous number of subs it's possible, but the total number required once you are below the noise floor depends on how far below, and it's not a linear relationship. In your 1-in-100 example you would literally need millions. As for the example you give, it's worth noting that as the subject gets dimmer the difference between the SNR of the 20-minute sub and the 4-second sub will expand greatly. A mag 22 object is not marginal in this sense. The other thing is that, if I remember how this works correctly, if you halved the integration time from 4 hours to say 2 hours then again the difference between your 20-minute subs and the 4-second subs would expand. It might be more interesting to look at a mag 23 object with 1 hour of integration and see how that works out. My hunch is that those values will diverge significantly.
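The non-linear relationship above follows from stacking: SNR grows with the square root of the number of subs, so the sub count needed grows with the square of the deficit. A minimal sketch (the target SNR of 5 and the per-sub figures are illustrative assumptions):

```python
import math

def subs_needed(snr_per_sub, target_snr):
    """Stacking N subs improves SNR by sqrt(N), so N = (target / per_sub)^2.
    Quadratic, not linear: halving the per-sub SNR quadruples the subs needed."""
    return math.ceil((target_snr / snr_per_sub) ** 2)

# Signal 1/100th of the per-sub noise, aiming for a modest SNR of 5:
print(subs_needed(0.01, 5))    # 250000 subs
# Push the signal twice as far below the noise floor:
print(subs_needed(0.005, 5))   # 1000000 subs -- into the millions
```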
  4. Reducer Myth: Some data.

    So the FSQ 106 is native F5.0 and F3 reduced. The Televue BP101 is native F5.4 and F4.3 reduced. So for a similar aperture you are going from F4.3 to F3. I would say that you are going to end up under-sampling with that pixel size on the FSQ 106 at F3, since it works out to 3.71 arc seconds per pixel, so I would not say the camera is well matched to the focal length. You would want smaller pixels, and so any gains from the faster focal ratio would be cancelled out by the need for smaller pixels. The BP101 at F4.3 is better suited to the pixel size of your camera. So yes, you might image slightly faster at F3, but as I have previously stated there is more to image quality than signal to noise: resolution is also important, and at F3 with that camera you may be on the wrong side of that balance.
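The pixel-scale figures above come from the standard plate-scale formula. A sketch, assuming a pixel size of about 5.7 µm (the camera in the thread is not specified, so that value is a guess chosen to reproduce the quoted 3.71"/px):

```python
def pixel_scale_arcsec(pixel_um, focal_mm):
    """Plate scale in arcsec/pixel: 206.265 * pixel size (um) / focal length (mm)."""
    return 206.265 * pixel_um / focal_mm

# FSQ-106 has a 106 mm aperture, so focal length = 106 * f-ratio.
print(pixel_scale_arcsec(5.7, 106 * 3.0))  # ~3.70 "/px at F3 -- under-sampled
print(pixel_scale_arcsec(5.7, 106 * 5.0))  # ~2.22 "/px at native F5
```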
  5. Reducer Myth: Some data.

    You are always better off with a few long subs than with many, many short subs, because you are also accumulating read noise in addition to signal, and so the delta between the noise-plus-signal and the noise alone is very small. The problem is that in that one sub, the pixel with a value of 1... was that your photon, or was it read noise? Once you get more 1s generated by read noise than by photons, you are really up against it. Nothing is impossible with enough subs, but once you have more noise than signal you are talking unachievable numbers of subs in a practical sense.
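The read-noise penalty above can be put in numbers. A simplified stack-SNR model (shot noise of the signal plus one read-noise hit per sub; sky and dark current ignored, and the source rate and read noise are assumed values for illustration):

```python
import math

def stack_snr(rate, total_s, n_subs, read_noise):
    """SNR of a stack: signal = rate * total time; the noise term combines
    the signal's shot noise with one read-noise contribution per sub."""
    signal = rate * total_s
    return signal / math.sqrt(signal + n_subs * read_noise ** 2)

# 1 hour total on a faint source of 0.05 e-/s, read noise 5 e- (assumed):
print(stack_snr(0.05, 3600, 3, 5.0))    # a few 20-minute subs
print(stack_snr(0.05, 3600, 900, 5.0))  # nine hundred 4-second subs
```

With the same total integration, the many-short-subs stack ends up dominated by accumulated read noise, which is the divergence described in the posts above.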
  6. Reducer Myth: Some data.

    You are still better off with hardware binning, larger pixels or a reducer than you are with software binning. But software binning will help.
  7. Reducer Myth: Some data.

    Yes, that is another way of looking at it, but we were thinking of the situation where the object of interest covers only a small portion of the FOV and you don't lose anything of interest by cropping the image.
  8. Reducer Myth: Some data.

    Forget that the brighter areas look even brighter in the reduced image; those areas are going to have good signal to noise with or without the reducer. The place where the reducer will make the difference is at the margins of detectability. If you want to bring those areas above the noise floor, then yes, a focal reducer will allow you to do it in less time than the same setup without one.
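The time saving for extended objects follows the usual f-ratio rule: per-pixel exposure time to a given SNR scales with the square of the focal ratio. A sketch using the BP101 figures quoted earlier in the thread:

```python
def relative_exposure(f_new, f_old):
    """For extended objects, time to reach a given per-pixel SNR scales
    with the square of the focal ratio (fixed aperture and camera)."""
    return (f_new / f_old) ** 2

# Dropping from native F5.4 to F4.3 with a reducer:
print(relative_exposure(4.3, 5.4))  # ~0.63 -> roughly two thirds of the time
```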
  9. Reducer Myth: Some data.

    Yes, broadly I did agree with what Olly said too, but you can also use a reducer in addition to binning as opposed to instead of binning. Also, software and hardware binning are not the same thing in terms of noise reduction. With software binning you average the noise out in a similar way to stacking; with hardware binning you are actually increasing the raw signal to noise level. As you say, it's 1x read noise vs 4x read noise: software binning will just level the noise out, it won't actually bring out detail that is below the noise floor. So very faint signal close to the noise floor will be revealed one way but not the other.
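The 1x-vs-4x read noise point can be made concrete: software binning combines four pixels that were each read out independently, so four read-noise contributions add in quadrature, whereas hardware binning sums the charge on-chip and reads it once. A sketch (read noise of 5 e- is an assumed value):

```python
import math

def software_binned_sigma(read_noise, n=4):
    """Software 2x2 binning sums four independently read pixels: four
    read-noise contributions add in quadrature, i.e. sqrt(4) = 2x sigma."""
    return math.sqrt(n) * read_noise

def hardware_binned_sigma(read_noise):
    """Hardware 2x2 binning sums charge on-chip and reads once,
    so the super-pixel carries only 1x read noise."""
    return read_noise

print(hardware_binned_sigma(5.0))  # 5.0 e-
print(software_binned_sigma(5.0))  # 10.0 e-
```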
  10. Reducer Myth: Some data.

    This is where the subtle disagreement comes in: downward sampling will not bring out data that is hidden under the noise floor. Yes, they have exactly the same number of photons within the area of interest in a purely optical sense, but once you bring the camera into the picture, the fact remains that those photons are spread over more pixels and so the signal to noise is worse. It's exactly the inverse of the change in signal to noise ratio you would get by hardware binning. You don't get that same gain from software binning (reducing the resolution) as described; although it may make marginal data appear better, it's not the same thing. Image quality in this context is a balance between signal to noise ratio and resolution. If you have great signal to noise and really horrible resolution within the area of interest, then that's no good. If you have horrible signal to noise and fantastic resolution within the area of interest, then that's no good either. All the reducer is doing is allowing you to exchange resolution for additional signal to noise, and some extra field of view if you need it. So in these terms the reducer is not always improving image quality; it's just a tool for exchanging one parameter for another.
  11. Reducer Myth: Some data.

    Yes, the whole thing is almost exactly analogous to the increase in signal to noise gained by binning in a CCD.
  12. Reducer Myth: Some data.

    And that is the 'camp' that I am in. There are a number of interacting factors at play here, but all things being equal, using a focal reducer will always yield a shorter time to achieve a given SNR, though at the expense of image resolution... I would note that if you are oversampling with a small-pixel sensor then you may not actually lose resolution, as the pixel scale would not be the limiting factor. If I follow you, then no, I don't agree with that statement: it would have a higher signal strength in the reduced image but at lower resolution (in most cases).
  13. Reducer Myth: Some data.

    The diagram is a simplification, but in general the diffraction-limited size of the stars will reduce with a larger objective. I was still thinking in terms of Olly's example of 50mm vs 500mm objectives at different F-ratios. There is also bloat to consider, and the star is unlikely to fall perfectly into the center of 4 pixels. For the same aperture at different f-ratios the diffraction-limited size of the stars will be similar or smaller... I think. But even if the size of the imaged star is not changing, the point remains the same: for stars the same rules do not apply, because the size of the sampled / projected image approaches the pixel scale.
  14. Reducer Myth: Some data.

    It's to do with pixel sampling. The pixels cover a finite area, so you get a situation where a star may only be covered by, say, 4 pixels (above the noise floor), meaning that a small reduction in focal length will not change the number of pixels sampling the star. Normally this only applies to stars, and depending on how bloated the star is sometimes not even then; it could also apply to a bright filament within a nebula. Most of the time an object would be considered extended. It's more a counter to people saying they see more stars with a larger aperture at a lower F-ratio... which can happen. Here is a diagram to explain (something of a large approximation / extreme example): the 4 squares are pixels, the circle is the focused star. Either way the star is still sampled by 4 pixels, and so the photons collected from the star will be spread over those 4 pixels. As such, the values of the 4 pixels stay the same and the apparent brightness in the image remains unaffected. But this is not always the case, and not the case for an extended object, where more photons are sampled per pixel if the F-ratio is reduced. You can also get the situation where you have more aperture and a slower f-ratio but the star is still sampled by the same number of pixels, in which case you will get more photons collected per pixel, and that goes counter to the effect observed on an extended object.
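The star-vs-extended distinction above can be sketched numerically (all flux figures are invented for illustration): an extended object's per-pixel signal scales with the inverse square of the focal ratio, while a star that stays sampled by the same 4 pixels divides its photons over those 4 pixels regardless of the reduction.

```python
def extended_flux_per_pixel(flux_per_pixel, f_old, f_new):
    """Extended object: per-pixel signal scales with (f_old / f_new)^2
    when the focal ratio changes at fixed aperture."""
    return flux_per_pixel * (f_old / f_new) ** 2

def star_flux_per_pixel(total_star_flux, n_pixels):
    """Star near the pixel scale: the total flux simply divides over
    however many pixels sample it; if that stays at 4, nothing changes."""
    return total_star_flux / n_pixels

print(extended_flux_per_pixel(100.0, 5.4, 4.3))  # ~158 -- brighter per pixel
print(star_flux_per_pixel(1000.0, 4))            # 250.0 -- same before and after
```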
  15. Reducer Myth: Some data.

    It's a relative digital pixel level as opposed to a brightness; you can make something appear brighter by adjusting the monitor backlight. I am just being pedantic with the use of the word brightness. I think the key is, as you say, that they 'appear' unequally bright.