F ratio and exposure time.


ollypenrice


The trouble with this concept of 'fast' and 'slow' f-ratios is that people take it to mean that a fast scope of the same aperture will image fainter objects than a slow scope, given the same exposure time (or, conversely, that a smaller aperture scope can perform as well as a larger one, in the same exposure time, simply by having a smaller f-ratio). Read noise issues aside, this simply is not true. For a given exposure time, the depth of your telescopic image depends on aperture, irrespective of f-ratio.

NigelM

9 hours ago, dph1nm said:

The trouble with this concept of 'fast' and 'slow' f-ratios is that people take it to mean that a fast scope of the same aperture will image fainter objects than a slow scope, given the same exposure time (or, conversely, that a smaller aperture scope can perform as well as a larger one, in the same exposure time, simply by having a smaller f-ratio). Read noise issues aside, this simply is not true. For a given exposure time, the depth of your telescopic image depends on aperture, irrespective of f-ratio.

NigelM

Exactly. 


11 hours ago, dph1nm said:

The trouble with this concept of 'fast' and 'slow' f-ratios is that people take it to mean that a fast scope of the same aperture will image fainter objects than a slow scope, given the same exposure time (or, conversely, that a smaller aperture scope can perform as well as a larger one, in the same exposure time, simply by having a smaller f-ratio). Read noise issues aside, this simply is not true. For a given exposure time, the depth of your telescopic image depends on aperture, irrespective of f-ratio.

NigelM

Very succinctly put; you describe the popular misconception very well.

It is inescapable that for any desired plate scale in astro-imaging it is aperture that is the overwhelmingly important determinant of object SNR.

 


What baffles me the most is that this topic still causes so much confusion.

Let me give you my take on it:

1. We are interested in SNR, not the signal itself - we have to be clear on this one. It is true even for point sources - stars - just much less noticeable (it does matter, for example, when guiding: you want your star to have a good SNR).

2. As signal per unit area goes up, SNR goes up as well (SNR is not quite this simple - it also depends on read, dark and sky noise - but the general conclusion holds regardless of the exact formula, and we are talking about a single frame here).

3. We are not interested in signal as such but in signal per unit area (when planning a meal you buy food based on how many people need to be fed; same amount of food, more people - they end up hungry).

So we can conclude the following:

- Same aperture, same pixel area, different focal length ('speed' of scope): the fast one will have a larger signal per unit area and therefore a better SNR per single frame. So yes, using a reducer with the same pixel size will give a larger SNR per frame even though no new photons are introduced (same aperture) - the photons that are there are simply spread over a larger surface (more pixels - more people to feed) in the longer-FL case.

- Can an F/15 scope be considered slow in photographic terms? NO! Speed does not depend only on focal length and aperture; it depends on signal per unit area, and with an F/15 scope we can achieve the same signal per unit area as with an F/5 scope, provided we use a larger recording area - a camera with larger pixels.

- Can two systems with totally different specs (different FL, aperture, F-ratio and pixel size) have the same photographic speed? YES! Provided the systems are matched so that the same number of photons hits each pixel, both systems will collect the same signal per pixel over the duration of a single frame and hence the same SNR (well, not exactly the same SNR, since SNR also depends on noise, so the camera with less read and dark noise wins; but if we consider only shot noise under dark skies, then yes - the same SNR).

So the photographic speed of a system depends on four elements:

- Normalized collecting area (aperture area modulated by the telescope's throughput - the percentage of light that actually passes through the scope): direct dependence

- Recording area, in terms of camera pixel size: direct dependence

- Focal length: inverse dependence (longer FL - signal spread more thinly, less signal per unit area)

- QE of the sensor: direct dependence

This deals only with the photographic speed part (how many signal photons each pixel records in a given amount of time), not with the noise part.
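To put those four dependencies together, here is a minimal sketch (my own illustration, with made-up example numbers, not anything from the posts above) of "photographic speed" as relative signal per pixel per unit time:

```python
import math

def photographic_speed(aperture_mm, focal_length_mm, pixel_um, qe=1.0, throughput=1.0):
    # Relative signal per pixel per unit time:
    # QE * throughput * collecting area * (pixel size / focal length)^2
    collecting_area = math.pi * (aperture_mm / 2.0) ** 2
    pixel_angle = pixel_um / focal_length_mm   # proportional to the angular size of one pixel
    return qe * throughput * collecting_area * pixel_angle ** 2

# Same 150 mm aperture: an F/15 system with 9 um pixels matches an F/5 system with 3 um pixels.
slow = photographic_speed(aperture_mm=150, focal_length_mm=2250, pixel_um=9)   # F/15
fast = photographic_speed(aperture_mm=150, focal_length_mm=750,  pixel_um=3)   # F/5
print(slow / fast)   # -> 1.0: same signal per pixel per second
```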

 


21 hours ago, ollypenrice said:

...and then there are the practicalities of making ultra-fast reflecting systems (since they have to be reflecting in order to be ultra fast). You don't have to quadruple the exposure time, though, if you go about it the right way...

[Attached image: Tandem-S.jpg]

Olly

 

€€€€€€€

... which, to me at least, is always the limiting factor. As is that fluffy patch showing in Olly's picture on the right.


Olly, this topic crops up often and I think that it is some of the terminology and cross-over between night time astronomy and daytime photography that causes some of the confusion. In daytime photography, focal length stays the same when the aperture changes.

This is how I see it...take with a pinch of salt.

Quote

When you say "The pixels in the reducer scenario will 'fill' faster", that is exactly a reduction in exposure.

I agree with this (with the f-ratio myth caveat): the reduced image (the physical image is also reduced by the same ratio) will have a higher S/N ratio, i.e. the same photon count projected onto fewer pixels fills up the wells faster. An ant and a magnifying glass will attest to this - the magnifying glass being the focal reducer. As Craig Stark states, it matters most when we are close to the noise floor, trying to pick out faint signal.

BUT reducing doesn't come for free...A focal reducer also reduces the imaging circle of the OTA. So if you have a small enough sensor you are okay, but if you have a large sensor the vignetting will become more of an issue.

Will the captured image be better with a focal reducer than without? Depends on what you are trying to capture. A tiny object in a large FOV will become even smaller...but the overall image will certainly be cleaner (if your sensor is still smaller than the imaging circle after reduction that is).

With respect to the first two scenarios in your first picture I agree, aperture rules... I image with a 500mm focal length telephoto* lens with a 125mm aperture. If someone else has a 500mm focal length lens and the aperture is only 90mm, then I collect roughly twice as many photons, and to get the equivalent S/N ratio I only have to expose for about half the time (all other things being equal).

*A telephoto lens contains a telephoto lens group, which makes the lens physically shorter than its focal length - in my case, shorter than 500mm.
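Just to sanity-check those numbers (my own back-of-the-envelope arithmetic, not from the post above): at equal focal length the photon rate scales with the ratio of aperture areas.

```python
photon_ratio = (125 / 90) ** 2      # ratio of aperture areas, ~1.93
exposure_ratio = 1 / photon_ratio   # same total photon count -> roughly half the exposure time
print(round(photon_ratio, 2), round(exposure_ratio, 2))   # 1.93 0.52
```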

 

It is interesting reading the "experts'" opinions on this... they agree to a point, then disagree, then something crops up that puts a spanner in the works. The thing I believe is what Craig Stark states: basically, all other things being equal, aperture rules.
http://www.stark-labs.com/help/blog/files/FratioAperture.php
http://www.stanmooreastro.com/f_ratio_myth.htm


Well, I think I am about to put my money where my mouth is and buy an F10 Meade ACF from a friend/guest. It does come with an AP focal reducer but most of the projected targets will be small so I'm thinking of working at native and taking as long as it takes - or downsizing the final result.

In thinking about this discussion it strikes me that I don't know what the graphics software actually does when it presents an image at 50% or 33%, etc. Time for more homework.

Olly


1 hour ago, ollypenrice said:

Well, I think I am about to put my money where my mouth is and buy an F10 Meade ACF from a friend/guest. It does come with an AP focal reducer but most of the projected targets will be small so I'm thinking of working at native and taking as long as it takes - or downsizing the final result.

In thinking about this discussion it strikes me that I don't know what the graphics software actually does when it presents an image at 50% or 33%, etc. Time for more homework.

Olly

Do you mean resizing the image to smaller dimensions? It can be done in two ways: resampling and binning.

Binning is straightforward and very similar to what happens with hardware binning: you take neighbouring pixels (2x2, 3x3, ...) and add/average them - much like stacking, it improves the SNR.

With resampling there are different algorithms, but they all do something akin to this: take the pixel values and interpret them as point samples at the centre of each pixel position, then interpolate with a function that fits those samples (think of statistics: a bunch of values and a function that best fits them). The function can be linear, quadratic or cubic - hence bilinear, biquadratic or bicubic scaling ("bi" because it is done in both the x and y axes). Then you take a smaller (empty) image, calculate the centres of its pixels (convert them into "x" values for the interpolating function) and read off the interpolating function at those positions - these are the "y" values, i.e. the pixel values of the final picture. I think it also boosts SNR, but I have no idea by how much (it probably depends both on the interpolating function and on the "smoothness" of the signal itself).
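For anyone who wants to experiment, here is a rough numpy/scipy sketch of both approaches on a synthetic noisy frame (just an illustration; dedicated astro software uses more refined resampling than scipy's `zoom`):

```python
import numpy as np
from scipy.ndimage import zoom   # spline interpolation; order=3 is roughly bicubic

# A fake frame: flat signal of 100 with Gaussian noise of sigma 10.
img = np.random.normal(loc=100.0, scale=10.0, size=(1024, 1024))

# 2x2 software binning: average each 2x2 block into one output pixel.
binned = img.reshape(512, 2, 512, 2).mean(axis=(1, 3))

# Resampling to half size with cubic interpolation.
resampled = zoom(img, 0.5, order=3)

# Averaging 4 independent samples cuts random noise by sqrt(4) = 2.
print(img.std(), binned.std(), resampled.std())
```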


32 minutes ago, vlaiv said:

Do you mean resizing the image to smaller dimensions? It can be done in two ways: resampling and binning.

Binning is straightforward and very similar to what happens with hardware binning: you take neighbouring pixels (2x2, 3x3, ...) and add/average them - much like stacking, it improves the SNR.

With resampling there are different algorithms, but they all do something akin to this: take the pixel values and interpret them as point samples at the centre of each pixel position, then interpolate with a function that fits those samples (think of statistics: a bunch of values and a function that best fits them). The function can be linear, quadratic or cubic - hence bilinear, biquadratic or bicubic scaling ("bi" because it is done in both the x and y axes). Then you take a smaller (empty) image, calculate the centres of its pixels (convert them into "x" values for the interpolating function) and read off the interpolating function at those positions - these are the "y" values, i.e. the pixel values of the final picture. I think it also boosts SNR, but I have no idea by how much (it probably depends both on the interpolating function and on the "smoothness" of the signal itself).

Good information. Thanks Vlaiv. Resampling certainly seems to improve SNR in the sense that, very obviously, we can produce an image which looks fine at 50% but which will not look acceptable at 100% without a lot more data.

I think it must boost SNR because noise is likely to take the form of an errant value in a single pixel. 

Olly


I am sure that the right kind of resampling, of which there are many, does improve the SNR. If the noise in each pixel is random, which it probably is, and there is some correlation between adjacent pixels, which there usually is (think of a nebulous area of very similar colour), then summing/averaging those adjacent 2x2 pixels into a single pixel should leave less noise in the final single pixel (just as stacking does).

Quite often I will resize my DSLR images to about 50%, which certainly helps to make things look cleaner/smoother for this very reason. You have to be careful, of course, because it also softens the image - that is where the different types of resizing/resampling algorithms come into play.


On 2016-04-02 at 13:27, vlaiv said:

What baffles me the most is that this topic still causes so much confusion.

Let me give you my take on it:

1. We are interested in SNR, not the signal itself - we have to be clear on this one. It is true even for point sources - stars - just much less noticeable (it does matter, for example, when guiding: you want your star to have a good SNR).

2. As signal per unit area goes up, SNR goes up as well (SNR is not quite this simple - it also depends on read, dark and sky noise - but the general conclusion holds regardless of the exact formula, and we are talking about a single frame here).

3. We are not interested in signal as such but in signal per unit area (when planning a meal you buy food based on how many people need to be fed; same amount of food, more people - they end up hungry).

So we can conclude the following:

- Same aperture, same pixel area, different focal length ('speed' of scope): the fast one will have a larger signal per unit area and therefore a better SNR per single frame. So yes, using a reducer with the same pixel size will give a larger SNR per frame even though no new photons are introduced (same aperture) - the photons that are there are simply spread over a larger surface (more pixels - more people to feed) in the longer-FL case.

- Can an F/15 scope be considered slow in photographic terms? NO! Speed does not depend only on focal length and aperture; it depends on signal per unit area, and with an F/15 scope we can achieve the same signal per unit area as with an F/5 scope, provided we use a larger recording area - a camera with larger pixels.

- Can two systems with totally different specs (different FL, aperture, F-ratio and pixel size) have the same photographic speed? YES! Provided the systems are matched so that the same number of photons hits each pixel, both systems will collect the same signal per pixel over the duration of a single frame and hence the same SNR (well, not exactly the same SNR, since SNR also depends on noise, so the camera with less read and dark noise wins; but if we consider only shot noise under dark skies, then yes - the same SNR).

So the photographic speed of a system depends on four elements:

- Normalized collecting area (aperture area modulated by the telescope's throughput - the percentage of light that actually passes through the scope): direct dependence

- Recording area, in terms of camera pixel size: direct dependence

- Focal length: inverse dependence (longer FL - signal spread more thinly, less signal per unit area)

- QE of the sensor: direct dependence

This deals only with the photographic speed part (how many signal photons each pixel records in a given amount of time), not with the noise part.

 

I believe this is covered in Moore's text (Ch. 1 in Gendler, Lessons from the Masters), and that it is why he goes the extra mile of normalizing pixel size.

As for noise being reduced by 2x2 binning: I understand that this works for small-scale noise (no correlation between individual pixels). But what about large-scale noise, from which CMOS cameras (DSLRs) often suffer?

 

Wim (aka Ignoramus)


4 hours ago, wimvb said:

I believe this is covered in Moore's text (Ch. 1 in Gendler, Lessons from the Masters), and that it is why he goes the extra mile of normalizing pixel size.

As for noise being reduced by 2x2 binning: I understand that this works for small-scale noise (no correlation between individual pixels). But what about large-scale noise, from which CMOS cameras (DSLRs) often suffer?

 

Wim (aka Ignoramus)

I'm not sure what large-scale noise is. As I see it (I might be totally wrong on this), noise is any unwanted random signal, and any type of noise is pixel-related since the signal is sampled at pixels. There might be some correlation in the noise across the image, but then it is not quite what we perceive as noise - random, unwanted fluctuations in signal level. It might still be unwanted signal, but because it is not random it can be removed, or at least the non-random part of it can be deduced and removed - what remains is true random noise. Think of amp glow: the average amp glow signal can be removed, but by its nature the regions affected by amp glow will have higher levels of residual noise (the random component).

One more example (and I now believe this is the one you were thinking of) is a per-column random bias signal (not like a normal bias signal, but one that changes per column every time the chip is read). I think this is a particularly nasty type of signal, though I believe that given enough information it can be removed from the image (one might craft an algorithm for that particular case). You are right, though - binning will not lower that type of noise, but stacking will.


22 hours ago, vlaiv said:

I'm not sure what large-scale noise is. As I see it (I might be totally wrong on this), noise is any unwanted random signal, and any type of noise is pixel-related since the signal is sampled at pixels. There might be some correlation in the noise across the image, but then it is not quite what we perceive as noise - random, unwanted fluctuations in signal level. It might still be unwanted signal, but because it is not random it can be removed, or at least the non-random part of it can be deduced and removed - what remains is true random noise. Think of amp glow: the average amp glow signal can be removed, but by its nature the regions affected by amp glow will have higher levels of residual noise (the random component).

One more example (and I now believe this is the one you were thinking of) is a per-column random bias signal (not like a normal bias signal, but one that changes per column every time the chip is read). I think this is a particularly nasty type of signal, though I believe that given enough information it can be removed from the image (one might craft an algorithm for that particular case). You are right, though - binning will not lower that type of noise, but stacking will.

I would expect that, for instance, any on-chip electronics shared between a group of pixels can introduce noise over that group of pixels. Also, if there are local variations in pixel characteristics, this may affect the chip's noise characteristics locally.

In CMOS there is of course what Tony Hallas refers to as colour mottle. In this case, noise is not just a variation of the signal in time, but also in space (location).

The discussion so far has been about temporal noise, which can be taken care of by integrating/averaging over time. Spatial noise can be taken care of by averaging over location - Tony Hallas's technique of dithering for DSLRs is one example.
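A toy numpy sketch of that idea (my own illustration with a made-up fixed pattern; real dithering nudges the scope a few pixels between frames): once the frames are registered on the sky, the sensor's fixed pattern lands in a different place each time, so averaging the stack smooths it out.

```python
import numpy as np

rng = np.random.default_rng(0)
pattern = rng.normal(0, 5, size=(100, 100))       # fixed-pattern ("spatial") noise of the sensor
n_frames, shift_max = 20, 8

stack = []
for _ in range(n_frames):
    dx, dy = rng.integers(0, shift_max, size=2)   # random dither offset for this frame
    frame = 100.0 + pattern                        # flat sky signal + the same sensor pattern
    # Register the frame back onto the sky: the sensor pattern moves, the (flat) sky does not.
    stack.append(np.roll(frame, shift=(-dy, -dx), axis=(0, 1)))

result = np.mean(stack, axis=0)
print(pattern.std(), (result - 100.0).std())      # residual pattern is much weaker after dithering
```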

BTW, I think we're moving off topic here.


  • 1 month later...

Speaking as a novice, here's my understanding: a focal reducer will not decrease exposure time, only increase the field of view, up to the maximum field of view of the telescope before vignetting. Does that sound right?


Almost.

If you are talking about a small object that will fit on the chip without the reducer, then the reducer brings in no new object photons. All it does is put them on fewer pixels, which 'fill' faster. You can get pretty much the same effect by resampling the unreduced image down in size. (So nearly the same effect that only someone on an argumentative forum elsewhere would care. :D )

If the reducer brings in another object that you actually want in your image, then that total image will reach the signal-to-noise ratio the imager in question considers acceptable in a time which reduces according to the daytime photographer's F-ratio rule.

A specific example:

Unreduced, I can just fit M42 on the chip at F5. 

Reduced, I can have M42 and the Running Man at F4.

I want the same signal-to-noise ratio in each case. Unreduced, I take 25 minutes at F5. (5x5=25.) Reduced to F4, I can get the same S/N ratio in 4x4=16 minutes. BUT that is for the whole image. That does NOT mean I can fit the reducer and get the same M42 in 16 minutes as I got unreduced in 25 minutes. If I shot M42 reduced in 16 minutes, cropped off the Running Man and compared it with a downsampled version without the reducer taken in 16 minutes, the difference would not be worth getting excited about.
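The scaling used here is the daytime photographer's rule that exposure time goes as the square of the f-number; a quick sketch of the arithmetic (assuming the same camera and a shot-noise-limited extended target):

```python
def scaled_exposure(t_old, f_old, f_new):
    # Same per-pixel SNR on an extended target: exposure time scales with (f-number)^2.
    return t_old * (f_new / f_old) ** 2

print(round(scaled_exposure(25, 5, 4)))   # 25 min at F/5 -> about 16 min at F/4
```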

Olly


10 hours ago, cuivenion said:

Thanks, that clears things up. As an addition, though: does this mean that the reducer would help with exposure times for live viewing, since software resampling would not be available at that point?

Yes, the reducer would give you a smaller, brighter target on the live view and the FOV would be wider.

Olly

