
Long focal length for deep sky astrophotography



Thanks - that's what my brain was telling me, but then I started to doubt when you said 

Quote

I wonder why people insist on "speed" of the telescope as being crucial thing when imaging galaxies?

This isn't a criticism by the way! It's a genuine insight into why I went to a polytechnic. 


8 minutes ago, osbourne one-nil said:

This isn't a criticism by the way! It's a genuine insight into why I went to a polytechnic. 

Ah, ok.

This is because 8" F/8 telescope working at 1"/px will produce the same SNR image in the same imaging time as 8" F/4 telescope working at 1"/px.

Both gather the same amount of light and spread it over the same amount of "pixel surface" - so resulting signal and thus SNR is the same.

Fact that one is F/8 and one is F/4 is irrelevant - hence, speed of the scope is irrelevant once you set your working resolution.

Makes sense?
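A quick numeric sketch of this (assumed values only: 203 mm for the 8" aperture, and pixel sizes chosen purely to hit 1"/px, not any particular camera):

```python
# Two 8" (203 mm) scopes, both sampled at 1"/px.
# Plate scale ("/px) = 206.265 * pixel_um / focal_length_mm.
APERTURE_MM = 203.0

for f_ratio in (8, 4):
    fl_mm = APERTURE_MM * f_ratio
    pixel_um = 1.0 * fl_mm / 206.265   # pixel size (um) that yields 1"/px
    print(f"f/{f_ratio}: FL = {fl_mm:.0f} mm, pixel = {pixel_um:.1f} um")

# Both scopes have the same 203 mm aperture and each pixel sees the same
# 1 sq. arcsec of sky, so the photon rate per pixel - and hence SNR per
# unit time - is identical; only the pixel size needed to get there differs.
```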


Ah - so it's purely the aperture which improves things, not the f ratio. So my 8" SCT should be as effective in theory? I guess it then comes down to collimation and optical quality?

Yes - that makes perfect sense. You've done well; it's not easy making me understand something!


4 hours ago, vlaiv said:

I wonder why people insist on "speed" of the telescope as being crucial thing when imaging galaxies?

Not sure if this was aimed at my post, but perhaps I should have clarified: I meant a faster scope at the same pixel scale (not the quoted f-ratio). A 10" Newtonian will be quicker than a 115 mm refractor at the same pixel scale.


10 minutes ago, osbourne one-nil said:

Ah - so it's purely the aperture which improves things, not the f ratio. So my 8" SCT should be as effective in theory? I guess it then comes down to collimation and optical quality?

Yes - that makes perfect sense. You've done well; it's not easy making me understand something!

Not only the aperture - it is aperture in combination with sampling rate, or "how much of the sky is covered by a single pixel".

The actual formula would be: speed of the setup = aperture_area × sky_area_per_pixel.

If you think about it, that is what the "speed" of the telescope expresses, in a sense: a "faster" telescope = shorter focal length = more sky covered by a single pixel (if we keep the physical pixel size the same) = a faster setup, because of the sky-area part of the above equation.

However - we don't need to alter the f-ratio of the system in order to cover more of the sky - we can simply use larger pixels (in the physical sense), either by using a camera with larger pixels in microns or by binning our existing pixels.
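A minimal sketch of that metric (apertures and focal lengths here are assumptions for illustration, roughly the 10" Newtonian and 115 mm refractor mentioned earlier, not anyone's actual gear; only ratios between setups are meaningful):

```python
import math

def speed(aperture_mm, focal_length_mm, pixel_um):
    scale = 206.265 * pixel_um / focal_length_mm        # "/px
    aperture_area = math.pi * (aperture_mm / 2) ** 2    # mm^2
    return aperture_area * scale ** 2                   # aperture area x sky area per px

# 10" f/4.7 Newtonian vs 115 mm f/7 refractor, pixels sized so that both
# work at the same assumed 1.2"/px scale:
newt = speed(254, 1200, 1.2 * 1200 / 206.265)
frac = speed(115, 805, 1.2 * 805 / 206.265)
print(f"speed ratio: {newt / frac:.1f}x")   # ~4.9x = (254/115)^2

# At equal pixel scale the sky-area factor cancels, so the speed ratio
# reduces to the ratio of aperture areas - pure aperture, no f-ratio.
```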


Just now, Clarkey said:

Not sure if this was aimed at my post, but perhaps I should have clarified: I meant a faster scope at the same pixel scale (not the quoted f-ratio). A 10" Newtonian will be quicker than a 115 mm refractor at the same pixel scale.

I don't think it was aimed at anyone in particular - rather at a general notion that often repeats: the advice is "get a large Newtonian" rather than "get a large-aperture telescope of any design type that suits you best". Both will have the same speed at the same pixel scale, but other types might be, and often are, more manageable than a large Newtonian in several respects.

First, they can often be used without corrective optics, which (in order to correct over a larger field) often reduce the Strehl ratio of the telescope at the center. Second, they can be of a compact design, which is of course easier to manage and mount.

There are some drawbacks - like ease of collimation (which I think might be debatable) and soundness of construction - but that is a different type of discussion: bad vs good telescope execution.

Newtonians also have one major thing going for them, and that is price - they are often cheaper.

There are some other, smaller things in favor of folded designs - better baffling, slower to dew up, easier to produce flat fields (less chance of light leak) and so on - but again, those might be better attributed to bad vs good telescope execution rather than inherent design type.


5 hours ago, vlaiv said:

Nope.

It improves the SNR of the recorded image by the bin factor - regardless of how read noise is treated. Once you have an image - no matter how it was acquired, CMOS or CCD - software binning will improve its SNR by the bin factor (bin x2 and you get an x2 improvement, x3 an x3 improvement, and so on).

Binning is the same underlying procedure as stacking - which is in turn the same underlying procedure as longer integration.

You effectively trade spatial resolution for integration time when you bin.

 

So just to clarify, I was talking about CMOS binning in camera vs binning after downloading the subs.  The exact same process is done in software as in camera, whether it's averaging or summation.  As a matter of fact, reducing an image during the integration process can achieve better results, as you can apply more sophisticated algorithms than simple addition or averaging.

 

I wasn't talking about the benefits of CMOS binning, which are indeed real, although not as good as CCD binning because of the additional read noise.

 

And read noise, and swamping read noise with signal.  It's important to identify what signal actually swamps the read noise in a particular sub.  We all do the histogram thing, looking at a single sub.  But that really doesn't tell the whole story.  The lowest signal we may eventually try to extract may well still be in the read-noise regime even if the histogram peak is well above it.  And in that case CMOS binning, vs just using a larger pixel or faster optics, becomes a disadvantage.

An ideal answer, oddly enough, is fast optics with a large-full-well camera, in which case you can take full advantage of fast optics, reasonably low read noise and a full well that allows you to bring very faint signal like tidal streams well above the read noise.  I've seen some impressive examples of this taken with the ASI2400, which has those specs.  Other than that, I would try to match the focal length of the scope to the camera, given slightly better than Nyquist sampling, and not bin but rather resize in post-processing if needed.  From my experience with the ASI2600, that has always worked better than using the in-camera binning.


4 minutes ago, dciobota said:

And read noise, and swamping read noise with signal.  It's important to identify what signal actually swamps the read noise in a particular sub.

Signal does not need to swamp the read noise at all - and when imaging faint stuff it almost never does, at least not the signal of interest - the target signal.

What is important when we talk about read noise is to swamp it with some other type of noise. Of the basic types of noise present when imaging, only read noise is "per exposure". All the others are time dependent - dark current noise, light pollution noise and target shot noise.

Since the target signal is often weaker than read noise in a single exposure, it is not our candidate. Neither is thermal noise (dark current noise). The only real candidate is light pollution noise.

Noise adds like linearly independent vectors, and if one is much bigger than the other, the result will be very close to the larger one (think of a right-angled triangle with one particularly short side - the hypotenuse is almost the same length as the longer side).

8 minutes ago, dciobota said:

We all do the histogram thing looking at a single sub.

The histogram is just a distraction - it has almost zero value in astronomical imaging. The main thing it can tell us is whether there is some clipping - but we can see that from the stats as well, so there's no real reason to use the histogram.

When you want to calculate the optimum exposure length, simply measure the background signal per exposure, derive the LP noise from it (the square root of the sky signal) and compare that with your read noise. The LP noise should be 3-5 times larger than the read noise. Any exposure longer than that just brings diminishing returns (there is only a 2% difference in SNR between an x5 swamp factor and a single long exposure - and humans can't tell that difference in SNR by eye).
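A minimal sketch of that calculation (the 2 e-/px/s sky rate and 1.5 e- read noise are assumed numbers, not any specific camera or sky):

```python
import math

sky_rate = 2.0        # e-/px/s, measured background signal rate (assumption)
read_noise = 1.5      # e-, camera spec (assumption)
swamp = 5             # target: LP noise >= 5 x read noise

# LP noise = sqrt(sky signal) = sqrt(sky_rate * t); solve for t:
t_min = (swamp * read_noise) ** 2 / sky_rate
print(f"minimum sub length: {t_min:.0f} s")    # ~28 s with these numbers

# Residual noise penalty vs an ideal zero-read-noise camera:
penalty = math.hypot(1, 1 / swamp) - 1
print(f"read-noise penalty: {penalty:.1%}")    # ~2.0%
```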

11 minutes ago, dciobota said:

The lowest signal we may eventually try to extract may well still be in the read-noise regime even if the histogram peak is well above it.  And in that case CMOS binning, vs just using a larger pixel or faster optics, becomes a disadvantage.

This makes no sense - as I've shown above with the example of a single sub. In that sub the read noise is many times larger than the signal in the tidal tails - yet binning works just fine.

The SNR impact of read noise depends on the other noise sources, not on the signal we are trying to capture. As long as we keep it (the read noise, that is) below some fraction of some other noise source, it is irrelevant regardless of how low our target signal is.

Let me ask you a question like this:

Say you have a fast scope with large pixels and two different cameras, and the cameras differ only by read noise.

The first camera has 1.5e of read noise, and the second camera has 3e of read noise.

Under which circumstances will you actually see a decrease in SNR between these two setups?

15 minutes ago, dciobota said:

An ideal answer, oddly enough, is fast optics with a large-full-well camera, in which case you can take full advantage of fast optics, reasonably low read noise and a full well that allows you to bring very faint signal like tidal streams well above the read noise.

Full well size has nothing to do with bringing signal below the read-noise level into view. The full well size of a camera is largely inconsequential in astrophotography.


6 hours ago, vlaiv said:

I wonder why people insist on "speed" of the telescope as being crucial thing when imaging galaxies?

Most galaxies are very small in angular size - maybe a dozen arc minutes at most. That is about 700px or less across the image of the galaxy if one samples at the highest practical sampling rate for amateur setups, which is 1"/px.

Now take any modern sensor with more than 12MP - that is 4000x3000px or more - and you have 4000px/700px = at least x5 more pixels than you actually need. You can bin x5 and still capture the galaxy in its entirety, plus some surrounding space.

Btw - bin x5 will make an F/15 scope work as if it were an F/3 scope - so what is the point in going for an F/4 Newtonian when you can comfortably use a compact Cass type - be that SCT, MCT, RC or CC - and produce an excellent galaxy image.

 

My take on this would be: get the largest aperture that you can afford and comfortably mount and use, and adjust your working resolution to the range of 1-1.2"/px for the best small-galaxy captures.

If the slower scopes you mention were cheaper or a lot easier to use then yes, you could make an argument for them. You'll get the same image out of a budget F4 Newt, but have the flexibility of a wider field of view; there are always neighbouring galaxies you can capture at the same time.


3 hours ago, vlaiv said:

Signal does not need to swamp the read noise at all - and when imaging faint stuff it almost never does, at least not the signal of interest - the target signal. [...]

I'm sorry, but what you're saying goes contrary to actual field experience.  Just showing that binned image really proves nothing but a brightened image.  The only way you would make a convincing argument that way (and what exactly are you arguing here, noise or overall image brightness?) is to show the data behind that image, especially the pixel read noise as it relates to the binned pixel noise.  That image shows you nothing really, especially at that tiny scale.  We argue data here, not pics.

Your explanation of linear vectors is only valid if noise is a non-random, deterministic value, which it is not.  It's more a Gaussian-type distribution, therefore it's not strictly additive.  You can superimpose two Gaussian distributions and derive a new mean if you want, but on a shot-by-shot basis you'll see completely random values.  Yes, light pollution can dominate, but if you look at some of those examples, especially narrowband ones, light pollution is not the issue.  The photon flux itself, which is a random event, can throw a photon onto the sensor in one sub and none in the next, yet the read noise will contribute random values every time, within the sensor read-noise error.  You don't try to swamp with LP, that's nonsense.  It even contradicts what you just said about the histogram, where you mention not blowing out too many stars.  That sometimes implies that at a dark site your LP signal for one sub will be close to zero, or actually well below your swamp criteria.

And let's not forget, the swamp factor is just a rule of thumb.  It's not a number set in stone, just a "reasonable" number arrived at that gives good results.  It's not a binary decision point by any means.

Not sure what you're driving at with your hypothetical question about circumstances, but if the first camera has a full well of 15000 electrons at unity gain and the second 60000 electrons at unity gain, then in a single sub I can definitely expose the second camera twice as long and obtain a cleaner image, because the actual dynamic range is twice as much, therefore the signal even in the faint areas is twice as clean.  There's no magic about that.  If you take two exposures with the first camera to equal the same total integration time, you've introduced two read-noise errors, so your total dynamic range for those two frames is still less than the single exposure from the second camera.  I can't remember the exact formula for the combined read error - it's a square-root term, I believe, something like sqrt(2) x 1.5 into 15000 vs 3 into 60000.  The dynamic range of the second is still greater.


6 hours ago, vlaiv said:

This is because an 8" f/8 telescope working at 1"/px will produce the same-SNR image in the same imaging time as an 8" f/4 telescope working at 1"/px. [...]

So this is again something that sounds good in theory but is not of real practical use.  The pixel sizes in microns would need to be different for the two scopes to achieve the same equivalent FOVs, binning out of the equation.  So it will totally depend on the pixel read noise of the two cameras.  It's rare to find something like that in real life to compare directly.  It's often the other way around: choosing the fastest optic for a specific camera that will give you the desired image scale.  This is why speed matters in that respect.  You want to minimize total imaging time for a specific image scale.  Binning becomes a compromise because there are only so many cameras and pixel sizes on the market.


10 hours ago, osbourne one-nil said:

Ah - so it's purely the aperture which improves things, not the f ratio. So my 8" SCT should be as effective in theory? I guess it then comes down to collimation and optical quality?

Yes - that makes perfect sense. You've done well; it's not easy making me understand something!

The active ingredient in 'F ratio' is aperture. That's why, in regular photography, aperture is usually referred to as 'F stop.'  This is OK when neither the focal length nor the pixel size is being changed while the aperture is being changed by opening or closing the diaphragm. There is no 'F ratio myth' here - but there is no diaphragm on a telescope!

The F ratio myth creeps in when an astrophotographer becomes fixated on F ratio while 1) comparing scopes of different focal length or 2) ignoring pixel size.

When comparing two setups of the same focal length and with the same camera you can rely on F ratio to indicate the speed of capture because aperture is the only variable.

It's probably worth adding that resolution (of fine detail) and signal strength are, in theory, different entities. In real-world imaging, though, they have an interconnected relationship. Few of us expose for long enough to get the absolute best out of our systems. Many fine details in galaxies are quite faint. The more signal we have, the more we can sharpen - and that's where signal strength impacts the final resolution of detail and, if we are honest, perhaps the final impression of detail. This is where a faster system will be a benefit.  (Here I'm using Vlaiv's definition of a faster system, not just a faster F ratio.)

Olly


5 hours ago, dciobota said:

if the first camera has a full well of 15000 electrons at unity gain and the second 60000 electrons at unity gain, then in a single sub I can definitely expose the second camera twice as long and obtain a cleaner image, because the actual dynamic range is twice as much, therefore the signal even in the faint areas is twice as clean

I know this is just an example, but can you specify which cameras offer such FWDs at unity gain? I only look at ZWO cameras, and their graphs and marketing with regard to FWD are misleading; at unity gain those high marketing figures quickly drop to below 10k.


7 hours ago, dciobota said:

I'm sorry, but what you're saying goes contrary to actual field experience.  Just showing that binned image really proves nothing but a brightened image.  The only way you would make a convincing argument that way (and what exactly are you arguing here, noise or overall image brightness?) is to show the data behind that image, especially the pixel read noise as it relates to the binned pixel noise.  That image shows you nothing really, especially at that tiny scale.  We argue data here, not pics.

The brightness of the image does not depend on the captured data but rather on the white point you use and the brightness of the display device you show the image on.

The important metric is the signal-to-noise ratio - whether something can be detected above the noise level. I've shown that, while in a single unbinned image there is no way to detect the tidal tails, as they are below the noise floor (you need an SNR of about 5 for reliable detection), they can easily be seen in the binned image because the SNR is higher.

7 hours ago, dciobota said:

Your explanation of linear vectors is only valid if noise is a non-random, deterministic value, which it is not.

Quite the opposite - they need to be truly random, with zero correlation, in order to add like that. If they were fully deterministic, they would add like ordinary numbers do - like signal does - plain old addition.

7 hours ago, dciobota said:

You don't try to swamp with LP, that's nonsense.  It even contradicts what you just said about the histogram, where you mention not blowing out too many stars.  That sometimes implies that at a dark site your LP signal for one sub will be close to zero, or actually well below your swamp criteria.

No comment

 


5 hours ago, Elp said:

I know this is just an example, but can you specify which cameras offer such FWDs at unity gain? I only look at ZWO cameras, and their graphs and marketing with regard to FWD are misleading; at unity gain those high marketing figures quickly drop to below 10k.

They're just theoretical examples, of course.  He's always talking theory and asked a theoretical question, so I gave a theoretical answer.  This is why I always point out real-life field experience, especially given the limited choices we have in cameras today as far as pixel sizes, read noise, etc.  It's fun to play with numbers, but we choose based on what actually works.  This is why fast optics are still preferred for imaging vs binning, if the choice is available and affordable.

So, mulling this over a bit - maybe it's a matter of perspective on this subject.  It is indeed a matter of aperture, yes.  But we're stuck with cameras of certain pixel sizes, right?  So often, I think, we take the camera first, choose an image scale or scales, then build the optics around that.  So for that pixel size and image scale, which implies a certain focal length, we see larger apertures as faster optical systems, right?  From a smaller-aperture refractor at say f/7 to a fast CDK at f/3, or something like that.  Indeed, that means the faster system has the larger aperture.  So it's basically saying the same thing, but it's more natural, for me at least, to think of a faster system in terms of faster light gathering... it's how the term came about in the first place.  Makes sense?


3 hours ago, vlaiv said:

The brightness of the image does not depend on the captured data but rather on the white point you use and the brightness of the display device you show the image on. [...]

 

In the case of your comment on brightness, you are saying exactly what I said.  My comment was on your pictures used as proof, which show nothing but a brightness change - which, as you point out, is not useful for making a comparison.  That's all I was saying.

As to the vector additions, I honestly don't know what kind of point you're trying to make.  Uncorrelated vectors have covariances of zero, and LP has nothing to do with signal or read noise, which is why they are uncorrelated.  Increasing LP does nothing but decrease dynamic range by overwhelming or swamping both signal and read noise.  This is precisely why it is harder to bring faint signal out when imaging in, say, London than under a rural dark sky.  The same math applies to the reduced skyglow of a rural area as well.  All those forms of noise simply mask the signal, and the worst part of LP is that it increases with time the same as signal, whereas read noise is constant - which is why you increase exposure time to swamp it with signal, not LP.

 


4 minutes ago, dciobota said:

As to the vector additions, I honestly don't know what kind of point you're trying to make. [...]

LP signal is made out of photons, right?

Those photons always behave the same, regardless of whether they come from the target or from LP - which means they have an associated noise: Poisson noise.

The LP signal itself is very easy to remove. It is usually either constant or in the form of a linear gradient (in some cases it can be a higher-order polynomial, but in any case easy to model and remove). What you can't remove is the Poisson shot noise tied to that LP signal, and it is this noise that adds to the total noise in the image.

Since read noise and this LP shot noise have zero correlation, they add like linearly independent vectors - as the square root of the sum of squares.

We can easily calculate the increase in total noise for a given "swamp factor" - how many times larger the LP noise is than the read noise.

Say we have some read noise X and LP noise x5 larger. The total noise will be: sqrt(X^2 + (5X)^2) = sqrt(X^2 + 25X^2) = sqrt(26X^2) = X * sqrt(26) = 5.099X

In other words - having LP noise x5 larger than the read noise is the same as having LP noise 5.099/5 = 1.0198 times larger - i.e. ~2% more - and no read noise at all.

By choosing a suitable sub duration we can therefore ignore the read noise of the camera, treating it as if we simply had 2% more light pollution noise.

That 2% might seem like a significant amount - but in reality it is far from it.

Over the course of the evening, as the target changes position, its apparent magnitude can easily change by 0.05 (atmospheric attenuation). That translates into a ~5% change in signal level and consequently more than a 2% change in SNR - so a 2% change happens just because the earth spins :D and is not something you'll notice in the image.
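A quick numerical check of that quadrature claim (sigmas assumed as X = 1 and 5X = 5; the Poisson LP noise is near-Gaussian at these signal levels):

```python
import numpy as np

rng = np.random.default_rng(0)
# read noise (sigma = 1) plus uncorrelated LP shot noise (sigma = 5):
total = rng.normal(0, 1.0, 1_000_000) + rng.normal(0, 5.0, 1_000_000)
print(np.std(total))   # ~5.099 = sqrt(1^2 + 5^2), i.e. ~2% above 5
```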

Hear me out - what I'm saying is not my personal opinion. I'm just listing verifiable facts.

People often mention this video, and I think it would benefit you to watch it as well:

https://www.youtube.com/watch?v=3RH93UvP358

 


6 hours ago, vlaiv said:

LP signal is made out of photons, right? [...]

 

LP noise is NOT constant, just like object signal is not.  I think you are confusing it with the constant signal source that LP comes from.  You yourself mentioned it has a Poisson distribution.  So for a given sub - the exact same exposure - a pixel does NOT record the same signal from an LP source or object.  It averages out, sure.  The exact same thing goes for read noise: it averages over time.  The difference, however, is that read noise is constant for a given exposure length; the other ones are not.

If you subtract LP from an image you're back where you started originally, with read noise and signal.  LP is not some sort of magic eraser.  Are you really saying LP helps imaging?  Really?

So I watched the video, and while I disagree with the way he presents certain things, I think I see your confusion, maybe.  He talks about read noise as a fraction of image brightness, and indeed, the brighter the image the smaller the read-noise fraction.  But this is the total image.  The SNR of interest is not the total image vs read noise, but rather object signal vs read noise.  The total image contains LP.  LP is useless in any context as a measure of object signal.  It is, however, useful in calculating the dynamic range you will get from a single sub with a particular camera and gain.  This will determine your total effective integration time.  This is why the total integration times needed to achieve a certain object image depth are much longer under urban than rural skies.  Simple physics.

Also note he talks about shot noise.  As I mentioned above, every signal has shot noise.  Every signal.  Thermal, read, photons from the object, photons from LP, etc.  Those are, as you correctly surmised, random Poisson-type distributions.  This is why LP of any kind is unwanted in a sub exposure.  Any kind.  We live with it, but it's not something we strive to increase in our exposures.  The key, and I can't stress it enough, is object signal.  It is not a distraction, as your first post stated and as I objected to.


4 minutes ago, dciobota said:

If you subtract LP from an image you're back where you started originally, with read noise and signal.

You are left with the LP noise as well - you can't remove that, and it exists because we had LP signal present.

4 minutes ago, dciobota said:

LP is not some sort of magic eraser.

Not sure what you mean by this - I never said anything remotely like that.

4 minutes ago, dciobota said:

Are you really saying LP helps imaging?  Really?

Never said such a thing.

5 minutes ago, dciobota said:

The key, and I can't stress it enough, is object signal.  It is not a distraction, as your first post stated and as I objected to.

I never said that signal is a distraction - could you please point to the place where I said so?

Signal itself is not the key; signal-to-noise ratio is.


1 hour ago, vlaiv said:

You are left with the LP noise as well - you can't remove that, and it exists because we had LP signal present. [...]

Sorry, I misquoted you.  Instead you said this:

"Since the target signal is often weaker than read noise in a single exposure, it is not our candidate. Neither is thermal noise (dark current noise). The only real candidate is light pollution noise."

which kinda amounts to saying the target signal is not the important thing to consider in calculating SNR.  My point is that it is important - the most important.  You also mentioned this:

"The SNR impact of read noise depends on the other noise sources, not on the signal we are trying to capture. As long as we keep it (the read noise, that is) below some fraction of some other noise source, it is irrelevant regardless of how low our target signal is."

which implies the same thing.  I do not agree with this, and in Robin Glover's discussion that is not what he's implying at all.  He's merely using that as a way of judging the dynamic range in a sub.  That's the SNR of the total image, including LP, vs read noise, as I said.  The SNR of the target signal is not the same at all.  Once you subtract LP, you will end up with that weak SNR of the target signal against read noise.  If you actually want to pull those faint target signals up to an acceptable ratio, as he says, you must consider sacrificing dynamic range by exposing longer.  Your total image SNR will be very high, but your target SNR will become acceptable at that point, although you will then risk clipping highlights.  This is the very reason why a deeper well is desirable, especially in skies with higher LP.  You want to swamp read noise with faint target signal, not LP noise, while maintaining reasonable star colors and highlights.

 


1 minute ago, dciobota said:

which kinda amounts to saying the target signal is not the important thing to consider in calculating SNR.

It does not. I was talking about swamping read noise with some other noise source.

On 19/02/2024 at 23:48, vlaiv said:

What is important when we talk about read noise is to swamp it with some other type of noise. Of the basic types of noise present when imaging, only read noise is "per exposure". All the others are time dependent - dark current noise, light pollution noise and target shot noise.

Since the target signal is often weaker than read noise in a single exposure, it is not our candidate. Neither is thermal noise (dark current noise). The only real candidate is light pollution noise.

You can't misinterpret what has been said - it is very clear.

3 minutes ago, dciobota said:

which implies the same thing.

It does not imply the same thing - it is very clear in what it says: the impact of read noise on the final SNR depends on the other noise sources.

Signal is what it is - it does not change whether we spend one hour on 60 subs or the same hour on 5 subs. Neither does any of the time-dependent noise sources. The only thing that changes is the read noise - the total amount of it. And the impact this has on the final image depends solely on the other noise sources and their magnitude compared to the read noise.

If read noise is small compared to every other noise source in a single exposure, its impact on the overall noise, and thus the SNR, will be minimal. I've shown you the calculation for this: if any other noise source is x5 (or more) larger than the read noise in a single exposure, then the total increase in noise (and decrease in SNR) due to read noise - compared to a camera with zero read noise - is less than 2%.
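A minimal sketch of the sub-count argument (the 1.5 e- read noise and 2 e-/px/s sky rate are assumptions for illustration):

```python
import math

read_noise, sky_rate, total_time = 1.5, 2.0, 3600.0   # e-, e-/px/s, s

# One hour split into 60 short subs vs 5 long ones, same camera:
for n_subs in (60, 5):
    lp = math.sqrt(sky_rate * total_time)   # LP noise: depends on total time only
    rn = read_noise * math.sqrt(n_subs)     # stacked read noise: grows with sub count
    print(f"{n_subs:2d} subs: LP noise {lp:.1f} e-, stacked read noise "
          f"{rn:.1f} e-, total {math.hypot(lp, rn):.1f} e-")

# 60 subs: total ~85.6 e-; 5 subs: total ~84.9 e- - under 1% apart,
# because the LP noise already swamps the read noise in a 60 s sub.
```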

9 minutes ago, dciobota said:

I do not agree with this, and in Robin Glover's discussion that is not what he's implying at all.  He's merely using that as a way of judging the dynamic range in a sub. [...]

I can't comment on what Robin Glover presented in that video, as I have not watched it. I've only concluded from other members' comments that he presents valid, known statements (and is thus surely right in his presentation).

Since you say that you don't agree with me - could you please answer my question about the difference that 1.5e versus 3e of read noise makes, depending on shooting conditions?

It is OK to disagree with me as long as you put forward a different view and provide actual facts in support of it. Alternatively, you can point out what is wrong in what I've said (and preferably cite a source).


On 16/02/2024 at 14:01, Tom33 said:

" I would love a scope with a long focal length to go after some galaxies and the smaller deep sky objects but would appreciate some advice. I’ve looked at Newtonians, Cassegrain,s in all their formats but appreciate that collimation and other matters will come into play. If anyone has gone down this route I’d be pleased to hear from you."

Many thanks & clear skies

The OP said the above, so here is what I have recently tried for imaging galaxies.  Due to their small pixel size, most modern CMOS cameras are quite oversampled.  After looking at the specs of my ASI482MC, I decided to try it, since its larger 5.8 micron pixels sample at a better image scale, to get "up close and personal" with galaxies.  I used an AP67 reducer with an AT6RC to get NGC253 to fit the sensor, so I ended up at F7 (which is about 1000mm) and 1.25"/pixel.  I shot 120 x 120s subs (4 hours) at gain 150, dithered every 3rd sub.  Since the ASI482MC is an uncooled camera, I took a few darks at the start of the run, then some in the middle and some at the end, and averaged them all together.  I was not expecting much from the experiment, so I did not take any flats, but since the sensor was fairly clean and small enough, there was not much vignetting.

The images were captured and processed with Astroart 8 from my SRO dark site (Bortle 2).

The result turned out way better than I expected!

After this I am trying to lay my hands on an ASI432MM, with its 9 micron pixels, to use with my old C8 and AT6RC.
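A quick plate-scale check of that setup using the standard formula (the 957 mm is the approximate reduced focal length implied by the quoted 1.25"/px, an assumption rather than a measured value):

```python
# scale ("/px) = 206.265 * pixel size (um) / focal length (mm)
pixel_um, focal_mm = 5.8, 957   # ASI482MC pixels; AT6RC + AP67 reducer (approx.)
print(f"{206.265 * pixel_um / focal_mm:.2f} arcsec/px")   # ~1.25
```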

 

 

Attached images: NGC253 4H J Love SRO LBL.jpg, AT6RC-ASI482MC-GEM45-1-sm.JPG


6 hours ago, vlaiv said:

It does not. I was talking about swamping read noise with some other noise source. [...]

OK, so you recommend a video you didn't even watch.  I think we're just going in circles on this.  I keep showing you what you're actually saying and you say it's not.  I don't think we're getting anywhere with this.

My assertions stand on their own merit, and they are:

To get the faint object signal to show, it needs to exceed the read noise.  This can be done in a single exposure by raising the exposure length until the signal exceeds the read noise (since signal increases with time whereas read noise is constant per exposure), or by increasing total integration time, or by increasing the photon flux per pixel, which for the same camera means a faster optic.

LP has nothing to do with increasing the SNR of object signal vs read noise and does nothing to help bring out that faint signal.  LP does have an effect on the dynamic range of an image from a single sub and must be considered when calculating exposure length.

We can certainly disagree on these statements, but I'll leave them here and not debate them further, as I have already done that in previous posts.  I'm not one who enjoys repeating himself, tbh.


2 hours ago, dciobota said:

OK, so you recommend a video you didn't even watch.

Why not? I put my trust in fellow SGL members who recommend said video whenever the topic of read noise and sub duration comes up. I'm also aware that SharpCap (the software whose author is Dr Robin Glover) can do the calculations needed to determine a camera's read noise and the proper sub duration.

2 hours ago, dciobota said:

I think we're just going in circles on this. 

No, we are actually not having a civil conversation about this - you keep bashing what's been said without offering any arguments.

3 hours ago, dciobota said:

I keep showing you what you're actually saying and you say it's not. 

You have shown absolutely nothing. You keep giving vague statements on the topic.

3 hours ago, dciobota said:

To get the faint object signal to show, it needs to exceed the read noise.  This can be done in a single exposure by raising the exposure length until the signal exceeds the read noise (since signal increases with time whereas read noise is constant per exposure), or by increasing total integration time, or by increasing the photon flux per pixel, which for the same camera means a faster optic.

This is wrong on several accounts. I will list them (see the sketch after the list for point 2):

1. The first part is partly correct - it does require the signal to be above the read noise, but the signal needs to be above the total noise, not just the read noise. The SNR needs to be higher than 1 (in fact, about 5 for reliable detection).

2. An increase in total integration time won't necessarily raise the SNR above a given threshold. If we use very short exposures - a few milliseconds, say - we introduce too much read noise for the SNR ever to rise above the limit. In fact, for any total integration time, I can select a sub duration that will keep the SNR below 1 due to read noise.

3. Using faster optics is not the only way to increase the photon flux per pixel. The same is achieved by using larger pixels or by binning smaller pixels to a larger size.
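A minimal sketch of point 2, with assumed rates (0.05 e-/px/s faint target, 0.5 e-/px/s sky, 1.5 e- read noise - illustrative numbers, not a real setup):

```python
import math

target_rate = 0.05     # e-/px/s, faint target (assumption)
sky_rate = 0.5         # e-/px/s (assumption)
read_noise = 1.5       # e- per sub (assumption)
total_time = 4 * 3600  # fixed 4 h of total integration

for sub in (120.0, 10.0, 1.0, 0.01):
    n_subs = total_time / sub
    signal = target_rate * total_time                        # same in every case
    shot = math.sqrt((target_rate + sky_rate) * total_time)  # time-dependent noise
    rn = read_noise * math.sqrt(n_subs)                      # per-sub noise, stacked
    print(f"{sub:7.2f} s subs: SNR = {signal / math.hypot(shot, rn):.2f}")

# 120 s -> ~8.0, 1 s -> ~3.6, 0.01 s -> ~0.4: same total time, yet short
# enough subs push the stacked read noise high enough to bury the target.
```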

3 hours ago, dciobota said:

LP has nothing to do with increasing the SNR of object signal vs read noise and does nothing to help bring out that faint signal.  LP does have an effect on the dynamic range of an image from a single sub and must be considered when calculating exposure length.

I agree with most of what you said here - although I don't know what you mean by "SNR of object signal vs read noise". That statement makes no sense: you should not compare a ratio of quantities (SNR is the ratio of signal to noise) with one of those quantities (read noise). I'll just assume you meant SNR (signal-to-noise ratio).

I never claimed differently - although you have tried to misinterpret what I've said that way several times.

However, I will repeat what LP noise is good for: it shows you the sub duration beyond which read noise makes virtually no difference, so you don't need to use longer subs. In fact, this is not exclusive to LP - it applies to any time-dependent noise source: when any noise source other than read noise becomes significantly larger than the read noise, the read noise stops having a significant impact on the total SNR (unlike the short-exposure example above, where read noise overpowers the signal for any integration time).

Having this capability brings equality to cameras with different read noise. If one camera has 1.5e of read noise and the other has 3e, there are sub lengths for each of them that will produce the same final SNR in the same total integration time.
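A short sketch of that equalisation, using the x5 swamp factor from earlier (same assumed sky and scope for both cameras):

```python
# For an equal swamp factor, the required sky signal per sub scales with
# read_noise^2 - and so does the sub length, under the same sky:
for rn in (1.5, 3.0):
    print(f"{rn} e- read noise: need {(5 * rn) ** 2:.0f} e- of sky per sub")

# 56 e- vs 225 e-: the 3 e- camera needs subs 4x longer, after which both
# cameras deliver essentially the same SNR for the same total time.
```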

3 hours ago, dciobota said:

We can certainly disagree on these statements, but I'll leave them here and not debate them further, as I have already done that in previous posts.  I'm not one who enjoys repeating himself, tbh.

I agree. No point in further "debating" this topic.


9 hours ago, CCD-Freak said:

ASI482MC

They're great. I used a 485MC, and I use a 183MM and a 294MC, all uncooled, perfectly fine for DSOs; they're comparable to my cooled cameras once the image is post-processed.

