
Help please - binning and exposure times....


geoflewis

Recommended Posts

Hi experts,

I know that there's been a lot of discussion around binning on this forum, as I've read quite a bit of it this afternoon and previously, but the more I read the more confused I become.

I've used the 'Resources' page to check CCD suitability for my C14 plus 0.67x Optec lens, when used with my QSI583 camera. The unbinned (1x1) resolution for this equipment combination is 0.43"/px, which is borderline oversampled even in good seeing and definitely oversampled in more typical UK seeing, so I typically capture all data at 2x2 binning, which gets me into the green sampling range at 0.85"/px.

The question I now have is by how much I should vary my exposure times. Simple maths suggests that since 4 physical camera pixels make up the 1 large (binned 2x2) pixel, I'm capturing at 4x, but I read that practical experience puts the ratio at more like 1.6x to 2x. So if I would typically use 10 min (600 s) exposures imaging unbinned, would that be equivalent to, say, 5 min exposures when binning 2x2? I see many exposure durations far in excess of that being used by some folks, e.g. 900 s, 1200 s, 1800 s, albeit not imaging with a C14, but I'm seeing fully saturated stars with the C14 at even 300 s binned 2x2, and for RGB images saturation of brighter stars occurs after even 120 s. I want to shoot the longest exposures possible without saturating stars, as there is a significant overhead in download times, storage and processing with many shorter exposures. I know I can experiment, but building a darks library for 10 min, 15 min, etc. exposures is not a 5 minute exercise, so I don't want to do that if there is no point.

I'm hoping that my question makes sense, or if not someone can put me on the right track please.

Cheers, Geof


22 hours ago, geoflewis said:

Simple maths suggests that since I'm using 4 physical camera pixels in the 1 large (binned 2x2) pixel, that I'm capturing at x4, but I read that practical experience puts the ratio as more like x1.6 to x2.

Although you get 4x the signal (per binned pixel), the noise also goes up by the square root of this, so your S/N per binned pixel improves by 2x, not 4x. I suspect this is where some of the confusion comes from. Of course, the total number of photons you collect from a given object has not changed - only exposure time can alter that.
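The square-root scaling above can be sketched with a tiny stdlib-only calculation. The 100 e- signal level is an arbitrary illustrative number, not from the thread, and read noise is ignored here:

```python
import math

# Toy per-pixel sky signal in electrons (illustrative number only).
signal_e = 100.0

# Shot-noise-limited SNR of one unbinned pixel: S / sqrt(S).
snr_unbinned = signal_e / math.sqrt(signal_e)

# A 2x2 binned "super pixel" collects 4x the signal, but shot noise
# grows as sqrt(4x), so the SNR improves by sqrt(4) = 2, not 4.
snr_binned = (4 * signal_e) / math.sqrt(4 * signal_e)

print(snr_unbinned)               # 10.0
print(snr_binned)                 # 20.0
print(snr_binned / snr_unbinned)  # 2.0
```

The same ratio holds whatever signal level you pick, since S/sqrt(S) = sqrt(S) and sqrt(4S)/sqrt(S) = 2.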

NigelM


Don't worry about exposure time in relation to binning or saturation of stars.

You should base your exposure length on a few other factors:

- how big read noise is compared to other noise sources - mainly light pollution (LP) levels. As soon as any other noise source becomes dominant, there is not much point in going longer. You can measure the background level in your exposures (ideally after calibration, per filter) and then determine whether the exposure is sufficiently long. You want your LP noise level to be at least 5 times the read noise (so the square root of the background signal should be about 5 times as large as the read noise - do pay attention to converting from ADU to electrons).

- In contrast to the above, you want more exposures, so each should be as short as possible. This follows from several considerations. Most algorithms work better with a lot of data (the statistical analysis fares better), so you want a larger number of subs in your stack. Shorter exposures also mean less imaging time lost if something bad happens - an airplane passing, a wind gust, a cable snag, an earthquake (yes, this is a real thing; at that focal length you will record even small tremors).

So balance the two: go long enough to defeat read noise, but don't overdo it, as you will benefit from a larger number of subs.
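The first criterion can be sketched as a small helper. The 8.7 e- read noise matches the camera discussed later in the thread, but the 2.5 e-/s sky rate and the function name are illustrative assumptions:

```python
def min_exposure_s(read_noise_e: float, sky_rate_e_per_s: float, k: float = 5.0) -> float:
    """Shortest sub such that sky (LP) shot noise is at least k times the read noise.

    Condition: sqrt(sky_rate * t) >= k * read_noise
    =>         t >= (k * read_noise)**2 / sky_rate
    """
    return (k * read_noise_e) ** 2 / sky_rate_e_per_s

# Illustrative numbers: 8.7 e- read noise, sky building up at 2.5 e-/s/pixel.
t = min_exposure_s(8.7, 2.5)
print(f"{t:.0f} s")  # (8.7*5)^2 / 2.5 = 1892.25 / 2.5 -> "757 s"
```

Note how the required time scales with the square of the read noise and inversely with sky brightness, which is exactly the trade-off described above.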

As for saturation, this is easily dealt with by taking just a couple of very short subs at the end of the session (or per filter) that you use only to "fill in" star cores. In star cores, or any part of the image that saturates, the signal is already very strong (otherwise it would not saturate the sensor). This means the SNR is already high and you don't need many subs stacked to get a good result. Just a few 10-15 s exposures will deal with this. After you stack, select the brightest stars and "paste" the same region from the scaled short stack (you need to multiply the linear values by the ratio of the exposure lengths).

In principle, luminance does not need "fill in" subs, as star cores end up saturated after stretching anyway. You do need colour fill-in though - you need the exact RGB ratio to preserve star colour.


58 minutes ago, dph1nm said:

Although you get 4x the signal (per binned pixel), the noise also goes up by the square root of this, so your S/N per binned pixel improves by 2x, not 4x. I suspect this is where some of the confusion comes from. Of course, the total number of photons you collect from a given object has not changed - only exposure time can alter that.

NigelM

Thanks Nigel,

I'm still trying to get my head around this, but it's gradually seeping in - I hope....

Cheers, Geof


39 minutes ago, vlaiv said:

Don't worry about exposure time in relation to binning or saturation of stars.

You should base your exposure length on a few other factors:

- how big read noise is compared to other noise sources - mainly light pollution (LP) levels. As soon as any other noise source becomes dominant, there is not much point in going longer. You can measure the background level in your exposures (ideally after calibration, per filter) and then determine whether the exposure is sufficiently long. You want your LP noise level to be at least 5 times the read noise (so the square root of the background signal should be about 5 times as large as the read noise - do pay attention to converting from ADU to electrons).

- In contrast to the above, you want more exposures, so each should be as short as possible. This follows from several considerations. Most algorithms work better with a lot of data (the statistical analysis fares better), so you want a larger number of subs in your stack. Shorter exposures also mean less imaging time lost if something bad happens - an airplane passing, a wind gust, a cable snag, an earthquake (yes, this is a real thing; at that focal length you will record even small tremors).

So balance the two: go long enough to defeat read noise, but don't overdo it, as you will benefit from a larger number of subs.

As for saturation, this is easily dealt with by taking just a couple of very short subs at the end of the session (or per filter) that you use only to "fill in" star cores. In star cores, or any part of the image that saturates, the signal is already very strong (otherwise it would not saturate the sensor). This means the SNR is already high and you don't need many subs stacked to get a good result. Just a few 10-15 s exposures will deal with this. After you stack, select the brightest stars and "paste" the same region from the scaled short stack (you need to multiply the linear values by the ratio of the exposure lengths).

In principle, luminance does not need "fill in" subs, as star cores end up saturated after stretching anyway. You do need colour fill-in though - you need the exact RGB ratio to preserve star colour.

Thanks Vlaiv,

I understand the advice about LP noise v read noise, but I have no idea how to measure them, nor how to convert ADU to electrons - actually I don't even understand what that means. What software do I need for that?

I also understand that many shorter exposures are better than a few longer ones for improved S/N, provided each exposure is long enough to dominate the read noise - so why do many of the best imagers I see shoot exposures lasting 10, 15, even 30 minutes? I've never been able to get my head around that.

I also understand that short exposures can be used to restore star colours, but I've never been much good at that, so I guess I don't know the correct processing steps. One of the problems of being self-taught, I guess....

I sure wish that I understood all this much better than I do currently.... 😖

Cheers, Geof


1 hour ago, geoflewis said:

I understand the advice about LP noise v read noise, but I have no idea how to measure them, nor how to convert ADU to electrons - actually I don't even understand what that means. What software do I need for that?

I don't know what software you currently use for processing, but most astro software out there offers this functionality.

You simply need to select a piece of background, avoiding stars and background nebulosity / galaxies, and compute statistics on it - more precisely, the average value of the selection. That is all it takes.

You take that value from your calibrated sub and according to this:

http://www.astrosurf.com/buil/qsi/comparison.htm

your camera has:

0.485 e- / ADU

and

8.7e read noise.

This means that you should expose until you get about ~2000 electrons of background LP signal ((8.7 × 5)^2), or, converted to ADU (the values you read off the sub), ~4100.

If your sub has a lower background value than this, increase the exposure length; if it has a higher value, you can reduce the exposure length.

In fact you can do this from your old subs - find a calibrated sub from before you binned 2x2, measure the background value and multiply it by 4 (because when you bin, the values of 4 adjacent pixels add to form a single value, so the background is 4x higher). If this value is less than ~4100, increase the exposure length; if higher, reduce it. You can even calculate the needed exposure length with a bit of maths. If, for example, you are using 10 minute subs and you find that the measured value is 1200 ADU (for one of your past unbinned subs), meaning it will be about 4800 when binned, then the proper exposure would be 10 minutes × 4100/4800 = ~8.54 minutes, or about 8 and a half minutes.
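The scaling rule above can be written as a small helper. The function name is mine; the numbers are the worked example from the post:

```python
def scale_exposure(current_exp_s, measured_adu, target_adu=4100.0, bin_factor=1):
    """Scale a sub length so the measured background reaches the target level.

    measured_adu : background ADU of an existing calibrated sub
    bin_factor   : 4 if the measurement came from an unbinned sub but you
                   will image binned 2x2 (4 pixels summed), else 1
    """
    effective_adu = measured_adu * bin_factor
    return current_exp_s * target_adu / effective_adu

# Worked example from the post: 10-minute (600 s) unbinned subs measuring
# 1200 ADU, to be shot binned 2x2 (so ~4800 ADU equivalent):
print(scale_exposure(600, 1200, bin_factor=4))  # 600 * 4100/4800 = 512.5 s (~8.5 min)
```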

Hope this makes sense.

1 hour ago, geoflewis said:

I also understand that many shorter exposures are better than a few longer ones for improved S/N, provided each exposure is long enough to dominate the read noise - so why do many of the best imagers I see shoot exposures lasting 10, 15, even 30 minutes? I've never been able to get my head around that.

Actually, no. Fewer, longer subs will always have higher SNR than more, shorter subs for the same total integration time. By overcoming read noise as above, we just make sure the difference is too small to be of any practical significance. At some point the difference becomes unimportant, but fewer longer subs will still have better SNR than more shorter subs - only the difference will be something like less than 1%.

If you look at what impacts sub duration, it will be obvious why some of the best imagers use longer exposures.

- Sub duration depends on read noise - the higher the read noise, the longer the sub needs to be.

- Sub duration depends on LP levels - the more LP you have, the shorter the exposures you can get away with.

Most of the best imagers use CCD sensors (they have been at this long enough to have invested in cameras before CMOS became available / popular), and CCDs tend to have higher read noise than CMOS sensors - often as much as 10e or so. This means that CCDs benefit from longer exposures. Also, most of the best imagers shoot from dark skies (at least they try to), which means that LP levels are low - again favouring longer exposures (you need to wait longer for the LP signal to build up enough for its noise to become significantly larger than the read noise).

Add to that the fact that you've often been looking at narrowband exposure times (just not paying attention to the type of image vs sub length). Narrowband additionally cuts down LP levels - thus requiring longer exposures.

Add all of the above together - a CCD with high read noise, dark skies and a 3nm Ha filter - and you can see how it leads to an optimum sub length of more than an hour.

 


1 hour ago, vlaiv said:

I don't know what software you currently use for processing, but most astro software out there offers this functionality.

You simply need to select a piece of background, avoiding stars and background nebulosity / galaxies, and compute statistics on it - more precisely, the average value of the selection. That is all it takes.

You take that value from your calibrated sub and according to this:

http://www.astrosurf.com/buil/qsi/comparison.htm

your camera has:

0.485 e- / ADU

and

8.7e read noise.

This means that you should expose until you get about ~2000 electrons of background LP signal ((8.7 × 5)^2), or, converted to ADU (the values you read off the sub), ~4100.

If your sub has a lower background value than this, increase the exposure length; if it has a higher value, you can reduce the exposure length.

In fact you can do this from your old subs - find a calibrated sub from before you binned 2x2, measure the background value and multiply it by 4 (because when you bin, the values of 4 adjacent pixels add to form a single value, so the background is 4x higher). If this value is less than ~4100, increase the exposure length; if higher, reduce it. You can even calculate the needed exposure length with a bit of maths. If, for example, you are using 10 minute subs and you find that the measured value is 1200 ADU (for one of your past unbinned subs), meaning it will be about 4800 when binned, then the proper exposure would be 10 minutes × 4100/4800 = ~8.54 minutes, or about 8 and a half minutes.

Hope this makes sense.

Actually, no. Fewer, longer subs will always have higher SNR than more, shorter subs for the same total integration time. By overcoming read noise as above, we just make sure the difference is too small to be of any practical significance. At some point the difference becomes unimportant, but fewer longer subs will still have better SNR than more shorter subs - only the difference will be something like less than 1%.

If you look at what impacts sub duration, it will be obvious why some of the best imagers use longer exposures.

- Sub duration depends on read noise - the higher the read noise, the longer the sub needs to be.

- Sub duration depends on LP levels - the more LP you have, the shorter the exposures you can get away with.

Most of the best imagers use CCD sensors (they have been at this long enough to have invested in cameras before CMOS became available / popular), and CCDs tend to have higher read noise than CMOS sensors - often as much as 10e or so. This means that CCDs benefit from longer exposures. Also, most of the best imagers shoot from dark skies (at least they try to), which means that LP levels are low - again favouring longer exposures (you need to wait longer for the LP signal to build up enough for its noise to become significantly larger than the read noise).

Add to that the fact that you've often been looking at narrowband exposure times (just not paying attention to the type of image vs sub length). Narrowband additionally cuts down LP levels - thus requiring longer exposures.

Add all of the above together - a CCD with high read noise, dark skies and a 3nm Ha filter - and you can see how it leads to an optimum sub length of more than an hour.

 

Vlaiv,

Many thanks again. I have found a statistics tool in my astro software, ImagesPlus, so I tried some tests on both binned 2x2 and unbinned 1x1 images. I only use 1x1 when imaging with my 4" APO, whereas binned 2x2 is always with the C14. I'm not sure what significance, if any, these different configurations have, as it is the same camera. Here is what I found checking some L subs....

4" APO unbinned 600 sec sub - ADU = ~3200

If I understood you correctly I need to multiply that by 4 for comparison with binned 2x2 which gives 12,800 - I must say I'm not understanding this part....??

C14+Optec lens binned 2x2:

-300 sec sub - ADU = ~1400

-120 sec sub - ADU = ~800 (NB for an image of M13 the background ADU was ~600, but if I shoot much longer surely the globular cluster core would become blown out)

Of course the results vary depending whether there was a moon in the sky, hence recent subs from M57 had ADU of ~1800 at 120 sec.

Based on my understanding of what you are saying, the 600 sec sub binned 1x1 with the 4" APO looks to be too long a duration, which I find very surprising, whereas both the 120 s and 300 s subs with the C14 binned 2x2 are much too short.

Please can you explain some more.

Many thanks,

Geof


2 hours ago, geoflewis said:

Vlaiv,

Many thanks again. I have found a statistics tool in my astro software, ImagesPlus, so I tried some tests on both binned 2x2 and unbinned 1x1 images. I only use 1x1 when imaging with my 4" APO, whereas binned 2x2 is always with the C14. I'm not sure what significance, if any, these different configurations have, as it is the same camera. Here is what I found checking some L subs....

4" APO unbinned 600 sec sub - ADU = ~3200

If I understood you correctly I need to multiply that by 4 for comparison with binned 2x2 which gives 12,800 - I must say I'm not understanding this part....??

C14+Optec lens binned 2x2:

-300 sec sub - ADU = ~1400

-120 sec sub - ADU = ~800 (NB for an image of M13 the background ADU was ~600, but if I shoot much longer surely the globular cluster core would become blown out)

Of course the results vary depending whether there was a moon in the sky, hence recent subs from M57 had ADU of ~1800 at 120 sec.

Based on my understanding of what you are saying, the 600 sec sub binned 1x1 with the 4" APO looks to be too long a duration, which I find very surprising, whereas both the 120 s and 300 s subs with the C14 binned 2x2 are much too short.

Please can you explain some more.

Many thanks,

Geof

OK, let's go over your specific measurements. I mentioned binned vs unbinned in case you use the same telescope unbinned and want to calculate the exposure time for the binned version.

In your case, you are already shooting binned with the C14 and unbinned with the 4" APO, and I guess you intend to continue doing so, so there is no need to multiply by 4 and convert between unbinned and binned background levels.

For the 4" APO, you say you get around 3200 ADU in 600 seconds. We calculated that you want to reach a background level of about 4100 ADU. This means a good exposure for the 4" APO would be 600 s × 4100/3200 = ~768 seconds, or about 12 minutes and 48 seconds. This is not a strict optimum, but it does tell us that you will get a slightly better result shooting 12 minute subs instead of 10 minute subs.

For the C14 we have 300 s with an ADU of ~1400. From this, a good exposure would be 300 s × 4100/1400 = ~878 seconds, or about 14 and a half minutes.

I don't know why you shot an NB image of M13 - it is a globular cluster and there is no significant signal in the emission lines - but we can do the same:

120 s × 4100/800 = ~615 s, or about 10 minutes.

I'll explain a couple more things. First, when I wrote above about swamping the read noise, I used the rather arbitrary figure of 5x for LP noise vs read noise. I've seen this figure used in calculations, and it makes sense for the following reason:

Suppose you have 1 unit of read noise and 5x larger LP noise - in this case, 5 units of LP noise. The total noise will be (adding noise in quadrature, and not including other noise sources): sqrt(1^2 + 5^2) = sqrt(26) = ~5.1. This shows how thoroughly the LP noise swamps the read noise: there is almost no difference between the noise level of LP alone and LP + read noise - 5 units vs 5.1 units, a very small increase.

However, like I said, the factor of 5x is arbitrary - which means the exposures calculated above are not "optimal" in any strict sense; they are just a good guideline. If the calculation gives 12 minutes and 48 seconds, you can use 12 minutes or 13 minutes - whatever suits you (pick a value that you can use across your scopes, so you don't have to build a large dark library with many different exposures).

The second thing I wanted to explain is the sub-duration calculation above in better terms, for easier understanding. It is just a simple ratio when you think about it - let's again use the APO example. You measured the background level of a 600 second exposure to be ~3200 ADU. This means the background signal is building up at 3200/600 ADU per second, or ~5.3333 ADU/s.

We've seen that for our coefficient of 5x (meaning the LP noise needs to be about 5x the read noise in magnitude) this means ~4100 ADU. To reiterate: the read noise of your camera is ~8.7e, five times that is 43.5e, and we need the LP noise level to be about that number. The LP noise level equals the square root of the LP signal, so we need the LP signal to be the square of 43.5 = ~1892.25e (I rounded it up to 2000 above).

The last thing we need to do is convert electrons to ADU, and that is what the gain value is for. Your camera's gain is 0.485 e-/ADU, so to get ADU we divide the electrons by the gain: ~2000 / 0.485 = ~4124, which I again rounded to ~4100 (you don't need all this rounding; it was just easier for me to write round numbers than to type precise values from the calculator).

Back to our exposure time. We have the LP level building up at ~5.3333 ADU/s. How long does it take to build up to 4100 ADU? That's easy: 4100 / 5.3333 = ~768 s. Again, you don't need to be precise and use 12 minutes and 48 seconds - either 12 or 13 minutes will do.
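The whole chain above - read noise, the 5x factor, gain, build-up rate - can be written out as a short script. This is a sketch using the numbers quoted in the thread (8.7 e- read noise and 0.485 e-/ADU gain from the linked comparison page):

```python
# Camera figures quoted in the thread (QSI583, per the astrosurf page).
read_noise_e = 8.7      # electrons
gain = 0.485            # e-/ADU
k = 5.0                 # target: LP noise ~5x read noise

# Required LP signal: (k * read_noise)^2 electrons, then converted to ADU.
target_e = (k * read_noise_e) ** 2      # 1892.25 e- (rounded up to ~2000 in the post)
target_adu = target_e / gain            # ~3902 ADU exact; ~4100 with the post's rounding

# Measured background: ~3200 ADU in a 600 s sub => build-up rate.
rate_adu_per_s = 3200 / 600             # ~5.33 ADU/s

# Time to reach the (rounded) target of 4100 ADU.
t = 4100 / rate_adu_per_s
print(f"{t:.1f} s")                     # ~768.8 s, i.e. 12-13 minute subs
```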

Makes sense?


5 minutes ago, vlaiv said:

OK, let's go over your specific measurements. I mentioned binned vs unbinned in case you use the same telescope unbinned and want to calculate the exposure time for the binned version.

In your case, you are already shooting binned with the C14 and unbinned with the 4" APO, and I guess you intend to continue doing so, so there is no need to multiply by 4 and convert between unbinned and binned background levels.

For the 4" APO, you say you get around 3200 ADU in 600 seconds. We calculated that you want to reach a background level of about 4100 ADU. This means a good exposure for the 4" APO would be 600 s × 4100/3200 = ~768 seconds, or about 12 minutes and 48 seconds. This is not a strict optimum, but it does tell us that you will get a slightly better result shooting 12 minute subs instead of 10 minute subs.

For the C14 we have 300 s with an ADU of ~1400. From this, a good exposure would be 300 s × 4100/1400 = ~878 seconds, or about 14 and a half minutes.

I don't know why you shot an NB image of M13 - it is a globular cluster and there is no significant signal in the emission lines - but we can do the same:

120 s × 4100/800 = ~615 s, or about 10 minutes.

I'll explain a couple more things. First, when I wrote above about swamping the read noise, I used the rather arbitrary figure of 5x for LP noise vs read noise. I've seen this figure used in calculations, and it makes sense for the following reason:

Suppose you have 1 unit of read noise and 5x larger LP noise - in this case, 5 units of LP noise. The total noise will be (adding noise in quadrature, and not including other noise sources): sqrt(1^2 + 5^2) = sqrt(26) = ~5.1. This shows how thoroughly the LP noise swamps the read noise: there is almost no difference between the noise level of LP alone and LP + read noise - 5 units vs 5.1 units, a very small increase.

However, like I said, the factor of 5x is arbitrary - which means the exposures calculated above are not "optimal" in any strict sense; they are just a good guideline. If the calculation gives 12 minutes and 48 seconds, you can use 12 minutes or 13 minutes - whatever suits you (pick a value that you can use across your scopes, so you don't have to build a large dark library with many different exposures).

The second thing I wanted to explain is the sub-duration calculation above in better terms, for easier understanding. It is just a simple ratio when you think about it - let's again use the APO example. You measured the background level of a 600 second exposure to be ~3200 ADU. This means the background signal is building up at 3200/600 ADU per second, or ~5.3333 ADU/s.

We've seen that for our coefficient of 5x (meaning the LP noise needs to be about 5x the read noise in magnitude) this means ~4100 ADU. To reiterate: the read noise of your camera is ~8.7e, five times that is 43.5e, and we need the LP noise level to be about that number. The LP noise level equals the square root of the LP signal, so we need the LP signal to be the square of 43.5 = ~1892.25e (I rounded it up to 2000 above).

The last thing we need to do is convert electrons to ADU, and that is what the gain value is for. Your camera's gain is 0.485 e-/ADU, so to get ADU we divide the electrons by the gain: ~2000 / 0.485 = ~4124, which I again rounded to ~4100 (you don't need all this rounding; it was just easier for me to write round numbers than to type precise values from the calculator).

Back to our exposure time. We have the LP level building up at ~5.3333 ADU/s. How long does it take to build up to 4100 ADU? That's easy: 4100 / 5.3333 = ~768 s. Again, you don't need to be precise and use 12 minutes and 48 seconds - either 12 or 13 minutes will do.

Makes sense?

Vlaiv,

Thank you so much for taking the time to explain this using my example ADU readings. I think I finally get it, or at least I feel much more comfortable in my understanding. Thanks also for explaining your use of the 5x multiplier, which makes sense. I will continue to re-read this thread (probably many times, to fix it in my brain), but at last I feel I have a logical methodology for determining exposure durations for my different rigs and different binning.

Oh, and BTW, my reference to NB for M13 was not Narrowband but Nota Bene (Latin for 'note well'). Of course I do not shoot narrowband for globular clusters 😉.

Many, many thanks.

Geof


55 minutes ago, geoflewis said:

For the C14 we have 300 s with an ADU of ~1400. From this, a good exposure would be 300 s × 4100/1400 = ~878 seconds, or about 14 and a half minutes.

Hi Vlaiv,

One more time please? I checked the background ADU for a series of Ha images that I captured last night. The exposures were 300 sec and the background ADU is ~50. So if the target ADU is ~4100, then the approximate optimum exposure is 300 s × 4100/50 = 24,600 s, or 410 min (6.8 hours) - surely ~7 hour exposures cannot be right, so what did I do wrong? Does a background ADU of just 50 at 300 s seem likely? I live in a Bortle 4 location with readings that night of 21 SQM, so reasonably dark.

I've attached the calibrated Tiff for you to check if that is possible please.

C_M57_300SEC_2X2_HA_FRAME1.tif

Thanks again,

Geof


27 minutes ago, geoflewis said:

Hi Vlaiv,

One more time please? I checked the background ADU for a series of Ha images that I captured last night. The exposures were 300 sec and the background ADU is ~50. So if the target ADU is ~4100, then the approximate optimum exposure is 300 s × 4100/50 = 24,600 s, or 410 min (6.8 hours) - surely ~7 hour exposures cannot be right, so what did I do wrong? Does a background ADU of just 50 at 300 s seem likely? I live in a Bortle 4 location with readings that night of 21 SQM, so reasonably dark.

I've attached the calibrated Tiff for you to check if that is possible please.

C_M57_300SEC_2X2_HA_FRAME1.tif

Thanks again,

Geof

It is in fact right. Not that you should attempt such an exposure - but it would take 6.8 hours for the LP noise to reach 5 times the read noise.

As I already mentioned, narrowband imaging is the most demanding in terms of exposure length, because narrowband filters do an excellent job of removing LP and its associated noise. For best results in narrowband, use the longest exposure you are comfortable with - you mentioned that some people do 30 minute exposures; that is exactly the case for such long exposures - dark skies, narrowband and a high read noise camera.

I checked your calibrated sub, and I have a suggestion for you. For some reason the frame is 16-bit. If at all possible, do all your processing in 32-bit. It is true that a single sub will be 16-bit prior to calibration, but as soon as you start the calibration process you need more precision than 16 bits provide. Using fewer bits per pixel just introduces (a bad kind of) noise into your image.

Do this when making masters for calibration too - don't make your master dark, bias and flat files 16-bit; process your data in 32-bit format.

I've checked the tiff (another tip: use the FITS format - I know many people prefer tiff, but FITS is designed for recording / transferring and processing astronomical images, so most astro software supports it), and it has a background level of about 39-40 ADU, so yes, you will benefit from long exposure times for NB.

There are a couple of things I find odd with the tiff sub you attached - namely the 0 values of some pixels. It is very odd to get exactly 0. I would expect no zeros, or at least just a few, but this sub contains a lot. It means that either the background LP level is very low (and in principle it is) - but then I would also expect at least some negative values - or all values should be positive.

It might also be an "artifact" of the 16-bit tiff format - maybe the image is "shifted", so the measured background value is skewed. If I try a sanity check, here is what I get:

I took a patch of "empty sky" and measured the signal in electrons - it is around 18e. I also measured the noise in this part of the image, and it is ~18.5e. To check whether this adds up, we need to see how much noise 18e of LP signal + 8.7e of read noise should produce (plus a bit more due to calibration). It will be sqrt(18 + 8.7^2) = ~9.68e.

There is much more noise in the background of the image than there should be if the background level is correct.
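The sanity check above is just a quadrature sum, which can be reproduced in a few lines (the helper name is mine; the 18 e- sky and 8.7 e- read noise figures are the ones quoted in the post):

```python
import math

def expected_background_noise_e(sky_e, read_noise_e):
    """Quadrature sum of sky shot noise and read noise, in electrons.

    Sky shot noise = sqrt(sky_e); read noise adds in quadrature.
    (Calibration adds a little more on top, ignored here.)
    """
    return math.sqrt(sky_e + read_noise_e ** 2)

# Values from the post: ~18 e- of sky signal, 8.7 e- read noise.
print(round(expected_background_noise_e(18, 8.7), 2))  # 9.68, vs ~18.5 measured
```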

Could you do a calibrated frame in 32 bit (with 32bit masters) and post it as fits to run the same measurement again?


8 hours ago, vlaiv said:

Could you do a calibrated frame in 32 bit (with 32bit masters) and post it as fits to run the same measurement again?

Hi Vlaiv,

Thanks again, here is a 32bit FIT file - at least I think that it is.

C_M57_300sec_2x2_Ha_frame1.fit

I usually perform processing (calibration, align, combine) on 32-bit FITS in ImagesPlus, then save as 16-bit Tiff for post-processing, as the tools I use, e.g. Registar and Photoshop (CS2), do not read FITS files.

Even at the 32-bit level I think I'm seeing 16% of pixels (~339k) at 0, so what does this mean?

Regards, Geof


I've examined the 32-bit FITS and indeed the background is a bit lower - at 39 ADU (actually about 38.9). I guess this is because of the added precision.

I'm worried about the number of zeros; it indicates something wrong in the calibration workflow. You can get an exactly 0 pixel value under some circumstances - a hot pixel being one example. Say a pixel saturates in every exposure because it is hot - it will have max_value (65535, or whatever the max ADC value is). All dark subs will have this value, so their average is also max_value. The light sub will have this value in that pixel, and when you subtract the master dark you get max_value - max_value for that pixel, which is always exactly 0.

I don't think that is what is happening in your case. Too many pixels have a 0 value, and you are also binning 2x2. If one pixel is hot, binning it with other pixels that are not hot will result in a lower value - so it can't produce exactly 0. It is highly unlikely that 4 adjacent pixels that are binned together are all hot (that would produce a "hot binned pixel").

Let's look at the histogram of your calibrated sub in detail to see if we can spot anything that would explain the problem:

[screenshot: histogram of the calibrated sub]

Ah yes, it is immediately obvious that the zeros are the result of "left clipping". This could be due to calibration - all values below zero are being clipped to zero for some reason (this happens, for example, when calibrating in 16bit unsigned values, as that format can't hold negative values, or it might be an internal problem with the calibration software).

It could also happen because of wrong camera settings - if the offset is set too low. I'm not sure if your camera even allows software adjustment of the offset value, or whether it is set to a proper value at the factory and can't be changed.
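The 16bit-unsigned problem is easy to demonstrate. A minimal sketch with two illustrative background pixels, one a little below and one a little above the ~500 ADU master-dark level seen here:

```python
import numpy as np

# Illustrative background pixels: one below, one above the master-dark level
light = np.array([480, 520], dtype=np.uint16)
dark  = np.array([500, 500], dtype=np.uint16)

# Unsigned 16-bit cannot represent -20: a raw subtraction wraps around...
wrapped = light - dark
print(wrapped)        # [65516    20]

# ...so software working in this format clips to 0 instead, producing the
# pile of zeros. Calibrating in float (or signed) keeps the negatives:
proper = light.astype(np.float64) - dark.astype(np.float64)
print(proper)         # [-20.  20.]
```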

Let's see if we can figure out what is going on (if you wish of course) - could you please post two more subs:

- single unprocessed dark sub directly from camera

- master dark, again as 32bit fits.


Hi Vlaiv,

I am just leaving home to go on holiday for one week - actually to a local UK star party - so I cannot send these to you immediately, but I will take my laptop with me and see if I can upload them later today. I very much appreciate you helping me diagnose all this and better understand what is going on.

Regards, Geof


1 hour ago, geoflewis said:

Hi Vlaiv,

I am just leaving home to go on holiday for one week - actually to a local UK star party - so I cannot send these to you immediately, but I will take my laptop with me and see if I can upload them later today. I very much appreciate you helping me diagnose all this and better understand what is going on.

Regards, Geof

You are very welcome. Don't worry if you don't have time right now - I'm subscribed to this thread and you can also "mention" me when you have the time to upload files so we can figure out what is going on here.


For right or wrong I go for 10 or 15 minutes for Luminance and Ha binned 1x1 with my Atik460 on my TEC and Mesu.  I also always do RGB 2x2 for 5 minutes.  I am considering dropping Ha to 2x2 binning. I have a dark and BIAS library for -10C for these parameters refreshed every three months. 

On my widefield rig with the KAF8300 sensor - much noisier - I go for 10 minutes Ha and 5 minutes RGB.  I bin everything at 1x1.  I image at -20C on this sensor because of the noise.

There is no right or wrong answer Geof.  It depends on whether you are under flight paths, what the cloud pattern is like, what resolution you are imaging at and what your guiding is like.  I have only recently had the confidence in my rig to go to 15 minutes on clear nights, 10 mins on nights with more cloud about.  30 mins where I live in the Midlands, with the LP and East Midlands airport not far away, is not possible.

Steve


My feeling is that there is nothing 'borderline' about 0.43"PP. It's oversampled and probably considerably so. I'd be amazed if you weren't in 'empty resolution' territory here. We certainly were at 0.6"PP. Assuming your camera bins cleanly (not all of them do) then I'd bin 2x2 for sure and experiment with 3x3. Your guide RMS in arcsecs needs to be no worse than half your image scale in arcsecs per pixel. Is this the case? Then you have the seeing to worry about. I simply look at the FWHM while focusing and if it's bad I only shoot RGB, Lum to be captured on a more stable night.

I don't worry about saturating stellar cores. I regard it as inevitable if you're going to get enough signal at the faint end. Stars can be rescued later in post processing. Noel's Actions has an 'increase star color' routine which will pull colour down into the cores from the edge. Alternatively you can do a very soft RGB 'stars only' stretch, blur it a little, put it as a layer underneath the RGB hard stretch, and erase the hard stretched stellar cores. (You would do this by making a star selection as per MartinB's tutorial in the processing section here, or use Noel again.) Or you could take a short set of very quick RGBs at short exposure and use it as a layer in the same way. As you'd only be keeping the cores you might get away with just one sub of each since you'll be using only the bits close to saturation anyway, meaning no noise problem.

Olly


6 hours ago, kirkster501 said:

For right or wrong I go for 10 or 15 minutes for Luminance and Ha binned 1x1 with my Atik460 on my TEC and Mesu.  I also always do RGB 2x2 for 5 minutes.  I am considering dropping Ha to 2x2 binning. I have a dark and BIAS library for -10C for these parameters refreshed every three months. 

On my widefield rig with the KAF8300 sensor - much noisier - I go for 10 minutes Ha and 5 minutes RGB.  I bin everything at 1x1.  I image at -20C on this sensor because of the noise.

There is no right or wrong answer Geof.  It depends on whether you are under flight paths, what the cloud pattern is like, what resolution you are imaging at and what your guiding is like.  I have only recently had the confidence in my rig to go to 15 minutes on clear nights, 10 mins on nights with more cloud about.  30 mins where I live in the Midlands, with the LP and East Midlands airport not far away, is not possible.

Steve

Thanks Steve, it is very helpful to have your input. As I found in my discussion with Vlaiv, I was wrong in thinking that more shorter exposures give better SNR than fewer longer ones. I will go back to 10 min or maybe even 15 min exposures, but I'm also very interested to learn more about this from Vlaiv, so will continue the thread with him by providing the dark frames that he requested.

Cheers, Geof


9 hours ago, vlaiv said:

Let's see if we can figure out what is going on (if you wish of course) - could you please post two more subs:

- single unprocessed dark sub directly from camera

- master dark, again as 32bit fits

Hi Vlaiv,

I've now checked into my holiday lodge, but there is no wifi here, so I'm using my phone as a hotspot, which may be a little unreliable. I checked my darks and was very surprised to discover that the master dark was only 8 bit, so I've reprocessed the dark stack as 24 bit. Here is a single dark off the camera and the master dark from 20 dark frames.

QSI-Darks(-10C)_300sec_Bin2x2_frame1.fit

MasterDark.fit

I checked and there are no zero values in the single dark, where the minimum seems to be ~400 ADU. The minimum value for the master dark is ~500 ADU.

I then looked at a single uncalibrated frame and the same frame after calibration.

M57_300sec_2x2_Ha_frame1.fit

C_M57_300sec_2x2_Ha_frame1.fit

The raw file off the camera shows a minimum at ~400 ADU - just 1 or 2 pixels there - rising to ~500 ADU by the point where 1000+ pixels are at that level.

Checking the same image after calibration shows a huge number of pixels with 0 ADU, so the calibration is certainly making the difference. If calibration simply subtracts the dark frame value per pixel this makes sense, as anything in the raw file with ADU <500 will go to 0 if the master dark has a minimum ADU value of ~500.

It makes me wonder how a dark frame can have a higher ADU per pixel than a light frame, even when using an Ha filter, but perhaps I'm still not understanding this well enough. Truth be told, I've never really thought much about this before, so this is definitely a journey of discovery for me, and I very much appreciate your help and look forward to your reply.

Best regards, Geof


3 hours ago, ollypenrice said:

My feeling is that there is nothing 'borderline' about 0.43"PP. It's oversampled and probably considerably so. I'd be amazed if you weren't in 'empty resolution' territory here. We certainly were at 0.6"PP. Assuming your camera bins cleanly (not all of them do) then I'd bin 2x2 for sure and experiment with 3x3. Your guide RMS in arcsecs needs to be no worse than half your image scale in arcsecs per pixel. Is this the case? Then you have the seeing to worry about. I simply look at the FWHM while focusing and if it's bad I only shoot RGB, Lum to be captured on a more stable night.

I don't worry about saturating stellar cores. I regard it as inevitable if you're going to get enough signal at the faint end. Stars can be rescued later in post processing. Noel's Actions has an 'increase star color' routine which will pull colour down into the cores from the edge. Alternatively you can do a very soft RGB 'stars only' stretch, blur it a little, put it as a layer underneath the RGB hard stretch, and erase the hard stretched stellar cores. (You would do this by making a star selection as per MartinB's tutorial in the processing section here, or use Noel again.) Or you could take a short set of very quick RGBs at short exposure and use it as a layer in the same way. As you'd only be keeping the cores you might get away with just one sub of each since you'll be using only the bits close to saturation anyway, meaning no noise problem.

Olly

Hi Olly,

Thanks for your observations. 0.43"/px is precisely why I bin 2x2 with the C14 and, as you suggest, I have considered 3x3, but never used it other than for plate solving, where I actually use 4x4. My guide RMS is typically around 0.5"-0.6" total, with lower values for RA and Dec (typically 0.3"-0.5" each). That is clearly more than half my image scale of 0.85" when binned 2x2, so please could you explain the target of half the image scale? As you will have concluded from my discussion with Vlaiv on this thread, I really understand very little of the science of this hobby 😖. Up until now I've just guessed, but now I'm trying to better understand what I should be doing and why.

Thanks for the explanation of how to improve star colours. I'm not familiar with MartinB's tutorial, so will look for that. I just purchased Noel's tools last week and had a play, including with his 'increase star colour', but it didn't seem to have much effect - maybe I was expecting too much. I like the layer approach in PS that you proposed, so I will give that a try, though I'm something of a novice with PS too....

Many thanks, Geof


36 minutes ago, geoflewis said:

I checked and there are no zero values in the single dark, where the minimum seems to be ~400 ADU. The minimum value for the master dark is ~500 ADU.

Yes indeed, the darks look proper. Here is the histogram of the single dark:

[screenshot: histogram of the single dark]

and here is the one from the master dark:

[screenshot: histogram of the master dark]

Both look as they should - no issues with clipping - which tells us that the offset is good for this camera and is not the source of the many zeros.

Now for light frames:

The sub directly from the camera looks ok; here is its histogram:

[screenshot: histogram of the raw light sub]

But the histogram of calibrated sub does not look good:

[screenshot: histogram of the calibrated sub, clipped at zero]

It shows clipping to the left.

I don't have the master flat, so I can't do a full calibration, but let's just do the dark subtraction, see what the histogram of a single sub should look like, and measure the background ADU value.

[screenshot: histogram of the dark-subtracted sub]

Ok, this is a properly calibrated frame - it has a nice Gaussian curve with no clipping, and there are negative values present. I can now do measurements on it.

The background level now reads 38 ADU - a bit less than before (before, it was skewed by the improper calibration). The properly calibrated sub also looks "cleaner" than the one with the zeros.

There seems to be something wrong with your calibration workflow.

54 minutes ago, geoflewis said:

Checking the same image after calibration shows a huge number of pixels with 0 ADU, so the calibration is certainly making the difference. If calibration simply subtracts the dark frame value per pixel this makes sense, as anything in the raw file with ADU <500 will go to 0 if the master dark has a minimum ADU value of ~500.

You are right about subtracting dark pixel values from light pixel values, but clipping at zero is not the proper way to do it - you can have negative values in the sub, and that is perfectly fine. I'll explain how this happens and why it is ok (as opposed to clipping to 0). A master dark can indeed have higher values than a light sub, and again that is fine.

It has to do with noise and signal level. If something has, for example, a 100e signal level, that means the average value over a very large number of subs approaches 100e. Every single sub will in principle have a different value - some more than 100e, some less - and on average it will be 100e. As you can see, noise goes "both ways" - it produces values both larger and smaller than the actual signal.

A similar thing happens with darks. Darks contain the bias signal (which has read noise) and the dark signal (which has dark noise). Light frames contain all of that plus target and LP signal. In places where there is no target, and where LP is very low - as under dark skies with NB filters - most of the light frame background is only bias + dark (the same as in the master dark). It will include a bit of LP signal, but that being low might not make much difference.

Now remember that the read noise for your camera is 8.7e - that means the pixel value on every sub varies by +/- this value (or rather, that is one sigma, so it can vary by more - in 99.7% of cases it will be within +/-3 sigma, or about +/-26e). The master dark is a stack of dark subs, so each pixel is closer to the average value than in a single sub. For discussion purposes we can say the master dark has the exact average value.

The light sub behaves like any other sub on background pixels (only dark + bias, the same as a dark sub, plus very little LP), so it will have either a higher or lower value than this average. If the value is lower due to noise, and the LP signal is not strong enough to "push" it back over the master dark average, then when you subtract the master dark you get a negative value. A negative value just means "no signal" plus noise that oscillates around 0 - sometimes positive, sometimes negative.

Now let's get back to why clipping at 0 is bad. Imagine you have an average value of 0 and some noise around it. The average of that noise will be - well, 0, which means there must be negative and positive values that balance out. If you take all the negative values and just "declare" them to be 0, you shift the balance: the sum of such numbers must now be higher than 0, and so must their average. You have changed the recorded signal - it is no longer 0 but some higher value. Clipping the histogram changes the recorded signal for every pixel that had negative values (it does not for pixels whose values were all positive, because their average is above 0 anyway). This means you don't just add some offset - you actually alter the image, and any alteration of the image can be considered noise. Worse, this noise is not random, so it is the bad kind of noise.
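The bias from clipping can be shown numerically. A small numpy sketch, assuming a pure-noise background (true signal 0) with the 8.7e read noise figure quoted earlier:

```python
import numpy as np

rng = np.random.default_rng(42)

# Pure-noise background: true signal 0, Gaussian read noise of sigma = 8.7e
pixels = rng.normal(loc=0.0, scale=8.7, size=100_000)
print(pixels.mean())                 # close to 0 -- the noise balances out

# "Declare" every negative value to be 0, as the broken calibration does
clipped = np.clip(pixels, 0, None)
print(clipped.mean())                # ~3.5 -- a systematic, non-random bias
```

The clipped mean settles near sigma/sqrt(2*pi), about 3.5e here, so the fake offset grows with the read noise rather than averaging away.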

The moral of all this: you should look into your calibration workflow and fix things. If you want, I can give you a small "tutorial" on how to properly calibrate your subs in ImageJ (free software for scientific image processing), in case you can't figure out what is wrong with the software you are using and your workflow.

Btw, I'm sure that if you properly calibrate your subs and restack them, you will find the image easier to process and nicer looking than the previous version.

49 minutes ago, geoflewis said:

Hi Olly,

Thanks for your observations. 0.43"/px is precisely why I bin 2x2 with the C14 and as you suggest I have considered 3x3, but never used that other than for plate solving where I actually use 4x4. My guide RMS is typically around 0.5"-0.6" total, with lower values for RA and Dec (typically 0.3"-0.5" each). That is clearly more than half my image scale of 0.85" when binned 2x2, so please could you explain the target of half image scale. As you will have concluded from my discussion with Vlaiv on this thread I really understand very little of the science of this hobby 😖. Up until now I've just guessed, but now I'm trying to better understand what I should be doing and why.

Although Olly mentioned this, I'll give my view of it (which I'm certain agrees with his).

Guide RMS being half the imaging resolution is just a rule of thumb / arbitrarily chosen value that works well - much like the x5 ratio of read noise to LP noise mentioned above. In fact, the same mathematical principle underlies both. Guide errors and seeing are a sort of "noise" and add like any other noise (square root of the sum of squares). That means that when one component gets small enough compared to the others, it simply does not make much of a difference.

Sampling rate is often chosen based on average seeing conditions and the aperture of the scope. Having guide RMS at half the sampling rate (both in arc seconds) ensures the guide error is comparatively small next to the seeing, and therefore won't make much of a difference. Another way to think about it: if guide RMS is less than half a pixel, then the "total" motion of the scope relative to the star stays within that pixel for most of the time (guide RMS just measures displacement, but displacement can go in both the "positive" and "negative" direction, so if the displacement is half a pixel you won't "leave" the pixel if you start at its centre - that sort of reasoning).

This is why the above is a good rule of thumb.
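The quadrature addition is easy to check with illustrative numbers - say 2" FWHM seeing, comparing guiding at half of versus equal to the 0.85"/px image scale:

```python
import math

# Blur sources add in quadrature (square root of the sum of squares).
seeing = 2.0  # illustrative FWHM in arcsec

total_half  = math.sqrt(seeing**2 + 0.425**2)  # guide RMS = half a pixel
total_equal = math.sqrt(seeing**2 + 0.85**2)   # guide RMS = a full pixel

print(round(total_half, 2))    # 2.04 -- guiding adds only ~2% blur
print(round(total_equal, 2))   # 2.17 -- guiding now adds ~9% blur
```

At half the image scale the guiding contribution all but disappears into the seeing, which is the point of the rule of thumb.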

Just for completeness - the same applies as with sub duration. Better guiding is always better than worse guiding (kind of obvious when written down like that :D ) - in the same way that fewer longer subs are always better than more shorter subs (for the same total integration time), for the same reason, and with the same property that at some point the difference becomes too small to matter for all practical purposes.


Vlaiv,

Again, many thanks for your analysis. It's reassuring to know that the raw light, raw dark and master dark look ok, so now I have to understand what went wrong with the calibration, but I have no idea how to do that. The astro processing software that I use (ImagesPlus) provides an auto calibration tool where I load lights, darks, flats, flat darks and bias frames (though actually I don't use bias). I then hit the 'process' button and it runs through its steps automatically, with the output being the calibrated, aligned and combined stack ready for post processing. See screenshot below...

[screenshot: ImagesPlus Auto Image Set Process dialog]

Is it possible that my flats and flat darks are the problem? I use an LED light panel for my flats, and with the Ha filter the flat exposures were a fairly long 90 sec, which I matched for the flat darks. Here are a single flat, the master flat (from 20), a single flat dark and the master flat dark (from 20)....

C14+Optec-Flats(-10C)_2019-09-20_Flat_Ha_90sec_2x2_Ha_frame1.fit

C14+Optec+QSI(-10C)_20Sep2019_Ha_MasterFlat.fit

C14+Optec-FlatDarks(-10C)_2019-09-20_Dark_Ha_90sec_2x2_Ha_frame1.fit

C14+Optec+QSI(-10C)_20Sep2019_Ha_MasterFlatDark.fit

I am interested in your suggestion to use ImageJ (though I'd never heard of it previously) and your tutorial for it. I did just Google it and it looks frightening to me.....

I have also been thinking of switching to PixInsight (PI) or Astro Pixel Processor (APP), as I hear more and more that these are the best astro image processing tools - especially APP for calibration and stacking.

I await your next lesson please 😀.

Geof


There are many nights in the UK where I cannot achieve an RMS of half the imaging resolution (which with the Atik/TEC140 is 0.93 arcsec/pixel).  Friday was such a night and I binned the data from the three hour window as the stars were not controlled enough.  It's the widefield rig only (8 arcsec/pixel) on nights like that.


13 hours ago, geoflewis said:

Is it possible that my flats and flat darks are the problem? I use an LED light panel for my lights and with the Ha filter the flat exposures were a fairly long 90sec, which I matched for the flat darks. Here are a single flat, the master flat (from 20), singe flatdark and master flatdark (from 20)....

I've taken a look at the subs you attached, and they look pretty much ok, so I'm certain they are not to blame for the zeros that appear in the calibrated frame.

I do have a couple of recommendations and one observation.

- Use sigma clip when stacking your darks, flats and flat darks. You are using quite long exposures for all of your calibration frames, so the chance that you will pick up a stray cosmic ray (or rather, any sort of high energy particle) is high. It shows in your calibration masters, and sigma clip stacking is designed to deal with this. Here is an example in the master flat dark:

[screenshot: cosmic ray artifact in the master flat dark]

Now if those were hot pixels (like the surrounding single white pixels), they would show in the same place on the master dark, but they do not (the master dark has a few of these, but at different locations/orientations).

- There are quite a lot of hot pixels that saturate to 100% - these can't be properly calibrated, so you need some sort of "cosmetic correction" for them in your master calibration files. Such pixels will also be removed from the light stack by sigma clip stacking, but you need to dither your light subs (and I'm sure you do).

- I've noticed something strange in your master flat - a sort of linear banding. I'm not sure why it is there, or whether it has a significant impact on the image. Probably not, and it is probably related to the manufacturing of the sensor - the sensor has slightly different QE in that area for some reason. If you have not noticed this pattern in your final images, then it is all fine. Here is what I'm talking about:

[screenshot: banding pattern in the master flat]

It does not look like regular vignetting, as it has "straight" edges rather than round ones, although it is in the corner where you would expect vignetting.

It is probably nothing to be concerned about - I just noticed it because it looks interesting and I have not seen anything like it before.
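The sigma-clip rejection recommended above can be sketched in a few lines of numpy. This is an illustration with synthetic data, not any particular stacking software's algorithm - it uses a robust (MAD-based) sigma so the outlier doesn't inflate its own rejection threshold:

```python
import numpy as np

rng = np.random.default_rng(1)

# Five synthetic dark subs of a 4-pixel region; one sub carries a
# cosmic-ray hit that a plain average would bake into the master
stack = rng.normal(500.0, 9.0, size=(5, 4))
stack[2, 1] += 20000.0                      # cosmic ray on sub 2, pixel 1

plain_mean = stack.mean(axis=0)             # hit raises pixel 1 by ~4000 ADU

# Sigma clip: reject values more than 3 robust sigmas from the median,
# then average the survivors
med = np.median(stack, axis=0)
sigma = 1.4826 * np.median(np.abs(stack - med), axis=0)  # MAD-based sigma
mask = np.abs(stack - med) <= 3.0 * sigma
clip_mean = np.nanmean(np.where(mask, stack, np.nan), axis=0)

print(plain_mean[1])   # ~4500 -- contaminated by the cosmic ray
print(clip_mean[1])    # ~500  -- cosmic ray rejected
```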

Now on ImageJ - I've already written a bit on how to use it for calibration, so maybe that thread would be a good place to start. I wanted to do a thread describing the full workflow in ImageJ, but it did not draw much interest, so it slipped my mind after a while and I did not post as much as I wanted; I believe it does have a part on calibration. Let me see if I can dig it up.

Have a look at that thread - there is a lot written about calibration, and also a plugin included that will do sigma reject (which you will need for your subs).

You can try it out to see what sort of results you get, but in the long term do look at specific software that does this automatically, such as APP.

If you have any trouble following tutorial, please let me know and we can go into specific details.

