
Ideal exposure tool



34 minutes ago, Pitch Black Skies said:

What is the definition of optimum exposure time?

In my mind, it is collecting as much signal as possible without blowing out the stars.

That makes me think, why not just take looong exposures, let the stars blow out, and then take some short exposures of just the stars and amalgamate the two.

Star cores don't come into the equation for optimum exposure time, as you can easily take just a few very short exposures at the end and use that data to replace any saturated pixels in the original longer exposures.

Optimum exposure duration is defined by how much additional noise you are willing to accept in your image.

If, for example, you image for an hour, you can split that hour in different ways.

You can say: I'll just take one exposure that is one hour long. Or you might say: I'll do 12 exposures of 5 minutes each. Or perhaps 60 x 1 minute.

What is the main difference between all of these? The level of read noise, and how big it is compared to other noise sources.

If you image 1 x 60 minutes - then your image will have only one dose of read noise, as you used only one sub and it was read from the sensor only once.

If you image 12 x 5 minutes - then your stack will gather 12 doses of read noise (12 reads - each read is an additional dose of read noise).

60 x 1 minute = 60 doses of read noise.

As far as signal and noise go, everything else is the same regardless of how you decide to split the imaging time (there are other considerations, like the amount of data or how likely it is that you'll throw away a sub, but we won't consider those at the moment).

How much additional noise that puts in the final image depends on the other noise sources and how big the read noise is compared to them. As soon as you make read noise sufficiently small compared to any of the other noise sources, you don't need to expose for longer, as any further gains are minimal.
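To make this concrete, here is a quick Python sketch (the camera and sky values are just assumptions for illustration) showing how the read noise "doses" add up for the three ways of splitting an hour:

import math

read_noise = 1.7      # e- per read (assumed camera spec)
sky_rate = 0.5        # e-/s/pixel of sky background (assumed)
total_time = 3600     # one hour of total integration, in seconds

for n_subs in (1, 12, 60):
    sky_noise = math.sqrt(sky_rate * total_time)     # depends only on total time
    read_total = read_noise * math.sqrt(n_subs)      # one dose per sub, added in quadrature
    total = math.sqrt(sky_noise**2 + read_total**2)
    print(f"{n_subs:>2} subs: read noise contributes {read_total:5.2f} e-, total {total:5.2f} e-")

With these assumed numbers the total noise only grows from ~42.5 e- at 1 sub to ~44.4 e- at 60 subs, which is exactly the kind of tradeoff described above.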


How do I integrate short exposures of the stars into the main integration? I tried to do it with M42 before but was unsuccessful. I was using DSS and it gave an error message stating the exposure lengths differed. I've seen a few people mention Affinity Star but haven't got round to checking it out.


8 minutes ago, Pitch Black Skies said:

How do I integrate short exposures of the stars into the main integration? I tried to do it with M42 before but was unsuccessful. I was using DSS and it gave an error message stating the exposure lengths differed. I've seen a few people mention Affinity Star but haven't got round to checking it out.

I would do it in software like ImageJ with a bit of pixel math.

It is as simple as loading both images and running the image expression parser / macro. You enter a single-line expression and it combines the two images.

(The actual expression depends on the data and saturation point, but if you post both stacks, I'll give you the expression and explain how I wrote it.)


1 hour ago, Pitch Black Skies said:

How do I integrate short exposures of the stars into the main integration? I tried to do it with M42 before but was unsuccessful. I was using DSS and it gave an error message stating the exposure lengths differed. I've seen a few people mention Affinity Star but haven't got round to checking it out.

If you have one of the newer low-noise, high full-well capacity cameras you will probably find that this trick is not necessary (did you have the 533?). It's actually quite difficult to saturate stars to a point where they look bothersome in the stacked image, and the double exposure trick is probably not worth the effort. Some stars will always saturate in an image, but these will look out of place if you try to un-saturate them with the double exposure trick. Anything but the brightest stars will probably not saturate if you expose to the 3x read noise swamping criterion.

But an easy way to combine the two would be with layers in Photoshop. Stack both datasets separately and then combine them later in processing; just do the early processing the same way for both stacks (colour calibration mainly, so that the stars and the main image have the same colours). You can use layer masks and other easy trickery to combine the two images the way you like, for instance only blend in the brightest, very saturated stars and leave the medium brightness stars as they are.


9 minutes ago, Pitch Black Skies said:

Thanks,

M42 18 min 30 s, calibrated.FTS (30 second exposures)

1 hr 6 min M42, fully calibrated.FTS (120 second exposures)

Ok, so the first thing to do is to register both images against the same sub - they need to be aligned.

Then load both images in ImageJ and do the following:

1. Convert both images to 32-bit precision and split them into channels (do the following for each channel separately).

2. Multiply the short image by 4 (Process / Math / Multiply) - we use 4 as that is the ratio of exposure lengths (120s / 30s = 4).

3. Duplicate the long exposure image and run the following macro on it (Process / Math / Macro):

if (v>55000) v=0; else v=1;

This will turn it into a "mask": where it is zero, the short exposure should be used; where it is 1, the long exposure should be used. We use ~55000 as the max is around 65000 and we want to replace all pixels that are over ~90% of the max value (just to be sure - interpolation when aligning subs can create "transition" pixels that are below max but still need to be replaced).

4. Use Process / Image Expression Parser (macro) and enter the following:

[attached screenshot of the expression - in effect: long * mask + short * (1 - mask)]

which simply means: take the long exposure masked, and add it to the short exposure inverse-masked.

You can always perform the above operation - just mind that you need to choose the parameters for each case:

1. The multiplication factor for the shorter stack needs to be the ratio of exposures.

2. The mask needs to be created at about 90% or so of the max pixel value in the long exposure.

(And of course, the images need to be aligned the same - you can use ImageJ to align images as well, but that is another plugin and a bit more complicated a procedure.)
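If you'd rather do this outside ImageJ, here is a minimal Python/numpy sketch of the same procedure, assuming both stacks are already registered and saved as single-channel FITS files (the filenames are placeholders):

import numpy as np
from astropy.io import fits

long_img = fits.getdata("long_stack.fits").astype(np.float64)
short_img = fits.getdata("short_stack.fits").astype(np.float64)

exposure_ratio = 120 / 30        # ratio of sub lengths, as in step 2
threshold = 0.9 * 65535          # replace pixels above ~90% of the 16-bit max, as in step 3

short_scaled = short_img * exposure_ratio            # bring the short stack to the long stack's scale
mask = (long_img <= threshold).astype(np.float64)    # 1 = keep long, 0 = saturated, use short

combined = long_img * mask + short_scaled * (1.0 - mask)
fits.writeto("combined.fits", combined, overwrite=True)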

 


25 minutes ago, Pitch Black Skies said:

I don't know how to do this.

I haven't used DSS in quite a while, but if I remember correctly, here is what you can do:

You will need to stack both stacks again.

Start with the long stack, but manually select (or just remember which one was used) the reference frame against which all the others are registered. If I remember correctly, it is the sub in the list marked with a star, and you can use space to mark any other sub as the reference frame.

Do the rest of the stack like you normally would.

Now, when stacking the short stack, load all the short subs, but also load that long reference sub.

Mark it as the reference sub (marked with a star) - but uncheck it when you come to the stacking phase - don't include it in the stack.

The resulting stack should be aligned with the first stack after that.


5 hours ago, pete_l said:

Practice beats theory every single time.

Professionals use exposure time calculators based on theory. The equations are not that hard to compute and give a better estimate than just guessing or practice. 


On 14/06/2022 at 20:11, vlaiv said:

Offset is not important for sub exposure length. Use the gain setting that you will be using for imaging.

If you want to determine the best tradeoff for sub length, here are some guidelines:

1. How much data do you want to stack and process? Shorter subs mean more data. Some algorithms like more data, others like good SNR per sub.

2. How likely is it that you'll get a ruined sub (for whatever reason - wind, earthquake, an airplane flying through the FOV - whatever makes you discard the whole sub; satellite trails can be easily dealt with in stacking if you use some sort of sigma reject)? Longer discarded subs mean more imaging time wasted.

3. Differences in setup - in general you'll have a different sub length for each filter, but sometimes you will want to keep a single exposure length over a range of filters (like the same exposure for LRGB and the same for NB filters) as this simplifies calibration - only one set of darks instead of darks for each filter.

4. What is the increase in noise that you are prepared to tolerate?

 

The only difference between many short subs and few long subs (including one long sub lasting the whole imaging time) - all totaling the same imaging time - is in read noise. More specifically, the difference comes down to how small the read noise is compared to the other noise sources in the system.

When using cooled cameras and shooting faint targets, LP noise is by far the most dominant noise source; that is why we base the decision on it, but it does not have to be that way (another thing to consider when calculating). If you have very dark skies and use NB filters, it can turn out that thermal noise is the largest component, so the calculation should be carried out against it instead.

In fact, you want the "sum" of all time-dependent noise sources (target shot noise, LP noise and dark current or thermal noise - all of which depend on exposure length) to compare against the read noise.

Read noise is the only time-independent type.

Noises add like linearly independent vectors - as the square root of the sum of squares. This is the important bit, because it means the total increase is small if the components differ significantly in magnitude. Here is an example:

Let's calculate the percentage increase if we have LP noise that is the same as, twice as large as, 3 times as large as, and 5 times as large as the read noise.

"sum" of noises will be sqrt( read_noise^2 + lp_noise^2) so we have following:

1. sqrt(read_noise^2 + (1 x read_noise)^2) = sqrt(2 * read_noise^2) = read_noise * sqrt(2) = read_noise * 1.4142..., or a 41.42% increase in total noise due to read noise

2. sqrt(read_noise^2 + (2 x read_noise)^2) = sqrt(5 * read_noise^2) = read_noise * sqrt(5) = read_noise * 2.23607 = (2 * read_noise) * (2.23607 / 2) = (2 * read_noise) * 1.118, or an 11.8% increase (over LP noise, which is 2 * read_noise in this case)

3. sqrt(read_noise^2 + (3 x read_noise)^2) = sqrt(10 * read_noise^2) = read_noise * sqrt(10) = read_noise * 3.162278 = (3 * read_noise) * 1.054093, or a 5.4% increase over LP noise alone (which is 3 * read_noise here)

4. sqrt(read_noise^2 + (5 x read_noise)^2) = sqrt(26 * read_noise^2) = read_noise * 5.09902 = (5 * read_noise) * 1.0198, or a 1.98% increase over LP noise alone (which is 5 * read_noise here)

From this you can see that if you opt for read noise x3 smaller than LP noise, it will be the same as having only 5.4% larger LP noise and no read noise, and if you select x5 smaller read noise, it will be like you increased LP noise by only 1.98% (and no read noise).

Most people choose either x3 or x5, but you can choose any multiplier you want, depending on how much you want it to impact the final result. The thing is, as you increase the multiplier the gains get progressively smaller, so there is really not much point going above ~x5 (the short sketch below reproduces these numbers).
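Here is a tiny Python sketch that reproduces those percentages and shows the diminishing returns past x5:

import math

for k in (1, 2, 3, 5, 10):
    # sqrt(read_noise^2 + (k * read_noise)^2) / (k * read_noise)
    increase = math.sqrt(1 + k**2) / k
    print(f"LP noise {k}x read noise -> {100 * (increase - 1):.2f}% extra noise")

The x10 case adds only ~0.5%, so chasing multipliers beyond x5 buys almost nothing.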

Ok, but how do you measure it?

That is fairly easy - take any of your calibrated subs and convert it to electrons using the e/ADU for your camera. A CCD will have a fixed system gain, while on CMOS it will depend on the selected gain setting. Pay attention when using CMOS cameras if your camera has a lower bit count than 16 bits. In that case you need to additionally divide by 2^(16 - number_of_bits) - that is, divide by 4 for a 14-bit camera, by 16 for a 12-bit camera and by 64 for a 10-bit camera.

When you have prepared your sub, select empty background and measure the mean, or better still the median, electron value on it (the median is better in case you select an odd star or a very faint object that you don't notice). This will give you the background value in electrons.

The square root of this value is your LP noise. You need to increase the exposure until this LP noise is your chosen factor times larger than the read noise of your camera.

Alternatively, if you want to get the exposure from a single frame: take your read noise, multiply it by your selected factor and square it - this gives you the "target" LP level. You then need to expose "target" / "measured" times longer (or shorter, depending on the number you get).
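As a quick Python sketch of that single-frame calculation (every camera value here is an assumption - substitute your own measurements):

read_noise = 1.7       # e-, from the camera spec at your gain (assumed)
swamp_factor = 5       # chosen multiplier
e_per_adu = 0.25       # e-/ADU at your gain (assumed)
bit_depth = 12         # ADC bits; sub values are scaled up to 16 bit
median_adu = 4800      # measured median of empty background in the sub (assumed)
sub_length = 60        # current sub length in seconds (assumed)

measured_bg = median_adu / 2**(16 - bit_depth) * e_per_adu    # background in e-
target_bg = (read_noise * swamp_factor)**2                    # "target" LP level in e-

print(f"suggested exposure: {sub_length * target_bg / measured_bg:.0f} s")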

Makes sense?

 

Is it possible to use this calculation for an uncalibrated sub? Or is there an alternative method for uncalibrated subs, so you can easily and quickly determine if you're swamping the RN while out in the field?


1 hour ago, Adam1234 said:

Is it possible to use this calculation for an uncalibrated sub? Or is there an alternative method for uncalibrated subs, so you can easily and quickly determine if you're swamping the RN while out in the field?

You can do it on an uncalibrated sub, but you need to know the mean dark value.

1. Read the mean background value in ADU

2. Subtract the mean dark current value in ADU

3. Convert to electrons ...

The rest is the same (it works in "reverse" as well: (read_noise * 5)^2 = exposure_factor * (mean_background_adu - mean_dark_adu) * e/ADU).
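A minimal sketch of that reverse method, with assumed numbers:

read_noise = 1.7            # e- (assumed)
mean_background_adu = 620   # mean background of the raw light, in native ADU (assumed)
mean_dark_adu = 320         # mean of a matching dark, in native ADU (assumed)
e_per_adu = 0.25            # e-/ADU at this gain (assumed)

# (read_noise * 5)^2 = exposure_factor * (background - dark) * e/ADU, solved for the factor:
exposure_factor = (read_noise * 5)**2 / ((mean_background_adu - mean_dark_adu) * e_per_adu)
print(f"expose {exposure_factor:.2f}x your current sub length")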


23 minutes ago, vlaiv said:

You can do it on an uncalibrated sub, but you need to know the mean dark value.

1. Read the mean background value in ADU

2. Subtract the mean dark current value in ADU

3. Convert to electrons ...

The rest is the same (it works in "reverse" as well: (read_noise * 5)^2 = exposure_factor * (mean_background_adu - mean_dark_adu) * e/ADU).

Is the dark current a value you need to measure, or can it be taken from the graph of dark current vs sensor temp displayed in the camera specs? For example, for the ASI1600MM Pro at -20°C the graph gives 0.0062 e/s/pix.

If the former, how do you measure it? Or if the latter, how do you convert it to ADU?


56 minutes ago, Adam1234 said:

Is the dark current a value you need to measure, or can it be taken from the graph of dark current vs sensor temp displayed in the camera specs? For example, for the ASI1600MM Pro at -20°C the graph gives 0.0062 e/s/pix.

If the former, how do you measure it? Or if the latter, how do you convert it to ADU?

Best to measure it, as you want darks that include any bias offset.

Either take darks that you already have (from a previous session or ones you prepared for this session) - or take a single dark in the field matching the light you want to try out.

Measure the mean ADU of that dark sub and use that value.

Btw - electron to ADU conversion is simple - there is a published e/ADU for the gain you are using (you can read it off the graph).

Divide by that value to convert from electrons to ADU and multiply by that value to convert from ADU to electrons.

Just be careful if your camera has a lower bit count than 16 bits. In that case there is an additional step between sub values and ADUs (although we say ADU for values measured directly from the sub). If the camera has fewer bits (like 12-bit), the sub values are actually multiplied by 2^(16 - bit_count).

In the case of the ASI1600 - since it is a 12-bit camera - all sub values are multiplied by 16 (2^(16-12) = 2^4 = 16). If your camera is 14-bit, this multiplier is 4 (2^2).
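Put together, the measured-value-to-electrons conversion looks something like this (the gain value is an assumption for illustration):

def sub_value_to_electrons(value, e_per_adu, bit_depth):
    native_adu = value / 2**(16 - bit_depth)    # undo the 16-bit scaling
    return native_adu * e_per_adu               # ADU -> electrons

# ASI1600 (12-bit) at an assumed 1.0 e/ADU: a sub value of 4800 is 300 e-
print(sub_value_to_electrons(4800, 1.0, 12))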


12 minutes ago, vlaiv said:

Best to measure it, as you want darks that include any bias offset.

Either take darks that you already have (from a previous session or ones you prepared for this session) - or take a single dark in the field matching the light you want to try out.

Measure the mean ADU of that dark sub and use that value.

Btw - electron to ADU conversion is simple - there is a published e/ADU for the gain you are using (you can read it off the graph).

Divide by that value to convert from electrons to ADU and multiply by that value to convert from ADU to electrons.

Just be careful if your camera has a lower bit count than 16 bits. In that case there is an additional step between sub values and ADUs (although we say ADU for values measured directly from the sub). If the camera has fewer bits (like 12-bit), the sub values are actually multiplied by 2^(16 - bit_count).

In the case of the ASI1600 - since it is a 12-bit camera - all sub values are multiplied by 16 (2^(16-12) = 2^4 = 16). If your camera is 14-bit, this multiplier is 4 (2^2).

That's great, thank you!


Here's my sky-swamping exposure Excel chart for the ASI1600, which I'd posted before, but now modified to include vlaiv's criticisms of the original formula that was posted on the CN forum. It uses the bias ADU value instead of manipulating the offset with camera bit depth and the change in ADU per offset increment. The 10xRN^2 CN formula actually swamps the read noise by √10 (as vlaiv pointed out), which is x3.16, so I've included it as a swamping figure in the chart. These values use offset 50 and my bias reading. If you want to download the Excel chart to enter your own values (in the editable green cells) and also use the handy exposure calculator from a sample ADU and time, here it is. 🙂

ASI1600, Sky Background ADU v2 Prot.xlsx

[attached chart: sky background target ADU vs gain for the ASI1600]

If @vlaiv would be kind enough to check it's valid, here's the unprotected Excel chart so he can check the formulas. ☺️

ASI1600, Sky Background ADU v2.xlsx

The graphs actually reflect the values entered in the table, so they're a double check that you've read the values correctly from the ZWO camera data graphs.

The spreadsheet will work for any camera if you have the read noise and the e-/ADU gain values available. The ZWO offset and gain values entered are for information only and aren't used in the calculations.
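In code form, the target ADU the sheet aims for boils down to something like this (a rough Python sketch of the calculation; the read noise, gain and bias numbers are examples to be replaced with your camera's values):

read_noise = 1.7     # e- at the chosen gain (example)
e_per_adu = 1.0      # e-/ADU at the chosen gain (example)
bias_adu = 800       # measured mean bias / short-dark value, 16-bit scaled (example)
bit_depth = 12       # the ASI1600 is a 12-bit camera

for factor in (3, 10**0.5, 5):                # x3, the CN formula's sqrt(10), x5
    target_e = (factor * read_noise)**2       # required sky background in e-
    target_adu = target_e / e_per_adu * 2**(16 - bit_depth) + bias_adu
    print(f"swamp x{factor:.2f}: expose until the background reaches ~{target_adu:.0f} ADU")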

If vlaiv gives it the OK I can make a spreadsheet for any camera that anyone is interested in. 🙂

Alan


18 hours ago, symmetal said:

If @vlaiv would be kind enough to check it's valid, here's the unprotected Excel chart so he can check the formulas. ☺️

ASI1600, Sky Background ADU v2.xlsx

The graphs actually reflect the values entered in the table, so they're a double check that you've read the values correctly from the ZWO camera data graphs.

The spreadsheet will work for any camera if you have the read noise and the e-/ADU gain values available. The ZWO offset and gain values entered are for information only and aren't used in the calculations.

If vlaiv gives it the OK I can make a spreadsheet for any camera that anyone is interested in. 🙂

I had a look and yes - it seems very well made.

All formulae are ok.

It would be good to have a general calculator, but I'm afraid that read noise and e/ADU gain are not as easy to present in table form for many cameras (maybe just two input fields instead?).

On a side note - ZWO uses a 0.1 dB system for their gain.

If we know that 139 is unity gain, then it is easy to calculate the e/ADU value for any other gain setting.

For example, let's calculate e/ADU for a gain of 250 (it is present in the above spreadsheet).

250 - 139 = 111 (in units of 0.1 dB) = 11.1 dB

Ratio = 10^(11.1/20) = 10^0.555 = ~3.59

The actual e/ADU is then 1 / 3.59 = ~0.2786

(not quite the 0.25 that is in the spreadsheet, but not far from it).
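Or, as a small Python sketch of that conversion:

def zwo_e_per_adu(gain_setting, unity_gain=139):
    db = (gain_setting - unity_gain) / 10     # ZWO gain units are 0.1 dB
    return 1.0 / 10**(db / 20)                # voltage ratio -> e/ADU

print(zwo_e_per_adu(250))    # ~0.2786, as above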


1 hour ago, vlaiv said:

I had a look and yes - it seems very well made.

All formulae are ok.

It would be good to have a general calculator, but I'm afraid that read noise and e/ADU gain are not as easy to present in table form for many cameras (maybe just two input fields instead?).

Thanks for checking the results vlaiv. 😊 I know the 1600 ZWO gain values for multiples of unity aren't exact, but they were the values used by ZWO initially so I kept them here. As 60 ZWO gain units is 6 dB, which is 2x voltage gain, half unity gain would be 139 - 60 = 79 rather than the 76 ZWO quoted. Likewise 2x unity gain should be 199 rather than 200. I can redo it with the actual ZWO gain settings needed but, as you say, they are close enough. 🙂

4x unity is 12 dB of gain, so ZWO gain 250 should really be 139 + 120 = 259.

For the more modern cameras only one or two gain settings are actually used, so the full table is not necessary, but as ZWO provided the data I thought I'd use it anyway. For CCD cameras there would be only one table entry, so the graphs are then redundant and you'd have a small spreadsheet. 😀

For the 1600, a 2-second dark is probably better to use than the bias ADU, as it's more representative of the 'bias' in longer exposures. They differ by about 4 ADU, while for later cameras there is no significant difference between bias and short darks.

Alan

