
Exposure: Time on Target


BlueAstra


I've noticed a lot of people say how many 'hours' of total exposure they have achieved on a target. I'm still trying to get to grips with photography, but the way I understand it is that the exposure duration of a single sub determines the fundamental faintest detail that can be pulled out of the image by stretching. Adding more subs (more 'hours') will reduce the background noise after stacking, allowing you to stretch the image more easily and try to reach that fundamental faintest detail. However, if your sub exposure length is inadequate, no amount of extra subs, or hours, will bring out more detail than that contained in the single sub.

Have I understood this correctly?

So 20x2m and 10x4m have the same time on target ('hours'), but are you likely to get more detail out of the 4m subs since they would have a longer sub exposure? Is there a general rule that you should aim for the longest single sub exposure time that tracking/LP will allow, since using more subs of a shorter length will never reveal as much detail?

With my mobile setup I generally can achieve 4-5m easily, but recently I've managed to push it up to 7-8m. I've often wondered whether I should be doing 20x4m or 10x8m.


Your understanding is spot on :)

But the answer to the question is, "it depends".

It depends on your camera, its sensitivity and inherent noise, the ambient light, and the ambient temperature.

It depends on your target, and on what filters if any you are using.

As a general rule, the fainter the stuff you want to image, the more useful longer subs will be. But don't do longer subs just because you can; do them because you need to.

Given the choice of 20x4 or 10x8, I'd go with 20x4, depending on the target. Bright clusters, like the Double Cluster in Perseus, might be better with 40x2 or 80x1.

If you branch into narrowband imaging, say with an Hα filter, then it all changes again, and longer exposures really do make a huge difference.

It might be helpful to see what works best for you: one night, do a side-by-side test of various exposure lengths and then see how each affects the final picture.

The other thing, of course, is that the longer the sub, the more likely it gets ruined by aircraft, satellites, cloud etc. Can't tell you how many times I have wanted to shoot down a plane as it flies right over the telescope at 29m:50s of a 30m exposure :s

Cheers

Tim


I think in the real world your understanding is basically correct, though it's not quite as fundamental as that, and as TJ says, 'it depends'...

If you had no extra noise in your imaging system (no read noise/digitization noise/saturation/bleeding/tracking errors/planes/clouds/satellites/electronic glitches/etc), it wouldn't matter whether you had ten thousand 1-second exposures or a single ten-thousand-second exposure: the result would be the same when you added all the short exposures together. The longer exposure *wouldn't* show you fainter details.

In the real world, however, there is a trade-off... Whenever you read out the camera (one sub), you get a contribution from "detector noise" (read noise and digitization noise), which is independent of sub exposure time. What you want to make sure is that your exposures are long enough that the "detector noise" is insignificant compared to the noise you can't control (usually the sky background). Once you reach that point, longer subs won't give you an improvement over adding together more shorter subs; you should stop, read out the chip and get the data onto disc, lest anything untoward happens and you lose the whole frame (e.g. a plane flies over the top of you). For narrowband imaging the sky background is very low, so typically that's where you get the biggest gains from exposing for a long time, because it takes a long time for the sky background to become bright enough to dominate over the detector noise...
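To put some rough numbers on that, here's a minimal Python sketch of the trade-off, assuming a very simple noise model (shot noise from the target and sky, plus a fixed read noise added once per sub); all the rates are made up for illustration:

```python
import numpy as np

# Toy model: a fixed 1 hour total, split into subs of different lengths.
# All numbers are assumptions, not measurements from any real camera.
object_rate = 0.05    # e-/s per pixel from a faint target
sky_rate    = 0.5     # e-/s per pixel from sky background
read_noise  = 7.0     # e- RMS added at each readout
total_time  = 3600.0  # seconds on target

for sub_len in (30, 60, 120, 240, 480):
    n_subs = total_time / sub_len
    signal = object_rate * total_time
    # Shot noise over the whole hour, plus read noise once per sub.
    noise = np.sqrt((object_rate + sky_rate) * total_time
                    + n_subs * read_noise**2)
    print(f"{sub_len:3d}s subs: stacked SNR ~ {signal / noise:.2f}")
```

Once the sky electrons collected in a single sub (sky_rate x sub_len) are comfortably bigger than the read noise squared, the numbers stop improving much, which is the "long enough" point described above.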

So you have to trade off the things that get better with longer exposures:

read noise, digitization noise

against the things that get worse with longer exposures:

saturation, bleeding, tracking errors, unexpected 'mistakes', planes, satellites, clouds, glitches, etc...

There's never a nice simple answer is there :)


However, if your sub exposure length is inadequate no amount of extra subs, or hours, will bring out more detail than that contained in the single sub.

Actually this is not correct. More subs will *always* bring out more detail, even if they are dominated by read noise (it's just that you will need more of them if you are read-noise dominated than you would otherwise).
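As a quick toy illustration of this (all numbers invented), a signal well below the per-sub read noise still climbs out of the noise as you stack more subs; it just takes more of them:

```python
import numpy as np

rng = np.random.default_rng(1)

true_signal = 0.5   # e- per sub from the object (assumed)
read_noise  = 10.0  # e- RMS per sub, dominating everything else (assumed)

for n_subs in (10, 100, 1000, 10000):
    subs = true_signal + rng.normal(0.0, read_noise, size=n_subs)
    stacked = subs.mean()
    uncertainty = read_noise / np.sqrt(n_subs)  # noise left after stacking
    print(f"N={n_subs:5d}: stacked value = {stacked:6.2f} +/- {uncertainty:.2f}")
```

By N=10000 the leftover noise is around 0.1, so the 0.5 signal stands out clearly even though every individual sub was hopelessly read-noise dominated (this assumes the stack is kept in floating point, which matters later in the thread).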

NigelM


Well, I thought I understood, but it's obviously much more complicated than I thought! Thanks for the information. Some comments on the replies:

10000x1s = 1x10000s. If you had a system with no noise, surely 10000x1s would just give you the mean of a 1s sub after stacking, since the stacker s/w doesn't add the image intensities. The 1x10000s would therefore show a 'brighter' image since it has collected more photons. (?)

More subs always give more detail. Again, more subs = less noise in the stacked image, so more detail would be apparent. But if you haven't been collecting enough photons with a longer exposure (or if you have a perfect noiseless system), some detail will never appear because the stacker s/w doesn't add the images. (?)

Sorry if I've got the wrong end of the stick, but it's a steep learning curve!


An analogy that helped me [I think I saw it on an SGL post, but can't remember who posted it]:

  • Imagine having ten bricks and a lawn. You place five bricks in specific locations (signal) and throw the other five randomly onto the lawn (noise). Show someone a plot of the ten bricks and they would not be able to identify which bricks were placed and which were thrown.
  • Do this ten times, and you now have a plot of 55 locations: 50 points where the random bricks fell, and five piles of ten bricks. Seeing this plot, someone would be able to identify where the bricks were 'placed' and where they were 'thrown'.
  • However, suppose you do it a hundred times, but now every tenth time, you place an additional brick in a new location (faint signal). Your plot now contains 500 individual random bricks, five piles of 100 bricks and an additional pile of 10 bricks, which would stand out from the background 'noise'.
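The analogy is easy to simulate if you want to see it in numbers; a rough Python sketch (positions and counts are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

lawn = np.zeros(100)                 # 100 possible brick positions
signal_spots = [10, 30, 50, 70, 90]  # a brick is always placed here
faint_spot = 5                       # gets a brick only every 10th throw

for i in range(100):                 # 100 repeats of the experiment
    lawn[signal_spots] += 1                           # placed bricks
    np.add.at(lawn, rng.integers(0, 100, size=5), 1)  # thrown bricks
    if i % 10 == 0:
        lawn[faint_spot] += 1                         # the faint signal

print("strong spots:", lawn[signal_spots])      # ~105 bricks each
print("faint spot:", lawn[faint_spot])          # ~10 bricks
print("typical background:", np.median(lawn))   # ~5 bricks
```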

HTH


Hmmm. I guess what you say is true if the stacked images are added (summed). However, DSS does not add (sum) the light frames if I understand it correctly. The star/nebula image remains the same brightness regardless of the number of subs. The number of subs only influences (reduces) the background noise.

This is an extract from the DSS help file (which I've just read; I should have looked at it earlier!):

Why combine?

The answer is simple: only to increase the Signal to Noise Ratio (SNR).

Is the resulting image more luminous? No.

Is the resulting image more colorful? No.

The goal of combining many images into one is only to increase the SNR. The resulting images are neither more luminous nor more colorful, but they contain much less noise, which lets you stretch the histogram a lot more and gives you more freedom to bring back colors and details.

Do 100 x 1 minute and 10 x 10 minutes give the same result?

Yes when considering the SNR, but definitely no when considering the final result.

The difference between a 10-minute exposure and a 1-minute exposure is that the SNR in the 10-minute exposure is 3.16 times higher than in the 1-minute exposure.

Thus you will get the same SNR if you combine 10 light frames of 10 minutes or 100 light frames of 1 minute. However, you will probably not have the same signal (the interesting part). Simply put, you will only get a signal if your exposure is long enough to catch some photons on most of the light frames, so that the signal is not considered noise.

For example, for a very faint nebula you might get a few photons every 10 minutes. If you are using 10-minute exposures, you will have captured photons on each of your light frames, and when combined the signal will be strong.

If you are using 1-minute exposures, you will capture photons on only some of your light frames, and when combined the photons will be considered noise since they are not present in most of the light frames.
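Those figures follow from SNR scaling with the square root of exposure time (in the shot-noise-limited case) and with the square root of the number of frames stacked; a quick check with Python:

```python
import math

snr_1min = 1.0                        # reference value, arbitrary units
snr_10min = snr_1min * math.sqrt(10)  # ~3.16x, as quoted above

snr_100x1 = snr_1min * math.sqrt(100)   # 100 x 1 min stacked
snr_10x10 = snr_10min * math.sqrt(10)   # 10 x 10 min stacked
print(round(snr_10min, 2), round(snr_100x1, 2), round(snr_10x10, 2))
# -> 3.16 10.0 10.0 : the same stacked SNR, as the help file says
```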


Hmmm. I guess what you say is true if the stacked images are added (summed). However, DSS does not add (sum) the light frames if I understand it correctly.

It depends on your combining algorithm, but remember that an average is just sum_of_frames/number_of_frames -- so an average is exactly equivalent to summing, except that you introduce rounding (digitization) errors when you divide.

For example, imagine you have a set of 100 frames, and some really faint signal that produces only 1 count in every 10 frames (like the bricks on the lawn). You sum all the data up, and your faint object has produced 10 counts in total. However, if you now divide by the number of frames (to get the average), you're back to 0.1 counts, which, if you round to a 16-bit integer FITS file (as most programmes use), will end up as zero... It's not that the signal wasn't detected; it's just that the rounding in the averaging process didn't have enough numerical accuracy to record it.
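That effect is easy to demonstrate with a couple of lines of Python (same made-up numbers, 1 count in every 10th frame):

```python
import numpy as np

frames = np.zeros(100, dtype=np.int16)
frames[::10] = 1                        # faint source: 1 count every 10 frames

summed = frames.sum()                   # 10 counts in the summed stack
mean_float = frames.mean()              # 0.1 counts if kept as a float
mean_int16 = np.int16(round(frames.mean()))  # 0 when rounded to 16-bit integer

print(summed, mean_float, mean_int16)   # -> 10  0.1  0
```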

The same thing can happen in the detector, even before you start averaging frames together. On a lot of detectors, the 'gain' is set such that you need 2-3 electrons (1 electron == 1 photon) to make 1 ADU in the FITS image. So, if you detect an average of less than 2-3 photons from your object in each exposure, it won't register in the image and you won't detect the object no matter how many exposures you take**. However, that's just an effect of the detector; you can equally well set the detector so that 1 electron (photon) equates to 1 ADU, and then you don't have this problem.
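And the same idea at the detector, as a deliberately simplified sketch (assumed gain of 2.5 e-/ADU; real cameras add offsets and noise that blur this a bit):

```python
gain = 2.5            # electrons per ADU (assumed)
photons_per_sub = 2   # average photons from the object in each exposure

adu = int(photons_per_sub / gain)  # integer ADU written to the image
print(adu)            # -> 0: the object never registers, however many subs
                      # you take (ignoring the high-end photon-noise tail)
```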

However, you will probably not have the same signal (the interesting part). Simply put, you will only get a signal if your exposure is long enough to catch some photons on most of the light frames, so that the signal is not considered noise.

That is a limitation of the chosen combination algorithm; it's not fundamental to the signal.

All these arguments are quite theoretical of course. It's good to understand the limitations, but in 99% of applications, your original understanding is perfectly right.

(** for the data pedants: yes, there is a high end tail to the photon distribution, so you would *eventually* detect the object)

