
Different exposure times vs same exposure times



I am just learning the basics of astrophotography and have been experimenting, but I've noticed some people take a series of identical exposures, with matching calibration frames. For example:

100 x 150-second shots

Other people take subs of several different lengths. For example:

20 x 30 seconds

10 x 300 seconds

4 x 500 seconds

I was wondering if anyone had any insight into which method is better, or at least the whys behind using multiple exposure lengths.


If you're just starting out, I'd go along with Martin. This is a tough enough learning curve without adding extra complexities. Once you understand all the processes involved and feel comfortable with what you're doing, you'll also probably find that you become less happy with the results, and want to improve. Then you can look at taking subs of different lengths and merging them to produce an improved image. 

When you're combining, you should have, ideally, matched calibration frames for each set of lights, so that means multiple sets of darks etc.



Nice question Vertigo.

To answer the question I will take you on a short ride (sans mathematics!) through the theory of what taking longer exposures does and why one stacks images. To ask about the merit of the individual methods is to miss the point of what one is trying to do, i.e. make a low-noise image.

It is all about an animal called signal-to-noise ratio (SNR). One can never remove noise: once it is there, it stays there! What we do is try to find the best estimate of the signal (our images) in the presence of noise. Noise in this context means random noise: not systematic noise, not added light such as light pollution, not biases, etc.

Now, the higher your final SNR, the better the image will look.

The quality of the final image is determined by the SNRin (the signal-to-noise ratio of the image signal collected, via the optics, in the wells of the CCD). The SNRin is degraded further by noise generated by the input amplifiers, by uncertainty in measuring the voltage (counting the electrons), by read-out noise, etc. So we have two sources of noise:

1. Photon noise in the signal

2. Noise in the system.

1. The nature of photon noise is such that the SNRin varies as the square root of the number of photons collected (the photons do not all arrive together). So by collecting more photons one gets a better SNRin. If you make the exposure 4x longer, the input SNR improves by a factor of 2. This SNRin determines the final quality of your image. Notice that there is a point of diminishing returns: a 16x exposure gives a 4x improvement in SNRin, but a 25x exposure gives only a 5x improvement. Is it worth going much longer? Compare a 16-minute exposure with a 25-minute one.
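The square-root relationship can be checked with a toy calculation (the photon counts below are invented purely for illustration):

```python
import math

def shot_noise_snr(photons: float) -> float:
    """Photon-limited SNR: signal N over shot noise sqrt(N), which equals sqrt(N)."""
    return photons / math.sqrt(photons)

base = shot_noise_snr(1000)                      # a notional 1x exposure
print(round(shot_noise_snr(4000) / base, 2))     # 4x exposure  -> 2.0x SNR
print(round(shot_noise_snr(16000) / base, 2))    # 16x exposure -> 4.0x SNR
print(round(shot_noise_snr(25000) / base, 2))    # 25x exposure -> 5.0x SNR
```

Going from 16x to 25x buys only one extra SNR "step" for nine more units of exposure time: the diminishing returns mentioned above.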

2. The random noise in the system is the next issue. This noise (k·T·Bw, thermal noise) can be reduced by cooling, but it is still there! Now, one cannot simply subtract this noise, as one does not know what its value is. But it is a random process (the term is a stationary, stochastic process) and statistics says that its expected value is 0 (zero). What this means is that if this noise is added to a signal, and we capture many instances of that signal plus noise and average them, the noise will tend down towards zero. This works because the signal stays the same while the noise properties do not change, so its average approaches the expected value of zero. We have just described stacking.

So we see that stacking will improve on the original SNRin.
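The averaging argument can be simulated in a few lines (the signal level and noise figure are made up for the demo):

```python
import random
import statistics

random.seed(1)

TRUE_SIGNAL = 100.0   # hypothetical pixel value, invented for the demo
NOISE_SIGMA = 10.0    # zero-mean random noise per sub, also invented
N_SUBS = 100

# Each sub records the same signal plus a fresh sample of random noise.
subs = [TRUE_SIGNAL + random.gauss(0.0, NOISE_SIGMA) for _ in range(N_SUBS)]

# Averaging the subs drives the noise term towards its expected value of zero,
# so the stack's error shrinks roughly as sqrt(N_SUBS) -- about 10x here.
stacked = statistics.mean(subs)
print(f"stacked estimate: {stacked:.2f} (true value {TRUE_SIGNAL})")
```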

Hence the excellent image of Mr V.I.P

Jeremy.


OK, that helps a little.

I will research SNR so that things make more sense.

Tonight I went out and, with a huge amount of light pollution, attempted my first lights, darks and biases, with no flats as I am waiting for a small softbox. These were my horrible results:

40 x 60-second lights

15 x 60-second darks

25 bias frames at 1/800s, same ISO

Not quite as nice as yours.

The 2nd picture, though, is from a much better area with a lot less light pollution, and it is a single exposure, nothing more.

Any advice on what I am doing wrong would be helpful. Clearly the light pollution and humidity were major factors, but I'm still not getting the spirals.

post-38374-0-12227700-1413447811_thumb.j

post-38374-0-70262200-1413447869_thumb.j


The occasions on which you need different exposure times are few and far between. It is widely asserted on the net that you need them for all sorts of targets but I simply don't find this to be the case. I use very long exposures because I have a dark site and on only a handful of occasions have I used short exposures for bright parts of a target. Fewer than five in hundreds of images.

How can you tell if you need short subs for bright parts? Easy; look at the unstretched (linear) image and see what is visible. Check out this demo which I often use here;

Short%20subs%20fallacy-L.jpg

We look at the image on the left and say that the three bright belt stars, Alnitak, Alnilam and Mintaka, are blown out, so we need some short subs, right? No, this is wrong.

When we look at the linear image in the middle we see that the stars are not blown out. The data is already there. The trick is to stretch the image more cleverly, though it isn't all that clever in reality! There are many ways to do this, but masking and the use of layers are the usual ones.

The carefully stretched data gave the image on the right. Now look at the Flame nebula as it rises past Alnitak on the left. You can make it out quite easily. But had I tried to control Alnitak by using short subs I very much doubt that this data would have been in there, and this is often the problem with short subs.
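The idea that unsaturated linear data only needs a smarter stretch can be illustrated with a toy arcsinh stretch (the pixel values below are invented; real processing uses masks and layers, but the principle is the same):

```python
import numpy as np

def asinh_stretch(img: np.ndarray, strength: float = 50.0) -> np.ndarray:
    """Nonlinear stretch: lifts faint data hard while compressing the highlights."""
    return np.arcsinh(img * strength) / np.arcsinh(strength)

# Linear pixel values normalised to 0..1: faint nebulosity and a bright star.
linear = np.array([0.001, 0.01, 0.9])   # the star is bright but NOT saturated

stretched = asinh_stretch(linear)
# The faint pixels are lifted roughly tenfold while the star stays below 1.0,
# so its detail survives the stretch -- no short subs required.
```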

M31 is a marginal case. My main luminance came from 30-minute subs. I took some shorter ones in a bigger scope for the core and they helped a tiny bit, but I had much of the core data in the long ones, surprisingly.

http://ollypenrice.smugmug.com/Other/Best-of-Les-Granges/i-xbvjFDF/0/X3/M31%20Outer%20HaloLHE-X3.jpg

But one target that does need short subs is rising at the moment: M42 absolutely does need short exposures. I use three sets: 10 seconds, 50 seconds, and then as long as possible. You can combine them like this: http://www.astropix.com/HTML/J_DIGIT/LAYMASK.HTM. Jerry Lodriguss's tutorial is first class and is my favourite way of doing it.

M42%20CROP%20web-M.jpg

Olly


Olly, your pictures are very impressive.

So I will keep doing what I have been doing, although it appears I am doing something wrong. I didn't get a chance the other night to shoot Orion; a fog bank came in after the first shot. But it seems I can capture Orion fairly well.

When I try things like the Heart, Bubble, Andromeda or Veil, I get close to nothing.

Any advice on this? Or is it something I am not doing in processing?

post-38374-0-35480600-1413450387_thumb.j


OK, so for example, this is tonight's Andromeda, stretched. Of course, I'm winging it without really studying it too much, but I'm missing all the data yours has. I am assuming this is because of the length of your exposures as opposed to my simple 60-second ones.

post-38374-0-87104100-1413452538_thumb.j


Yes. I just had a lot more data, over 40 hours, and no LP to cut down my exposure times.

You are also red-dominated. Try to line up the top left of the histogram peak in each colour channel by moving the black point (left-hand) slider to the right, pulling the peak to the left in the channels that need it.

levels%20aligning-L.jpg
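The same per-channel black-point shift can be sketched in code (a rough approximation that uses a low percentile as each channel's black point; the percentile choice is arbitrary and Photoshop's Levels does this by hand):

```python
import numpy as np

def align_black_points(img: np.ndarray) -> np.ndarray:
    """Shift each channel's histogram so its black point (the faintest
    background level) sits at zero, then clip -- a stand-in for moving
    the Levels black slider per channel."""
    out = img.astype(np.float64).copy()
    for c in range(out.shape[2]):
        # Estimate the channel's black point from a low percentile.
        black = np.percentile(out[:, :, c], 1)
        out[:, :, c] = np.clip(out[:, :, c] - black, 0, None)
    return out
```

Running this on a red-dominated frame pulls the red histogram peak back into line with green and blue.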

Olly


I converted my Canon 7D to full spectrum, and it seems the humidity makes the picture red. I'm not sure if this is what is supposed to happen, although the night before, Orion didn't have any red in it, so I'm guessing it's the water particles in the air doing it.


Pity about the light pollution (LP). In your case, before you do anything else you should attempt to remove the light gradient due to the LP.

You need to estimate the light gradient. A cheap and nasty method, which works if the gradient is not severe, uses the layers feature in Photoshop together with the eyedropper and gradient tools.

From the picture you posted, here is a version with the LP reduced (somewhat):

post-37798-0-08043900-1413453622.jpg

There are more sophisticated methods of estimating the LP gradient.
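One of the simplest such methods is a least-squares plane fit, sketched below (it assumes the LP gradient is roughly linear across the frame; real tools fit more flexible surfaces):

```python
import numpy as np

def remove_plane_gradient(img: np.ndarray) -> np.ndarray:
    """Fit a plane a*x + b*y + c to the image (a crude model of a linear
    LP gradient) and subtract it, keeping the mean brightness."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    A = np.column_stack([xx.ravel(), yy.ravel(), np.ones(h * w)])
    coeffs, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
    plane = (A @ coeffs).reshape(h, w)
    return img - plane + plane.mean()
```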

Jeremy.



I think most users find that the filter replacement gives a strong red bias in modified DSLRs. The red channel needs to be pulled back to reduce this effect.

There is a good Photoshop plug-in for gradients: Gradient Xterminator by Russell Croman. Best of all is Dynamic Background Extraction in PixInsight.

Olly


I don't know what I was thinking. I was out the other night in a spot I found, which is rare around where I am: by the ocean, in the mountains, with dark skies. But the weather is sporadic. I was getting great shots and then the clouds rolled in.

I went home to my original learning spot, where you can't see much and Orion was glowing with the half moon up. I took one frame which was right on, and then another fog bank came in, ruining it right as I started the sequence.

Today I really wanted to get my Orion shot, so I went to the same spot early and started taking some test pics, and everything was horrible. I left before Orion came up because the conditions were awful.

Truth be told, I have no clue why I went there, except that the weather reports show horrible weather for weeks and I wanted the shot. I should have known better than to spend the energy on a location I knew was bad, surrounded by city lights.

But I need all the practice I can get, because I'm still trying to figure things out. And I think we all know how it feels to get the good shot and create something of beauty; that's what drives most of us.

Still, I should have known better, being in the middle of a city where the light pollution is rated, I think, 6. I just hate waiting for good weather. It kills me that I didn't get the Orion sequence.


I will take a look at Gradient Xterminator. I don't exactly know what you mean by gradient, as I've been using the colour balance in Photoshop to adjust the colours. I will research this.

A gradient is a common artefact which appears as a gradual change in brightness from one side or corner of an image to another. It shows as a change in brightness or as a gradual drift in colour, say from too red to too green. It has nothing to do with the real data captured. Gradients afflict astrophotos because we stretch them so hard: in dragging out the faint details we drag out the gradient as well.

Olly

Edit: How far are you from the Central Valley or the Mojave desert? The Mojave would be great; I saw fantastic skies from there. Cycling down the 101 from Seattle to San Francisco, we found the coast permanently cloudy and inland permanently roasting and clear.


I'm about 4-5 hours away; I'm in the San Fernando Valley. It would make a nice vacation to get some shots. To learn, or for quick shoots, I go out by Malibu in the hills. To the south it's all ocean, and the north is hidden by the mountains. The downside is the weather and moisture.

But it's the best I've seen around here. Maybe there's some closer desert, but I can get away with a 40-minute drive, or go out locally, and you saw those images. I might be able to find some good elevated spots if I Google Earth some elevation around here, but there isn't much.


There's a very straightforward tutorial on Youtube which gives a great introduction to combining, then stretching and colour correcting stacked shots of different exposures in photoshop:

It's by Dave Rankin and the link is

It's not for advanced imagers, but if, like me, you're still on the steep learning curve, it's easy to understand and produces a good result. 

Regards

StevieO



Wow, the things one learns.

Thank you, Olly, for pointing us to that plug-in. I have installed a trial version in my CS3 Extended and without any effort produced:

post-37798-0-36130100-1413497256_thumb.j

After a bit more testing it looks as if I'll be purchasing.

Jeremy



Hi,

Without making things too complicated: provided you are not imaging from a light-polluted zone, the longer the sub, the higher the recorded signal. At times, because of LP or skyglow, it may be preferable to take a large number of shorter subs so the stacking software has a chance of building up the signal above the read-noise level. You need to experiment with your equipment to find the minimum exposure length below which the signal is buried in the noise.

In general, assuming a dark location, a relatively large number of long exposures will give you the best results. For narrowband imaging, the longer the sub the better the signal; with OSC and RGB the sub length may have to be reduced. Some targets in the sky really do need very long subs and integration times. Have a look at the latest image of OU4 (the Squid) by Boren in the Deep Sky section of the forum: you could take 100 subs of 120s on this target without much showing.
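The sub-length trade-off can be put into a toy formula: shot noise depends only on the total integration, while read noise is paid once per sub. The signal rate and read-noise figures below are invented purely for illustration:

```python
import math

def stack_snr(total_seconds: float, sub_seconds: float,
              signal_rate: float = 1.0, read_noise: float = 5.0) -> float:
    """SNR of a stack that fills `total_seconds` with subs of `sub_seconds`.
    Shot noise grows with the total signal; read noise is added per sub."""
    n_subs = total_seconds / sub_seconds
    signal = signal_rate * total_seconds
    noise = math.sqrt(signal + n_subs * read_noise ** 2)
    return signal / noise

# Same one-hour total exposure, split two ways:
print(f"120 x 30s:  SNR {stack_snr(3600, 30):.1f}")
print(f"12 x 300s:  SNR {stack_snr(3600, 300):.1f}")
```

With these numbers the fewer, longer subs win for the same hour of total exposure, because the short stack pays the read-noise penalty ten times as often. Heavy skyglow effectively raises the noise floor, which is when shorter subs become the practical choice.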

A.G

