
ISO 1600 v ISO 800


earth titan


In the ol' days of film, the grain structure of the silver halide was modified (think of it as larger pixels) when a "faster" film was required, i.e. when a higher ISO was needed.

The ISO rating then meant something different.

I believe from the Photographic forums that the ISO in a digital camera is just the gain factor applied (to the RAW image recorded) within the camera to brighten the image.

The underlying pixel sensitivity, Bayer matrix and pixel size don't change - they're still the same, whatever ISO you select.

For my spectroscopy, which is basically like trying to image very faint nebulae (the light being spread out along the spectrum), I use ISO 200 on both the 450D and the modded 1000D.



I believe from the Photographic forums that the ISO in a digital camera is just the gain factor applied (to the RAW image recorded) within the camera to brighten the image.

The underlying pixel sensitivity, Bayer matrix and pixel size don't change - they're still the same, whatever ISO you select.

It would make almost no sense to implement ISO as a stretch to the RAW file - if that was the case we would shoot every picture at ISO 100 and do the stretch ourselves in DPP.

According to this link, ISO up to about 1600 is done by amplification on the chip (i.e. astronomically useful) while higher ISOs are merely post-processing when writing the RAW.

http://photo.stackexchange.com/questions/2946/how-is-iso-implemented-in-digital-cameras


Well I'm reading all this with interest. It would appear opinions are divided and nobody really knows.

I've tried my own experiments on this (highly unscientific) which resulted in this post, and I am firmly convinced the higher ISO is better. My results on M31 for the same overall exposure times suggest I can get a better, more detailed image, with double the ISO and half the exposure time. I always throw in lots of darks (almost as many as lights) as I feel this is important.

Typed by me on my fone, using fumms... Excuse eny speling errurs.


But the question seems to be: if the total integration time is the same, e.g. 60x60 sec exposures or 30x120 sec at the same ISO, which image will be better, or will the end result be the same?

Or is it: can the same image be produced in less time by increasing ISO and reducing the exposure time, e.g. 60x60 sec at ISO 800 or 30x60 sec at ISO 1600?


It would make almost no sense to implement ISO as a stretch to the RAW file - if that was the case we would shoot every picture at ISO 100 and do the stretch ourselves in DPP.

According to this link, ISO up to about 1600 is done by amplification on the chip (i.e. astronomically useful) while higher ISOs are merely post-processing when writing the RAW.

http://photo.stackex...digital-cameras

Good and interesting info there, thanks for the link. :)


But the question seems to be: if the total integration time is the same, e.g. 60x60 sec exposures or 30x120 sec at the same ISO, which image will be better, or will the end result be the same?

Or is it: can the same image be produced in less time by increasing ISO and reducing the exposure time, e.g. 60x60 sec at ISO 800 or 30x60 sec at ISO 1600?

My purpose in prompting the initial discussion was to determine whether more noise is added using ISO 1600 rather than 800. If the better image comes from the lower ISO but longer exposure, then I may drop back to 800 and take longer subs.



It is a common fallacy that the ISO setting of DSLRs can affect the amount of signal obtained in an image. It is also generally true that using a higher ISO setting has no effect on the amount of noise in the image either, and therefore no effect on the Signal to Noise Ratio (SNR). This latter point about the noise is not quite true though, so read on.

CCD and CMOS sensors detect photons when they hit the individual sensor elements and cause electrons to be displaced into the sensor's 'well'. The sensor's quantum efficiency (QE) determines how good it is at turning photons into displaced electrons. A perfect sensor would convert every photon of the wavelengths you wish to measure into an electron and thus have a QE of 100%. A good CCD camera might have a QE of 90% at certain wavelengths. I am not aware that manufacturers of CMOS sensors used in DSLRs publish the QE of their devices, but as ever in life you get what you pay for, and a super-expensive back-illuminated astronomical CCD camera will have a higher QE than a consumer or professional-grade DSLR.

The point is that the QE of any sensor is fixed at the point of manufacture, and there is nothing you, the camera settings or your software can do to make the sensor intrinsically better or worse at capturing signal or noise (except as noted below):

- Of course you can capture more signal by increasing the exposure time, using a larger aperture or a shorter focal length, or properly matching sensor element size to focal length; and you can capture less signal by using filters (which of course you may do if using a monochrome CCD for RGB or narrowband imaging, etc.). But the sensor's ability to capture signal is fixed.

- Of course you can increase the SNR by using more subexposures, the right length of subexposure for your kit and situation, and good use of bias, darks, flats, etc.

In other words, increasing the ISO does not increase the amount of signal captured or (generally) the amount of noise in the subframe.

Still not convinced? Think about any astronomical CCD camera (as opposed to a DSLR or webcam). These cameras do not have an ISO setting because it is meaningless for this type of camera.

The incorrect assumption people generally make, that lower ISO = less noise, arises simply because at higher ISOs the background noise in the image is more readily apparent, since the overall image is 'brighter' to start with.

As others have already noted, if you post-process two single subs taken at different ISOs but with all other factors equal, the amount of noise you will see in the final stretched image will be pretty much the same.

So what is ISO? ISO is just another word for "Gain". When we measure the signal collected by each element we have to 'count' the number of electrons captured and turn it into a number which forms the pixel value in the image. Crudely put, the ISO setting is simply a multiplier which is applied to the conversion between electrons and pixel values (or ADUs) in the final image. If the multiplier is 1, then 1 electron = 1 ADU, 10 electrons = 10 ADU, etc. If the multiplier is 2, then 1 electron = 2 ADU, 10 electrons = 20 ADU, etc.
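The multiplier described above can be put in a couple of lines of code (Python used purely for illustration; the function is my own sketch, not anything from a real camera):

```python
def electrons_to_adu(electrons, gain_multiplier):
    """Convert a captured electron count to an ADU pixel value
    (simplified model: ISO/gain is just a multiplier)."""
    return electrons * gain_multiplier

# Multiplier 1: 1 electron = 1 ADU, 10 electrons = 10 ADU.
assert electrons_to_adu(10, 1) == 10
# Multiplier 2: 1 electron = 2 ADU, 10 electrons = 20 ADU.
assert electrons_to_adu(1, 2) == 2
assert electrons_to_adu(10, 2) == 20
```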

In theory, if we can obtain the raw/unprocessed values read from the sensor elements, applying a gain/ISO multiplier in-camera serves absolutely no purpose in astronomical imaging. Again, if you are using an astronomical CCD camera, you generally do not have the opportunity to apply an ISO/gain setting when capturing images. You can apply your multiplication in a much more controlled manner in image processing software after the event, e.g. by doing a histogram stretch, curves or using pixel math operations to directly manipulate the ADU values into the required range.

In practice, there are several reasons why gain/ISO settings are used during the capture process:

1. During the process of measuring the electrons in each sensor element, the signal must be amplified sufficiently that the Analogue to Digital Converter (ADC) circuit can convert the measured voltage (number of electrons) to a number. Generally, the raw electrons in the sensor element's well will not produce enough voltage for this process to work sufficiently well, so it is first fed through an amplifier circuit to boost the voltage. For some sensors the gain (ISO) setting can influence the level of amplification applied. For other types of sensor, the amplification is fixed at manufacture.

If changing the gain/ISO setting affects the pre-ADC amplification level, this may be useful. The reason is that moving the electrons from the sensor element's well to the amplifier, and then to the ADC circuitry, are processes that introduce additional noise (called read noise or read-out noise). Depending on the circuitry, using a (usually) higher gain/ISO setting will minimise the effect of this additional read-out noise in the final image.

If changing the gain/ISO setting does not affect the pre-ADC voltage in any way, there is no effect on Signal or Noise from using a higher or lower gain/ISO setting.
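The difference between the two cases can be seen in a toy numerical model (entirely my own sketch; the fixed read-noise figure is an arbitrary assumed value, not taken from any real sensor):

```python
READ_NOISE_ADU = 8.0   # fixed noise injected at the ADC stage (assumed value)

def snr_gain_before_adc(signal_electrons, gain):
    # gain applied upstream: signal is boosted, then fixed read noise added
    return (signal_electrons * gain) / READ_NOISE_ADU

def snr_gain_after_adc(signal_electrons, gain):
    # gain applied downstream: noise already added, so both scale together
    return (signal_electrons * gain) / (READ_NOISE_ADU * gain)

# Doubling the gain upstream of the ADC doubles the SNR...
assert snr_gain_before_adc(100, 2) == 2 * snr_gain_before_adc(100, 1)
# ...but a purely digital multiplier applied afterwards changes nothing.
assert snr_gain_after_adc(100, 2) == snr_gain_after_adc(100, 1)
```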

2. Once the voltage has been amplified, measured and converted to a number by the ADC, it's game over. Any ISO/gain multiplier applied after this point is just a mathematical operation and does not affect the SNR in the final image. The astro imager just wants the unadulterated numbers please! Let us do the maths in software so we know what is going on.

Conversely, if you are taking a few holiday snaps, you don't want to wait until you can get back to your computer so you can spend hours converting them into attractive images. Therefore DSLRs perform a lot of the processing that we astro imagers would prefer to do ourselves, to produce instant, usable results for one of the target audiences of these cameras (consumers). For one thing you want to preview the image on the screen on the back of the camera; for another you want to upload your latest snaps to Facebook pronto.

Professional photographers (and advanced amateurs) also use this instant preview, but generally save the captured images in 'RAW' format so they have direct access to the less adulterated sensor data. They can then make informed decisions about white balance, sharpening, noise, etc. back in the calm of the office.

3. Some astronomical CCD cameras may have access to gain settings which are applied in-camera, in the driver software or in both. This is useful for initial frame and focus operations where you need to be able to see what is going on instantly. When doing the actual capture though, you're generally not going to apply any additional gain, or you will go with whatever the manufacturer says is the optimal gain setting to minimise read-out noise.

4. Web cams (and astro cameras derived from web-cam or video camera sensors) have gain/brightness and other settings for the same reasons as DSLRs, i.e. instant usable images. It is more of a black art to figure out what the on-camera hardware, firmware and driver software are doing when you change these settings, but the message is still the same: you'll get the same amount of signal (and noise) on the sensor, and the only benefit for a final image of allowing the camera to apply gain is if it reduces read-out noise.

There are other ways in which ISO settings can affect the SNR of your final image though:

A. If you use too high an ISO setting, then you lose 'Dynamic Range'. For example, assume the maximum ADU value our image format can store is 65535 (16-bit mono image), and we have two different pixels, one with 65535 electrons in it and one with 65534 electrons in it. If we apply a multiplier of 1 to convert electrons to ADUs, then the corresponding pixels contain values of 65535 and 65534. If we increase the multiplier to 2 (i.e. a higher ISO), then the pixels will both contain the maximum value of 65535. By using a higher ISO we have thrown away some detail.
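The worked example above, as a short sketch (the 16-bit ceiling is as stated; the function is my own):

```python
MAX_ADU = 65535  # ceiling of a 16-bit mono image

def to_adu(electrons, multiplier):
    # convert electrons to an ADU value, clipping at the format's maximum
    return min(electrons * multiplier, MAX_ADU)

# Multiplier 1: the two pixels remain distinguishable.
assert to_adu(65535, 1) == 65535
assert to_adu(65534, 1) == 65534
# Multiplier 2 (higher ISO): both clip to 65535 - detail thrown away.
assert to_adu(65535, 2) == to_adu(65534, 2) == 65535
```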

There is never a good reason to do this. You may say 'okay, but I needed to use a higher ISO so I could see fainter parts of the image, and I will use a separate exposure to fill in the highlights I have just blown out'. You could, but then you could just use the lower ISO: you would actually have more headroom in the image to expose the darker parts for longer, and you would still end up with the same amount of signal and noise in the dark parts of the unprocessed image, whilst not blowing out any of the bright parts (or fewer of them).

B. Quantisation effects. If you don't use sufficient gain (ISO) you may end up mapping two different electron counts to the same ADU value. Imagine you have a sensor element with 3 electrons, and another element with 4 electrons. If you use a multiplier of 1, then the pixel ADU values will end up being 3 and 4. If you use a multiplier of 0.5, then both ADU values will end up being 2, since you can't store 1.5 in the image format and it gets rounded to the nearest whole number. (Before anybody pulls me up on using floating point values in FITS files: the quantisation happens at the ADC stage, and the format of the ADU values is irrelevant to this, so you can still map two different electron counts to the same ADU value.) Generally this is not a problem you have to worry about yourself; the camera manufacturer sets the minimum gain (and may indeed be throwing away useful data for some good or bad reason due to the camera/sensor design), but you can't change it.
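The 3-electron/4-electron example above, in code (I have assumed round-half-up; the exact rounding rule doesn't change the point):

```python
def to_adu(electrons, multiplier):
    # quantise to a whole ADU value (round-half-up assumed)
    return int(electrons * multiplier + 0.5)

# Multiplier 1 keeps the counts distinguishable...
assert to_adu(3, 1) == 3 and to_adu(4, 1) == 4
# ...but multiplier 0.5 maps both to the same ADU value of 2.
assert to_adu(3, 0.5) == to_adu(4, 0.5) == 2
```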

The final issue to be aware of in relation to DSLR images (as opposed to CCD images) is that 'RAW' does not mean you are getting the raw ADU values from the ADC process. As per the first article linked below, Craig Stark has shown that (at least some) DSLR cameras perform on-camera processing of the data that the end user has no control over and which, until he published the article based on his tests, nobody seems to have realised was happening. He measured dark current noise and showed that it does not increase in a linear fashion with increasing exposure time (which is typically what happens with astronomical CCD cameras and was previously assumed to be the case for DSLRs as well).

Therefore DSLR users need to be really careful when trying to apply techniques and general wisdom derived from CCD imagers, since some of the key assumptions used by the latter group may not apply 100% to DSLR images.

Here are some more useful articles on the topic:

Profiling the Long-Exposure Performance of a Canon DSLR: http://www.cloudynights.com/item.php?item_id=2786

Key findings regarding the non-linear response of DSLRs due to on-camera/chip processing of image data.

Pixel Response Effects on CCD Camera Gain Calibration: http://www.mirametrics.com/tech_note_ccdgain.htm

The first part goes into the theory of CCD gain (ISO = Gain).

Sub-Exposure Times and Signal-to-Noise Considerations: http://www.hiddenloft.com/notes/SubExposures.pdf

Note that the next article below disputes the findings of this paper.

Subexposure calculator: http://www.ccdware.com/resources/subexposure.cfm

This calculator is based on the paper above, but only has a few models of CCD camera listed and doesn't deal with DSLRs or ISO settings.

Finding the Optimal Sub-frame Exposure: http://www.cloudynights.com/item.php?item_id=1622

Note that in some of the text the character lambda was converted to a "?". If you see ? = 10, or ?/2, that should be lambda = 10 or lambda/2.

Signal to Noise Ratio and the Subexposure Duration: http://www.starrywonders.com/snr.html

Determining the best Subexposure duration for your camera, sky conditions, etc.

Measuring Skyfog from Camera JPEG: http://translate.googleusercontent.com/translate_c?act=url&hl=en&ie=UTF8&prev=_t&rurl=translate.google.com&sl=ar&tl=en&twu=1&u=http://www.pbase.com/samirkharusi/image/37608572&usg=ALkJrhjdHY0USyrHWyG4OzvD9vM6j2HgpA

This is another article by Samir Kharusi (in Arabic, so I have linked via Google Translate).


Wow !!! you have too much time on your hands. :smiley:

Cracking post and highly interesting. It may take some time to digest all this info, but I'm sure it will be worth it. As a daylight photographer, I am aware of the benefits and pitfalls of using higher ISO values in order to capture the images I want, but everything I know about ISO, f-stops and shutter speeds seems worthless when applied to astrophotography. It's a whole new ballgame and I'm still trying to get my head round it.


This question keeps coming up. Someone should make the answer a sticky! And also ban terrestrial photography magazines which claim higher ISO makes your camera more sensitive to light (which, as IanL has admirably pointed out, it doesn't). So just to summarise:

- What matters for a decent image is signal-to-noise.

- Changing ISO does not change the number of photons your camera detects.

- The signal-to-noise in your image is determined by the number of photons (well, the square root thereof) and the relative amount of read noise.

- The absolute amount of read noise may change with ISO, but it is usually lower at high ISO.

Ergo, ISO doesn't really matter unless you are dominated by read noise (that means short exposures or narrowband imaging), in which case the higher ISOs are probably better.
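The summary above corresponds to a common simplified SNR formula (my own sketch; it ignores sky background and dark current): signal over the quadrature sum of shot noise and read noise.

```python
import math

def snr(photons, read_noise_e):
    # shot noise is sqrt(photons); read noise adds in quadrature
    return photons / math.sqrt(photons + read_noise_e ** 2)

# With lots of photons, read noise barely matters...
assert abs(snr(10000, 5) - snr(10000, 0)) / snr(10000, 0) < 0.01
# ...but for a short or narrowband sub it dominates, which is when
# lowering the effective read noise (e.g. via higher ISO) pays off.
assert snr(25, 10) < 0.5 * snr(25, 0)
```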

One proviso - my Canon 1000D shows pseudo-non-random bias structure (vertical lines) down one side of the chip at ISO 800 and below, but not at ISO 1600. It does not subtract out with bias or darks, nor does it average away. I read somewhere once that the Canon 1000D uses different circuitry to generate ISO 1600 images, so this may be the reason. Hence I have to use ISO 1600 to get a decent image. So it may be that on any particular camera there are overriding reasons for using a particular ISO, irrespective of the usual statistical calculations.

NigelM


There will be a 'Fixed Pattern Noise' element which will result from one or more factors, but most typically due to the design of the read-out circuitry. CMOS and CCD sensors are typically arranged differently:

- The CCD sensor will use one or a small number of read-out circuits, "shuffling" the charges across each row of pixels: it reads the charge in the last pixel in the row, shifts all the remaining charges one place along, and repeats until the row is empty, then shuffles the next row down and starts again. This is why image read-out times in astronomical CCDs are relatively slow.

- The CMOS sensor in a DSLR or webcam has read-out circuitry for each of the sensor elements. This greatly speeds up the read-out process, at the cost of the read-out circuitry reducing the available surface area of the chip to capture light. Being much faster to read, they are generally more suited to still and video cameras.
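The row-by-row shuffle described for CCDs can be simulated in a few lines (a toy model of my own, ignoring timing and noise):

```python
def ccd_readout_order(sensor):
    """Return pixel values in the order a single-node CCD would read them."""
    values = []
    for row in sensor:                      # shuffle each row down in turn
        remaining = list(row)
        while remaining:                    # shift charges toward the node
            values.append(remaining.pop())  # read the last pixel in the row
    return values

frame = [[1, 2, 3],
         [4, 5, 6]]
# Pixels emerge one at a time, end of each row first:
assert ccd_readout_order(frame) == [3, 2, 1, 6, 5, 4]
```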

Either way up, the operation of the amplifiers, ADCs, timing clocks and associated circuitry will introduce unwanted artefacts in to the image. This can be dealt with in two ways:

- Using Dark Frames. Heat in the sensor will cause electrons to fall into the sensor wells just as incoming photons do. It's just another source of energy that can excite an electron enough to make the leap into the well. That is why cooled cameras are better than uncooled ones for long exposure images: fewer "dark current" electrons are created if there is less thermal energy available to excite them. Some of the camera circuitry can generate a lot of heat (particularly in older designs of CCD and CMOS sensor) and you may notice 'amp glow' where the circuits are heating up one part of the sensor chip more than the other.

By taking a number of dark frames (with the camera blocked from the light) which are the same exposure length as our image and taken at the same (or nearly the same) sensor temperature, we can measure the average number of electrons that each sensor element accumulates due to heat rather than light, and subtract it from our images.
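A minimal sketch of that dark-subtraction process, using plain lists of pixel values rather than a real raw/FITS pipeline:

```python
def master_dark(dark_frames):
    """Average several dark frames pixel-by-pixel into a master dark."""
    n = len(dark_frames)
    return [sum(px) / n for px in zip(*dark_frames)]

def subtract_dark(light, dark):
    """Subtract the master dark from a light frame, pixel-by-pixel."""
    return [l - d for l, d in zip(light, dark)]

darks = [[10, 12, 11], [12, 10, 13], [11, 11, 12]]
light = [110, 211, 312]
dark = master_dark(darks)                  # [11.0, 11.0, 12.0]
assert subtract_dark(light, dark) == [99.0, 200.0, 300.0]
```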

- Using 'Bias' Frames. This is just a special case of a dark frame. We take the shortest possible exposure with the camera covered (typically 1/4000th of a second for a DSLR). The dark current generated during such a short exposure is pretty much zero, and all that will be left is:

a) Any current that was already in the sensor well prior to the exposure. CCD cameras may put a fixed amount of current in to the well prior to the start of the exposure, the 'bias', which needs to be measured and subtracted from the image. CMOS-based DSLRs may do the same, but they take care of any bias subtraction on-camera prior to writing out the final image.

b) Any additional current (or loss of current) created by the read-out process, i.e. the read-out noise. By taking multiple bias frames and averaging them, we can subtract the result from our image to remove the bias. In the case of a CCD this is a genuine bias frame; in the case of a DSLR it would more properly be called a 'Fixed Pattern Noise' frame, since there is no actual bias as there is with a CCD.
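The benefit of averaging many bias frames can be demonstrated numerically (my own toy model: zero fixed pattern, purely random read noise; averaging N frames beats the random part down by roughly the square root of N):

```python
import random
import statistics

random.seed(42)
READ_NOISE = 5.0       # per-pixel random read noise, in ADU (assumed)
N_FRAMES = 400

def bias_frame(n_pixels=1000):
    # no fixed pattern here; each pixel gets only random read noise
    return [random.gauss(0.0, READ_NOISE) for _ in range(n_pixels)]

frames = [bias_frame() for _ in range(N_FRAMES)]
master = [sum(px) / N_FRAMES for px in zip(*frames)]

residual = statistics.pstdev(master)
# Expect roughly READ_NOISE / sqrt(N_FRAMES) = 0.25, far below 5.0
assert residual < 0.5
```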

The take away point here is that both Dark Frames and Bias (Fixed Pattern Noise) Frames measure the repeatable noise elements that the sensor/read out process adds, and by measuring it we can subtract it from our images.

It is worth noting that this is a statistical process. The dark current will almost all be due to random processes; it may be that due to amp glow or similar there is a noticeable pattern, but in local areas of the frame the distribution of noise will vary randomly from frame to frame (except any broken 'hot' pixels that always give a high reading, or 'cold' pixels that give a low reading). The fixed pattern noise will likely also show a random noise distribution in local areas but a repeatable pattern on the larger scale. When we subtract the bias and dark frames, we are not 'perfectly' removing the exact noise that exists in our image, but it can be demonstrated statistically (by someone better at maths than me) that the SNR of the image has improved.

When it comes to DSLRs, there are other problems that are not so susceptible to statistical methods of removal. Many Canon DSLRs are reported as suffering from 'Canon Banding Noise', and indeed I have noticed it with my 500D. What you get is a pattern of dark and light bands running across the image (in landscape view) which become apparent when you stretch the dark background, as we typically do when processing astro images. It is a pattern in that the noise is not randomly distributed, but it is not fixed.

It is impossible to remove using dark or bias frames, and depending on your luck, the usual stacking/averaging and subtraction techniques used with bias, darks, flats and lights can actually make the banding pattern more apparent, not less. I do not know the cause of this banding noise, but it is widely reported. The only way to remove it I have found is to use post-processing noise reduction filters of some sort or other (e.g. PixInsight has a specific 'Canon Banding' NR process designed to reduce the effect).

I do not know if the artefacts described by NigelM are another variant of this banding noise issue, but it sounds suspiciously like it to me. If it can't be dealt with using bias/darks and doesn't average away consistently, then it must be something of that type. Even more suspicious if changing the ISO setting completely eliminates it.

I am speculating here, but I suspect these non-fixed-pattern issues may be another consequence of Canon's on-camera 'voodoo' processing, similar to that already identified by Craig Stark in respect of dark current non-linearity. It may be that the camera firmware is applying processing/noise reduction techniques to the image data that work well for normal/daytime images (perhaps to overcome or mitigate design compromises made with the camera/sensor package), but that do not work well in low-light circumstances such as astro imaging. I seriously doubt that Canon's engineers have spent a huge amount of time optimising their algorithms for images that are not much brighter than if the user had left the lens cap on.


Ian - WOW! That's a TON of information there. Will definitely have to read that through a 2nd or 3rd time to take it all in. Thanks for the crazy write-up. I agree it must be a sticky.

Couple clarification questions -

So you are saying that if shooting RAW with a DSLR, then the higher ISO just starts you closer to the finished processed image than a lower ISO? So, say, on a scale of 1 to 10 (where 10 is a completely finished image), ISO 800 would start you, let's just say, at 5, and then you have to process your way 5 more levels to get your finished image. But if I shoot at ISO 1600 it starts me at 7, and then I only have to process 3 levels to get my finished image.

Also, does ISO only affect an image (in regards to AP) in its dynamic range? Where a lower ISO allows you to control DR better than a higher ISO? So if that is true, then DSLR users should always shoot ISO 1600 unless their target has high dynamic range? i.e. M42 vs the Rosette Nebula.

Sorry if these are explained in one of your links, but I don't have time to read them all while I'm at work. Still haven't gotten to Agnes' link yet.


So you are saying that if shooting RAW with a DSLR, then the higher ISO just starts you closer to the finished processed image than a lower ISO? So, say, on a scale of 1 to 10 (where 10 is a completely finished image), ISO 800 would start you, let's just say, at 5, and then you have to process your way 5 more levels to get your finished image. But if I shoot at ISO 1600 it starts me at 7, and then I only have to process 3 levels to get my finished image.

Also, does ISO only affect an image (in regards to AP) in its dynamic range? Where a lower ISO allows you to control DR better than a higher ISO? So if that is true, then DSLR users should always shoot ISO 1600 unless their target has high dynamic range? i.e. M42 vs the Rosette Nebula.

What I am saying, in simple terms, is that the ISO setting affects the pixel values in the raw image. Higher ISO = Higher Pixel Values for the same amount of light captured. All you are doing is multiplying the measured value by a larger number for a larger ISO value. What the ISO setting does not do is affect how much light you have actually captured (which is what most people imagine it does).

To an astrophotographer the pixel values are (largely) irrelevant. If your image is too dark, you simply open it up in an image processing package and stretch the histogram, i.e. multiply the existing pixel values by some number to turn them into bigger numbers, and thus appear brighter on your screen. If that sounds like exactly the same thing as using a higher ISO value on the camera, then bingo, you've grasped the key point: it is!
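That equivalence, in code (toy numbers of my own; it holds as long as nothing clips or is quantised away):

```python
captured_electrons = [10, 40, 250]   # what the sensor actually collected

high_iso = [e * 4 for e in captured_electrons]   # x4 gain applied in camera
low_iso = [e * 1 for e in captured_electrons]    # unity gain in camera
stretched = [p * 4 for p in low_iso]             # x4 histogram stretch later

assert stretched == high_iso   # same final pixel values either way
```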

The reason we want to use a lower ISO value is so that there is a bigger range of numbers into which our raw image can fit in the first place. If I use a high ISO value (1600 or 3200 for example), then the brightest pixels in the final image will become saturated sooner than at a lower ISO value (100, 200, 400 or 800). At high ISOs, one of two bad things will happen:

- First, none of the camera's sensor elements will be completely full of electrons. But when you read those sensor elements out and multiply by your big ISO number, many of the pixels in the final image end up at the maximum value the picture format can store, so they all look saturated, i.e. bright white and blown out. You have just destroyed a bunch of useful data for no good reason.

- Second, you realise this has happened by looking at the image histogram in your capture software. You see a whole bunch of pixels at the right hand end of the scale and think 'I've blown out the details here' and so you reduce your exposure time to stop the pixels becoming saturated in the final image. So now your sensor elements end up with less information in them than you had before, and you have a worse signal to noise ratio than you started with.

What you should have done is reduced the ISO setting, and exposed for longer! More information captured, no saturation in the final image. In practice for most of us this only becomes an issue on really bright targets like M42 or bright stars, since for fainter stuff our scope tracking will usually start trailing before we hit saturation of the sensor elements.

Normally, the only reason to use a higher ISO is if you suffer from read noise. As others have noted, narrowband imaging and similar may result in a very small amount of signal being captured (since you are blocking most of the light), and depending on the sensor's internal electronics, it can be beneficial to use a higher ISO or gain setting to boost the signal above the read noise, assuming that the increased gain is applied (at least in part) prior to the ADC process.

If you are trying to one-shot something like M42, then it still doesn't make any sense to use a high ISO. You will just saturate the image pixel values sooner than you have saturated the sensor elements in the camera. Use the lowest ISO setting that deals with any read noise problems (read the linked articles, some of which explain in detail how to measure this stuff for yourself).


I can see now why higher ISO works for me... I use short exposures not quite a minute long so the higher ISO would help both with read noise and quantisation of the very dim parts of the image :-)


Ian,

I for one, really appreciate the time and effort you have put into this discussion.

It makes me feel vindicated...well just a little...

many thanks.

Hey... I feel vindicated too!


OK Agnes, finally got to reading your link on the short-sub theory.

So I'm just going to start at the beginning and work my way down and respond accordingly.

He starts off by explaining that basically you need to look at your histogram and from that try to determine your ISO setting and exposure length based on your location (LP suburbs or dark site). Which is a no-brainer. Heavy LP suburbs restrict your ISO setting. Not sure why he chose a narrowband image as his example, because narrowband filters block out most of the LP, even in very light-polluted suburbs; they work better than a standard LP filter. And he used a modded camera that increases his image signal on top of the narrowband filters. So I think this example is a bit skewed in his favour.

He then moves on to stacking, which supports the already very widely accepted theory/fact that total time trumps single-sub exposure time. No argument there. Well, maybe a little. I still believe that single-sub exposure length affects the depth of an image, and that long exposure lengths are needed to get extremely faint detail that would not be attainable with shorter subs. Or that you would need many, many more short subs and longer total time.

He then goes on to explain that a dark site is 15x 'better' than a suburban site. No doubt a dark site is vastly better than a suburban site, but I don't think it is to the extreme he is suggesting. It's hard for me to believe that 64 min of exposure time at a dark site would require 16 hrs at a suburban site, unless he is using the most heavily light-polluted suburbs and comparing them to the middle of a desert as his dark site.

He then goes on to explain remedies for the combo of a lousy mount and bad LP, and everything he said there is true. He says that you can take 15x shorter subs but have to take 15x as many, which just supports the total-exposure-time thing again.

My summary of what he wrote (from my understanding of it) is that he has based his entire experiment on two major things: 1) you have a 'crappy' mount that can't track for long, and 2) you know what the LP is at your location, whether it be a dark site or an LP suburb. I still have a problem with the fact that all but his last image were taken in narrowband with a modded camera. With that, being at a dark site doesn't matter, and all his theory starts unravelling, because (normally) the only reason you would need to adjust your ISO and exposure length would be to counteract the effects of LP and to make sure you don't blow out your stars and the bright parts of your object. Shortening the exposure length would only help to increase the success rate of your imaging, i.e. not losing subs to trailing, planes, satellites, etc.

I also don't see how his last image is supposed to deter skeptics. It's a relatively bright object, shot from the Australian outback at f/2.8. With that combo of dark site, bright object and fast optics, you could capture pretty much any object with 1-minute exposures.

Regarding the depth of an image that you mentioned in your post: I think that if someone took 1-minute exposures of M27 it would take hours and hours of data to see the outer faint shell, whereas with 30-minute exposures you might capture it in as little as 5-8 hrs. (I'm taking an educated guess on the times - I really have no idea how long it would take.)

In his whole write-up, I think he took a very descriptive, yet long, way of saying that if you only have X amount of time to gather signal, sub length doesn't matter (signal depth aside) - total exposure time is what you're aiming for in your final image. Which I agree with.

Link to comment
Share on other sites

Amazing what can be done with many short subs, but I also think sub length affects how deep you can go.

I found that with a mass of very short subs, the very faint stuff seemed to sit down in what I would guess is the read noise (very streaky lines of noise), and I couldn't manage to pull it out, even with darks and bias frames. It was just too low down in the really nasty noise! With longer subs I could get at those details; they were lifted away from that no-go area of noise.

But I don't know the technical ins and outs of read noise etc., so I must work my way through all the great info and links to try to get a more technical understanding of what's going on!

I still believe that single-sub exposure length affects the depth of an image and that long exposures are needed to capture extremely faint detail that would not be attainable with shorter subs - or that you would need many more short subs and a longer total time.

Link to comment
Share on other sites

I see your point; I suppose what I took away from the article was that UHC images of some targets are viable with 1-minute subs. In the course of the article (or perhaps another one) Samir does claim that much shorter subs (15 seconds) are possible under heavy light pollution - if you are prepared to do sixteen hours of integration time. I can't help feeling his maths must be wrong, because the faint parts of a 15-second sub would quantise to zero, so nothing would emerge from the stack.
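Out of curiosity, here's a toy simulation of that quantisation argument. Every number in it (0.3 electrons per sub, unity gain, 5 e- read noise) is invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

signal = 0.3      # electrons reaching a faint pixel per 15 s sub (invented)
n_subs = 3840     # sixteen hours of 15 s subs
gain = 1.0        # electrons per ADU

# Case 1: no noise at all - the sub-ADU signal always rounds to zero,
# exactly as the quantisation argument says, and the stack recovers nothing.
noiseless = np.round(np.full(n_subs, signal) / gain)
print(noiseless.mean())  # 0.0

# Case 2: with read noise acting as unintentional dither, the rounded values
# scatter either side of zero and their average drifts back toward the true
# signal over thousands of subs (ignoring the camera's bias offset, which in
# practice keeps RAW values positive).
read_noise = 5.0
noisy = np.round((signal + rng.normal(0, read_noise, n_subs)) / gain)
print(noisy.mean())
```

So pure quantisation would indeed kill the signal, but real sensors are noisy enough that the maths may not be quite as hopeless as it first looks - whether anything usable survives the read noise of 3840 stacked 15-second subs is another matter.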

Link to comment
Share on other sites

Blimey.

I step out for 5 minutes and look what happens.

Ian, thank you.

Loads of very useful info there which I will need to read carefully.

Can this become a sticky? It might be a useful guide once it's all digested.

Typed by me on my fone, using fumms... Excuse eny speling errurs.

Link to comment
Share on other sites

I think (in my limited experience) that you and others are correct, Agnes. Shorter subs are a viable method of imaging in heavily light-polluted areas, or when you can't track well enough for longer subs, and raising the ISO will let you do this, but at a cost. The cost seems to be more noise and less detail captured in each sub, so you need lots more subs to make the final image, and even then the detail may not be as good.

I tried the shortcut of raising the ISO to the extreme (because my DSLR is capable) and taking lots of very short subs, but the pics were washed out with LP and totally lacked detail. In short, it was perhaps an inexperienced approach and I should have chosen a more realistic ISO than 25000. I learned that there are very few shortcuts to be had; the best images require patience and time. I will experiment with ISO and exposure times until I find the right combo for my site, but will pursue the longer-exposure, lower-ISO method until I have some data to work with.

I personally have not yet found that a big pile of short subs is equal to a smaller pile of long subs, and trying to be clever because you have a flashy camera does not always work out the way you expect (oh, how I hate being humbled). Thus my initial posts at SGL about doing AP on a budget have gone astray, and now I must invest time and money, and more time, to achieve my goals. (Oh, how I love spending money.)

Link to comment
Share on other sites

ISO 25000 is mighty high... How long were your subs? As others have said, the light you gather is determined only by the f-ratio and the exposure length - you can't use ISO to speed up the capture. I think you need at least minute-long subs on a bright target at f/5. Higher ISO can only help spread the dimmer parts of the image over more digital values, but it will overexpose the brighter parts of your image. According to a link I posted earlier in this thread, the highest ISO useful for this purpose is 1600, as values higher than that are achieved by multiplying the digital signal, so an ISO over 1600 will only destroy data.
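The distinction matters because analogue gain is applied before the analogue-to-digital converter rounds the signal, while "extended" ISO is just multiplication of numbers that have already been quantised. A toy illustration (all values invented):

```python
import numpy as np

rng = np.random.default_rng(1)

# A faint scene: 10,000 pixel values between 0 and 8 electrons (invented)
true_signal = rng.uniform(0, 8, 10_000)

# Analogue gain: amplify *before* the ADC rounds, so fine differences in the
# faint signal survive into distinct digital levels.
analogue_iso = np.round(true_signal * 4)
# Digital "ISO": the ADC rounds first, then the RAW value is multiplied -
# no new information, just the same coarse levels spaced further apart.
digital_iso = np.round(true_signal) * 4

print(len(np.unique(analogue_iso)))  # 33 distinct levels
print(len(np.unique(digital_iso)))   # 9 distinct levels
```

Both versions end up equally "bright", but the digitally multiplied one still has only 9 distinct levels in the faint data, which is why ISO settings above the last analogue step can't rescue detail.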

Link to comment
Share on other sites

Archived

This topic is now archived and is closed to further replies.
