
120, 240 or 480 seconds. What is better?


jsmoraes


An analysis of captures of the same object with different exposure times and ISO settings. In theory they should all contain the same amount of signal.

a) 120 seconds and  ISO 800

b) 240 seconds and ISO 400

c) 480 seconds and ISO 200

All photos were converted from RAW CR2 to JPG in Photoshop without any processing.

Histogram comparison:

histogram-comparative.jpg

In the animated GIF we can see:

comparatieAnimation-B.gif

1) the shorter the exposure, the sharper the star.

2) the lower the ISO, the less noise.

3) there is no significant colour difference.

4) there is a loss of visual resolution with long exposures.

Question:

The apparent loss of focus and sharpness with long exposures is due to:

a) guiding RMS

b) capture of background glow (perhaps from humidity) around the star

c) capture of true data (signal) emitted by the star

d) contamination from excessive luminosity in adjacent sensor pixels

e) other issues, including those above or a combination of them.

This other animated GIF shows the difference using the 120 seconds ISO 800 frame as reference.

reso-Animation.gif

note: I will stack in DSS and compare the results. I captured enough frames to give the same total exposure time for each setting.


Effectively the exposures don't contain the same data.

If you increase ISO and reduce the exposure time, your SNR will be lower.

Never reduce exposure time when increasing ISO, always use the same total exposure time.
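A toy SNR model illustrates the point. All the rates and the read-noise figure below are invented numbers for the sketch, not measurements of any real camera; ISO does not appear at all, because gain only rescales the numbers after the light has been collected. At fixed total integration, sub length barely matters; cutting total time is what really costs SNR:

```python
import math

def stack_snr(total_s, sub_s, obj_rate=5.0, sky_rate=20.0, read_noise=8.0):
    """SNR of a stack: shot noise from object + sky, plus read noise per sub.

    obj_rate / sky_rate are electrons per second, read_noise is electrons RMS
    per sub. All are illustrative placeholders, not measured values.
    """
    n_subs = total_s / sub_s
    signal = obj_rate * total_s                              # object electrons
    variance = signal + sky_rate * total_s + n_subs * read_noise ** 2
    return signal / math.sqrt(variance)

# Same total integration time, different sub lengths: SNR barely changes.
for sub in (120, 240, 480):
    print(f"{sub:3d} s subs over 64 min: SNR = {stack_snr(3840, sub):.1f}")

# Halving the total time (shorter subs, same count) clearly hurts.
print(f"half the total time:        SNR = {stack_snr(1920, 120):.1f}")
```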


Effectively the exposures don't contain the same data.

From the histograms:

             120 sec ISO 800    480 sec ISO 200
Mean              24.16              25.87
Std Dev            5.93               6.14
Median            24                 25

The numbers are very close. There are differences, maybe because with 480 sec ISO 200 the Moon was beginning to rise. It is not a strong difference.

Shorter exposure is much noisier and somewhat sharper due to less time exposed to seeing effects. Bad seeing, shorter subs, it's true, but if seeing is good, long subs reveal more faint detail.

Yes, that makes sense, and I agree.

However, if you compare the shape and border of the stars at ISO 800, you will see that after stacking in DSS the stars look less natural than with the other ISOs. It seems that ISO 800 saturates some stars, causing strong (and weird) lines at the star borders.


Analysis 2:

Two capture groups:
1) 240 seconds ISO 400
2) 120 seconds ISO 400

Two stacks:
1) only 11 x 240 s ISO 400
2) 11 x 240 s + 21 x 120 s

Comparison of the 32-bit Autosave.tif from DSS - linear - at 200% magnification - screen copy by Photoscape. No graphic processing.

dss-linear-120sE240s.jpg

Comparison of the 32-bit Autosave.tif from DSS - autolevel in Photoshop - at 200% magnification - screen copy by Photoscape. No other graphic processing.

dss-autolevelPS-120sE240s.jpg

Comparison of histogram

hist-linear-comp-0406.jpg

note: stacking all the photos gives the appearance of better star colour, noise reduction, and a reduction of the "false Airy disk".
Useful if we only want to reduce noise when capturing a star cluster with bright stars near saturation. We don't need 32 photos of 4 min: just 11 x 4 min plus another 21 x 2 min!

Comparison of my usual conversion from the 32-bit Autosave (linear) to 16 bits, by HDR in Photoshop CS3.
 

myHDRconvertion.jpg


The numbers are very close. There are differences, maybe because with 480 sec ISO 200 the Moon was beginning to rise. It is not a strong difference.

Aren't these numbers ADU? You need to multiply by the gain to get electrons to work out the true signal. And the gain will be 4x different between ISO200 and 800.
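To illustrate: multiplying the ADU means quoted above by a per-ISO gain gives electrons. The gain figures below are illustrative placeholders, not measured 1100D values; what matters is only the roughly 4x ratio in e-/ADU between ISO 200 and ISO 800:

```python
# Turn the ADU means from the histograms above into electrons.
# Gains (e-/ADU) are assumed placeholders; real values come from
# sensor measurements. Only their 4x ratio matters for the argument.
gain = {800: 0.6, 200: 2.4}            # e-/ADU (assumed, 4x apart)
mean_adu = {800: 24.16, 200: 25.87}    # histogram means quoted above

for iso in (800, 200):
    print(f"ISO {iso}: {mean_adu[iso]} ADU x {gain[iso]} e-/ADU "
          f"= {mean_adu[iso] * gain[iso]:.1f} e-")
```

With these assumed gains, the "very close" ADU means turn out to differ by about a factor of four in electrons, which is the true signal comparison.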

NigelM


the gain will be 4x different between ISO200 and 800

Yes, that is why I used 120 (480/4) seconds at ISO 800 to compare with 480 seconds at ISO 200.

See, what I have in my hands is the file from the camera. When this file is opened in a graphics program such as Photoshop, it shows the image and some numbers (or graphs) about it.

What I found: little difference between the images with the method I used. There are some differences, though, and those differences are what I am studying to decide how to plan my capture session for a specific DSO.

Many texts talk about sensors, analog-to-digital converters, and noise here and there.

But it seems people forget: you don't have access to the raw signal of the sensor, analog or digital. You only have access to a file, already processed by the camera. The camera is a black box, no matter what it does.

You can use theory or information about the response and performance of a sensor, the camera's internal A/D converter, and so on to HELP plan a capture session. But you will work with the file provided by the camera, with all the interference it applies to the signal: noise, format conversion, amplification, distortion and others.

I read that Canon RAW is a false RAW. If that is true, the camera causes more interference on the sensor signal than you might expect.

No matter. I will work with the file. And the signal there will be the truth, not the theory.

note: with CCD cameras, or non-Canon DSLRs, it can be different. But what I have is a Canon 1100D!


If you want true unprocessed RAW, get a Nikon D5100 or D7000 and apply the Nikonhacker true-RAW firmware mod. There is a menu of mods you can group together into one firmware upgrade. There is also one to eliminate black-point clipping, and one to allow generic batteries, among others.


I read that Canon RAW is a false RAW. If that is true, the camera causes more interference on the sensor signal than you might expect.

Enlighten us with the false RAW.

What are Canon doing to the data?

I don't see the problem when thousands of decent images have been produced with Canon cameras.


As I wrote:

I read that Canon RAW is a false RAW. If that is true, the camera causes more interference on the sensor signal than you might expect.

There is a topic on the CloudyNights forum with suggestions for changes to Canon cameras. A list of ideas was put together to send to Canon: http://www.cloudynights.com/topic/497060-what-featureschanges-would-you-like-to-see-in-an-astronomy-specific-dslr/

And there, I read:

mRAW and sRAW that are really RAW and not a lossless form of jpg encoding.

Provides raw RAW files not the cooked variety that we deal with currently.

Multi-sizes of RAW settings that produce real RAW files, not lossless compressed files.

RAW files, including mRAW & sRAW, that are real RAW, not compressed (my personal favorite wish-list item)

Does anyone have more information about this? What influence does it have for us? Up to now it has not been an important matter for me. I must work with the file Canon gives me. That is my truth. True RAW or false RAW ... I will work with it and do my best!


Not knowledgeable, but there is discussion of black point manipulation in CR2. It's not a question of whether decent images can be produced, it's a question of whether it's unmanipulated data as it comes off the sensor (as in a CCD) or not. My understanding is that it is not. The level of manipulation in unmodified Nikons may be higher, but there is a firmware mod to provide true RAW data available. Net effect? I don't know. Just trying to respond to the OP.


A question: in RAW mode, are the adjustments for brightness, contrast, saturation, sharpness and colour tone done on the sensor? Or after the sensor, on the analog signal, before the analog-to-digital converter? Or after that conversion, on the digital data?

Well, if they are not acting on the sensor itself ... then the raw signal of the sensor isn't the RAW signal provided by the camera.

As I said, this is only out of curiosity. We will work with the raw signal inside the file provided by the camera. Real RAW, false RAW ... no matter.

----------------

Someone might tell me: "Jorge, this information is stored in the RAW file as metadata for the debayering task. It doesn't change the RAW data."

Good, but I can't access the RAW signal without debayering, at least with Photoshop or DSS. The guilty party now isn't the camera, it's the debayering algorithm. But the final result is the same: I don't have the real raw signal of the sensor. At least not with a Canon 1100D.

As I said, other cameras or brands may allow it. But use it with what software? Photoshop and DSS will debayer, and I read that some debayering algorithms are better than others.

Better than others?! Is it right to say that if I debayer with one software or another I can get different photos? Different data?

It seems so. Therefore the discussion about real RAW, false RAW and the details of sensor response isn't a fatal issue. IMHO!


Your original question was: "120, 240 or 480 seconds. What is better ?"

The answer is clear - 480 seconds is better because it captures twice as many photons as 240sec and 4x as many as 120sec.  By "better" I mean the image has a higher signal to noise ratio i.e. it looks "cleaner" and it will be possible to extract fainter objects.
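A quick back-of-the-envelope check of the claim above, assuming pure shot noise (the photon count is an invented illustrative number): photons scale linearly with exposure time, and for shot noise SNR = √N, so quadrupling the exposure doubles the SNR.

```python
import math

# Photons collected scale linearly with exposure time; for pure shot
# noise the SNR is sqrt(N), so 4x the exposure gives 2x the SNR.
photons_per_120s = 1000   # invented photon count for a 120 s sub

for t in (120, 240, 480):
    n = photons_per_120s * t / 120
    print(f"{t:3d} s: {n:5.0f} photons, SNR = {n / math.sqrt(n):.1f}")
```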

Using different ISOs for each test does confuse things a little but the exercise is still useful because you have successfully shown that increasing the ISO cannot compensate for the reduced exposure time.

So you have confirmed all the theory!

Mark


Yes, that is why I used 120 (480/4) seconds at ISO 800 to compare with 480 seconds at ISO 200.

No - the gain has no relevance to exposure time. You still need to convert to electrons if you wish to make any scientific comparison.

NigelM


The answer is clear - 480 seconds is better because it captures twice as many photons as 240sec and 4x as many as 120sec.  By "better" I mean the image has a higher signal to noise ratio i.e. it looks "cleaner" and it will be possible to extract fainter objects.

If I take the 3 photos from the first message of this topic, my choice would be 240 sec ISO 400. Let me explain why:

1) 120 sec ISO 800 has very sharp, crisp star edges. Stacking frames like this in DSS will make them very harsh. We know that every telescope distorts the image of a point source of light, producing an Airy disk.

2) 480 sec ISO 200 shows very blurred star edges.

3) 240 sec ISO 400 has an in-between appearance. Stacking frames like this in DSS would be the best option.


No - the gain has no relevance to exposure time.

With normal photography, not astrophotography, you can get the same 0 (zero) on the camera's exposure meter with different combinations of ISO (gain) and exposure time.

In automatic mode, the camera's software chooses the ISO and shutter speed. Why it uses those particular values, I don't know, since there is more than one valid combination.

What I know is that with a fast shutter speed you get more depth of field: objects far from the main subject stay sharp.

I see the camera not only as a sensor and ADC; I see it as a black box with many internal functions. I don't know whether the camera's electronics work like an audio amplifier, with volume, bass, treble, reverberation ... adjustments that I prefer to call "interference on the signal".

Normally people talk as if the camera does only simple amplification of the signal level: volume.

That is why I am doing these tests: to see how theory and knowledge about sensor response can help me plan a capture session at my site, with my environment, temperature and optical equipment. The answer will show in the test photos. The best of them in terms of noise, star shape and colour, and nebulosity detail and colour will be the best to use in DSS and Photoshop CS3. And because of uncontrolled variables, the choice may differ from what theory suggests. No matter. What I want is the best data so I can do my best!


As I wrote:

There is a topic on the CloudyNights forum with suggestions for changes to Canon cameras. A list of ideas was put together to send to Canon: http://www.cloudynights.com/topic/497060-what-featureschanges-would-you-like-to-see-in-an-astronomy-specific-dslr/

And there, I read:

Does anyone have more information about this? What influence does it have for us? Up to now it has not been an important matter for me. I must work with the file Canon gives me. That is my truth. True RAW or false RAW ... I will work with it and do my best!

Why shoot in sRAW or mRAW, forget them, it's RAW or nothing in my book.

I don't even know why the CN OP brought up these two pseudo-RAW methods.


Following the observation that a group of 240-second frames plus another group of 120-second frames gives me less noise after stacking in DSS, I did another test.

21 x 180 sec ISO 800 + 25 x 240 sec ISO 800.

The original 32-bit DSS output:

post-43725-0-88789900-1433803905.jpg

With two different kinds of conversion from the 32-bit DSS output to 16 bits in Photoshop CS3:

post-43725-0-42765600-1433803949_thumb.j

Right now I am running a session trying to get at least 2 hours with a single ISO: 180 seconds at ISO 800, to see if with more data I can convert from the 32-bit DSS output to 16 bits without any intervention, in DSS or Photoshop, and get the same or better enhancement of faint areas and stars.

I think that if it is possible, the star shapes will be more natural, since both methods I used above make the stars larger and give them a border fringe.


Seeing changes between sessions, even during a session. I believe it is very difficult, perhaps impossible, to make such comparisons without a very large sample of each setup, AND some way to quantify the seeing. Stars will tend to look somewhat sharper in short exposures than in very long ones if neither is bloated. Your guiding program has a wide range of settings, some of which will make star edges softer as it chases the seeing.


Thank you for your attention, kalasinman. I did 3 hours on NGC 3766: 60 frames of 180 seconds at ISO 800. I will process them in 3 ways:

10 frames - my usual number of frames

30 frames - many people say this number is close to ideal for reducing noise in stacking; many more will not improve the result much.

60 frames - to see what it means for the amount of signal, and whether it will saturate on stacking, making all the stars white.

As soon as it is ready, I will post it here.

note: a mate analysed some of my frames. He told me that my biggest issue isn't thermal noise, as I had been thinking. The main noise sources in my photos are sky fog and thermal sky fog. Normally for a cluster I only do 10 frames - 30 or 40 minutes. Maybe with more frames and dithering I can reduce this sky-fog noise.

I didn't dither this time. But at least I have 6 times the usual number of frames: 60.

He used the photo below, with dark and light areas and brightness adjustments, to better see the noise in the image.

Single frame, 180 seconds at ISO 800

post-43725-0-29468600-1433887338_thumb.j


The usual procedure is to take a test exposure and adjust exposure time until the histogram peak sits at 1/3 to no more than 1/2, at a standard ISO, for example ISO 800.

The reduction in noise with increasing numbers of frames follows an inverse square-root curve: noise falls as 1/√N. Therefore the noise of 10 frames will be halved with 40 frames, and halved again with 160.

30-40 frames is thought of as a point of diminishing returns. The curve is practically flat once one is at 100.
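A small Monte Carlo sketch of this relationship (frame noise and trial counts are arbitrary illustrative values): averaging N frames reduces the residual noise by √N, so quadrupling the frame count halves it.

```python
import random
import statistics

random.seed(42)

def stacked_noise(n_frames, frame_noise=10.0, trials=4000):
    """Monte Carlo: std-dev of a pixel after averaging n_frames noisy frames."""
    means = [statistics.fmean(random.gauss(0.0, frame_noise) for _ in range(n_frames))
             for _ in range(trials)]
    return statistics.stdev(means)

# Noise of the average falls as 1/sqrt(N): 4x the frames halves the noise.
for n in (10, 40, 160):
    print(f"{n:3d} frames: residual noise ~ {stacked_noise(n):.2f}")
```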

Star saturation and bloating are the result of the photosites filling to capacity. This is a product of time and ISO. DSLR cameras have a sweet spot, an ISO that seems to be a good compromise in performance for their design, usually 400 or 800. It is convenient to settle on one ISO and adjust exposure lengths to the preferred histogram reading. This makes calibration frames easier, as a library can be saved without much trouble. If skies are dark, I prefer ISO 400 over 800 as it means longer exposure times, which enhance fine detail.

The way to get more detail is to collect more photons. Increasing aperture or exposure length are the only ways to achieve this.

If your individual frames do not have washed out stars, then stacking 30, or even 100 frames won't change that.

Your guide camera has very large pixels, which means even with an OAG, you are quite undersampled for guiding. I can't determine the imaging and guiding pixel scales as I don't know the FL of your scope, but as it is a constant in the equation, it's fair to say the ratio of image scale to guiding scale is 5/8. I believe you would get tighter stars by using a guide cam with smaller pixels, such as the QHY5L-II mono or an equivalent ZWO model. This would make your guiding oversampled by a ratio of roughly 5/3, a very significant improvement.


Thank you for your words; they confirm what I have been thinking and my usual choice of ISO.

About guiding I can say:

1) the ASI120MC pixel size is 3.75 microns
2) my equipment gives it a resolution of 0.45"/px (305 mm aperture, fl = 1700 mm)
3) my RMS guiding drift in RA and DEC is around +/- 0.5", with a few spikes of +/- 1.0". See the attached image.

My Canon has a pixel size of 5.18 microns and gives me a resolution of 0.65"/px, and as I use an OAG, the focal length for the ASI and the Canon is the same. So my guiding has more resolution than my Canon.
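These image scales follow from the standard formula, 206.265 x pixel size (µm) / focal length (mm). Plugging in the numbers above reproduces roughly the figures quoted (the Canon works out to about 0.63"/px):

```python
def pixel_scale(pixel_um, focal_mm):
    """Image scale in arcseconds per pixel: 206.265 * pixel size / focal length."""
    return 206.265 * pixel_um / focal_mm

print(f"ASI120MC: {pixel_scale(3.75, 1700):.3f} arcsec/px")   # guide camera
print(f"1100D:    {pixel_scale(5.18, 1700):.3f} arcsec/px")   # imaging camera
```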

But my biggest issue with guiding is refraction. The guide star is always changing shape, and because of this PHD2 finds a different centre of mass each time. Therefore my RMS is a little misleading. My polar alignment is excellent.

PHD graph
The curve shows the star's movement in dx and dy. The guide camera shows the star movement as horizontal for DEC and vertical for RA.

post-43725-0-85894200-1433942926.jpg

note: I invite you to see some of my work at: http://jsmastronomy.30143.n7.nabble.com. There are some good photos there.


If you don't perform a stretch to extract faint detail, then the benefits of long exposures will not readily present themselves, and their drawbacks (including the need for better guiding, better seeing and fewer aircraft :grin:) will make shorter ones look favourable. Stars may be sharper in short exposures, but a stack of long ones allows you firstly to stretch, and then software-sharpen, faint details. Image processing is, then, an extended process in which the merits of longer and shorter subs favour different parts of the image. Where possible, for this reason, I make starfields out of shorter RGB-only subs but use longer LRGB subs for the faint fuzzies themselves.

This is not to criticise your post. Far from it, it's a good one, but I'm just inviting us to consider the whole process of processing.

Olly

