
Hit the DSLR quality buffer with IC434


Twisted Lip

Recommended Posts

Hi Stephen,

Any reason why 20 is the magic number? Yes, the law of diminishing returns kicks in, but as with all things, the more the merrier.

Hi Russell,

I hadn't read your post when I commented above about ISO1600. I haven't tested the camera so I can't say what ISO is best, but have you read something that suggests ISO1600 is best? Was there testing to prove this?

In no way having a dig, but a lot of what I read is just opinion and is very subjective. Statements like "it looks noise free" don't cut the mustard with me. If it is noise free then the numbers won't lie. I was wondering if your source can back up their statement.

Cheers

Paul




Thanks Paul, you are quite right to be cautious when the source of information is unknown or unsubstantiated.

I based my findings on what I've read on this forum and in particular on work carried out by member Ags:

http://stargazerslounge.com/topic/167076-the-noise-produced-by-a-canon-1100d-at-various-iso-settings-and-temperatures/

I would like to do my own analysis with my camera, but I don't know the method to use or how to then analyse the data. So it's a real non-starter!


Well, this has become a most fascinating thread... So, it looks like some controlled scientific experiments to study the relationship between noise and ISO for the 60D will have to be conducted... At least that gives me something useful to do while the weather is rubbish and the moon is glaring! Results to follow... Does anyone know how to actually analyse the results?!!


The testing method is known as photon transfer; it's the industry-standard way of testing image sensors. When performed, it will tell you the gain, read noise, full well, linearity and PRNU. ISO changes the gain and read noise and, by implication, the full well.

The method is straightforward enough.

Set up the camera facing an evenly illuminated surface, e.g. a wall. You might want some greaseproof paper over the lens bayonet to diffuse the light further. I haven't attached a lens when doing this as it may introduce vignetting, but it should work with the lens on if you want.

What you want to do is collect pairs of flats at varying illumination levels, from very dark to overexposed. Before you begin, assess how long it takes to overexpose. Divide the very dark to overexposed interval into, say, 10 or more divisions. You may want a few extra points around overexposure to accurately assess when the pixels are full.

Then you need a bunch of bias frames. The more the merrier.

Repeat for different ISOs if you want.

Data collection done.

Analysis:

Stack the bias frames and subtract the result from every flat. For each pair of flats, subtract one from the other, adding a constant when you do this to avoid zeroes (this can normally be done using pixel-math type operations). The subtraction removes the fixed pattern noise component, leaving just the random stuff. Let's call this the frame difference flat.

For analysis, always use the same set of pixels; a 100x100 crop will give 1% accuracy. Use the same crop each time. From the frame difference flat, find the standard deviation and divide it by sqrt(2). This factor comes from the subtraction of two frames. This quantity is the random noise at that signal level. Go back to the original flat and, using the same pixels, take the average. This is the signal level. Do not use the frame difference flat to measure the average!

So now you have signal and random noise for a bunch of different signal levels.
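The measurement above can be sketched in a few lines of Python with numpy. Synthetic Poisson flats stand in for real bias-subtracted frames here, so the numbers are purely illustrative:

```python
import numpy as np

# Synthetic stand-ins for a bias-subtracted flat pair (real data would be
# loaded from your raw files); mean level 1000 ADU with photon noise only.
rng = np.random.default_rng(0)
F1 = rng.poisson(1000, (400, 400)).astype(float)
F2 = rng.poisson(1000, (400, 400)).astype(float)

# Always measure on the same 100x100 crop (~1% accuracy on the std dev).
crop = (slice(150, 250), slice(150, 250))

# Frame difference removes fixed pattern noise; the offset avoids negatives.
diff = (F1 - F2) + 1000.0
random_noise = diff[crop].std() / np.sqrt(2)  # sqrt(2): two frames subtracted
signal = F1[crop].mean()                      # average from the ORIGINAL flat

print(f"signal = {signal:.1f} ADU, random noise = {random_noise:.1f} ADU")
```

With a mean of 1000 ADU of pure shot noise, the measured random noise comes out near sqrt(1000), about 31.6 ADU, which is a handy sanity check on the sqrt(2) factor.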

In Excel, plot noise against signal.

Right click on each axis and select axis properties, then log axis. You should see how the random noise varies for different signals: at low signals the graph is fairly flat (this is the noise floor, or read noise), then the noise increases in a linear fashion (on log-log axes). There are different ways to calculate the gain, which converts ADU to electrons.

The first is to fit a power law to this graph in Excel. If it allows you to specify the power you want to use, select 0.5. By default the curve will be fit to all values, including those at low signal levels; we don't want that. We only want to fit the curve to the data showing a straight-line dependence, so select only that data. Excel will show you the equation of the fit. The constant out the front is what you want: if the constant is C, then the gain is 1/C^2.
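If you'd rather not use Excel, the same fixed-power fit is easy in numpy. This sketch uses idealised shot-noise data with an assumed gain of 2 e-/ADU rather than real measurements:

```python
import numpy as np

# Idealised points on the straight (shot-noise) part of the curve:
# noise = C * sqrt(signal), with gain = 1/C^2. True gain here is 2 e-/ADU.
gain_true = 2.0
signal = np.array([500.0, 1000.0, 2000.0, 5000.0, 10000.0])  # ADU
noise = np.sqrt(signal / gain_true)                          # ADU

# Least-squares fit of noise = C * signal**0.5 with the power fixed at 0.5.
C = np.sum(noise * np.sqrt(signal)) / np.sum(signal)
gain = 1.0 / C**2
print(f"C = {C:.4f}, gain = {gain:.2f} e-/ADU")
```

Fixing the power at 0.5 is the key point: a free-power fit would let the read-noise floor drag the exponent away from the shot-noise slope.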

Or you can extend this straight line back until it crosses the y=1 line; the intercept is the gain. Or you can calculate it directly. First find the signal from a pair of flats (these have already been bias corrected). Then frame-difference two of your bias frames, again adding a constant, and take the std dev of this. Find the average of the two bias frames. The gain is then G = ((F1 + F2) - (B1 + B2)) / (std dev^2(F1-F2) - std dev^2(B1-B2)), where F1, F2, B1, B2 are the average levels of the flats and biases, and the std dev terms are measured on the frame differences. Note the bias-difference variance is subtracted in the denominator, so only the shot-noise contribution remains.
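The algebraic version of the gain calculation, with synthetic frames standing in for real data (true gain 2 e-/ADU, read noise 5 e-, both invented for the example):

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (400, 400)
gain_true, read_e = 2.0, 5.0  # assumed "true" sensor parameters

def fake_frame(mean_e):
    # Photon shot noise plus Gaussian read noise, converted to ADU.
    electrons = rng.poisson(mean_e, shape) + rng.normal(0.0, read_e, shape)
    return electrons / gain_true

F1, F2 = fake_frame(20000), fake_frame(20000)  # flat pair
B1, B2 = fake_frame(0), fake_frame(0)          # bias pair

# G = ((F1+F2) - (B1+B2)) / (var(F1-F2) - var(B1-B2)), in e-/ADU.
# Subtracting the bias-difference variance removes the read-noise term.
num = (F1.mean() + F2.mean()) - (B1.mean() + B2.mean())
den = np.var(F1 - F2) - np.var(B1 - B2)
gain = num / den

# Read noise: std dev of the bias difference / sqrt(2), times the gain.
read_noise = np.std(B1 - B2) / np.sqrt(2) * gain
print(f"gain = {gain:.2f} e-/ADU, read noise = {read_noise:.2f} e-")
```

The recovered gain and read noise land close to the values baked into the synthetic frames, which is a good way to validate your analysis pipeline before pointing it at real data.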

That's the hard bit done. Now we know how ADU corresponds to the number of electrons. Never trust ADU; always trust e-. Read noise is the std dev of the frame-difference bias, divided by sqrt(2) and multiplied by the gain.

Full well is the point on the straight-line graph where the noise begins to fall sharply as the pixels saturate. Take this signal level and multiply by the gain.

Dynamic range is full well/read noise

PRNU can be found graphically or by use of an equation. PRNU determines how much fixed pattern noise will be present at a given signal. We know the read noise, we know the signal, and if we measure the std dev of a single flat frame we know the total noise. We can then solve for the fixed component.

Total noise^2 = read noise^2 + Gain x (average of single flat) + PRNU^2 x Gain^2 x (average of single flat)^2, with all noise terms in electrons.

We know every term there apart from PRNU.
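Solving that equation for PRNU is a one-liner once the other terms are measured. The numbers below are invented for illustration (they correspond to a 1% PRNU):

```python
import numpy as np

# Made-up example values; all noise terms are in electrons.
gain = 2.0              # e-/ADU, from the gain measurement
read_noise = 5.0        # e-
flat_mean_adu = 10000.0 # average of the single flat, in ADU
total_noise = 245.0     # std dev of the single flat, converted to e-

S = gain * flat_mean_adu  # signal in electrons
# Rearranging: PRNU^2 = (total^2 - read^2 - S) / S^2
prnu = np.sqrt((total_noise**2 - read_noise**2 - S) / S**2)
print(f"PRNU = {prnu * 100:.2f}%")
```

Note how at high signal the PRNU term dominates the total noise, which is why fixed pattern noise, not shot noise, usually limits a single bright flat.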

For linearity, we can plot the signal versus exposure length on a graph.

That's the full photon transfer analysis. A bit tedious, but not too bad. One complication is whether your analysis program displays values on a 16-bit scale; I believe most raw frames are 14-bit.

You can repeat the same analysis for darks.

However, for the purposes of finding what is best, I suggest forgoing the full thing.

Capture 2 flats at various signal levels as before, and capture 2 bias frames. Use the equation for the gain, as it doesn't involve graphing. This will tell you the gain for all the signal levels you have measured; take the average gain. You have already measured the std dev of the frame-difference bias, so divide by sqrt(2) and multiply by the gain as before to find the read noise. Find the point where the noise sharply drops off: this is full well; multiply by the gain. Divide full well by read noise to get dynamic range.

You now have gain (e-/ADU), read noise, full well and dynamic range.

You could also plot total noise from a single flat versus the average of the single flat, to see how your total noise varies with signal.

As for darks, you can do a similar analysis. How do you get signal in your darks? Expose for a while....

Take, say, 1 s, 1 min, 2 min, 5 min and 10 min darks.

Take a bunch of bias, average, and subtract them from the darks.

I would plot the average versus time and find the gradient of the slope in Excel, then multiply by the gain to get e-/s. This is your dark current.
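The slope fit can equally be done with numpy. The dark levels here are invented (a perfectly linear 0.05 ADU/s) and the gain of 2 e-/ADU is assumed from the earlier measurement:

```python
import numpy as np

gain = 2.0                                           # e-/ADU (assumed)
t = np.array([1.0, 60.0, 120.0, 300.0, 600.0])       # exposure times, s
dark_adu = np.array([0.05, 3.0, 6.0, 15.0, 30.0])    # mean bias-subtracted dark level

slope = np.polyfit(t, dark_adu, 1)[0]  # gradient in ADU per second
dark_current = slope * gain            # convert to e- per second
print(f"dark current = {dark_current:.3f} e-/s")
```

A linear fit through several exposure lengths is much more robust than dividing a single dark's level by its exposure time, since any residual bias offset ends up in the intercept rather than the slope.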

I would plot total dark noise (the std dev of the dark) versus time to see how noisy your camera is for different exposure times. When comparing things you must ALWAYS use electron units, never ADU. So if you compare different ISOs, make sure you compare the values in electrons, not ADU. In this case of total dark noise versus time, plot the dark noise in electrons by multiplying by the gain.

With these quantities, which can be deduced fairly easily (the frames are simple to acquire and the analysis isn't too bad), you can objectively compare cameras to see what is best.

Hope this was understandable! Maybe read it a few times or do a Google search to reinforce things. The reduced method I outlined second should take no more than 15 minutes to capture data and about an hour to analyse. Excel is your friend!

Good luck, and post some results if you get them!

Paul


Excellent write up Paul, thanks!  I know what I'll be doing while the clouds are out now :)

One question, would you get a more accurate result if you stacked a set of flats instead of using a single flat? That would even out the signal, dark noise and bias I'd think?

Also, would keeping the flat and bias files at the same temp reduce errors from dark noise?


Excellent write up? I'm not convinced myself. It's hard to put in words; talking about it is much easier.

As for using averages, you need to be very careful as you will reduce the random component which will skew the gain.

I did some mathematical analysis of using multiple frames, though never tested it in practice. I haven't seen that analysis in a while though I'm sure I typed it up.

So there are complications involved in this method. If you understand the maths of noise then it's quite understandable. You know, for example, that the std dev will fall in proportion to the square root of the number of frames.

Try to keep the camera at the one temp, easier said than done though.

Paul


Going back to the beginning and reasons for rejecting frames; once you have over ten or a dozen images in a stack, satellite and plane trails will start to disappear in a good sigma routine. The one in AstroArt 5 is very effective. Even if some vestige of the trail remains, you can make two stacks, one with and one without the trails. Give them both an identical stretch/cut back in Ps, paste the deep one with trails on top, erase just the trails, flatten, and then carry on with a corrected deeper stack. It's best to milk every last drop of your capture data.

Dithering is also a phenomenal weapon against noise, especially in stacks with lots of subs run through Sigma. If you don't have a dither facility in your capture software just switch off your guider and restart it again in a few seconds. That should give you enough.

What I've noticed with DSLR images is that the faster the optics the more DSLR approaches CCD. (In other words fast optics help DSLR even more than they do CCD.)

Olly



Hi Paul,

What I have learned is that no two sensors are the same; noise characteristics are sufficiently different as to render the suggestion of a single ISO for DSO imaging rather pointless. Depending on the camera, anywhere from 400 to 1600 seems to yield acceptable results if the sensor temperature is not too high. I have a modded Canon 1000D, a modded 1100D and an unmodded 1100D, and I have used all of them for DSOs at up to 600 s per sub; they all behave differently at different ISOs and exposure times. The latest one, a modded 1100D, did not impress me at ISO 400 when I tested a few nights ago with 30 subs of 300 s each: the noise was just not acceptable (atmospheric conditions?), so further experiments are obviously needed.

Regards,

A.G


Hi All,

I attach my honours astronomy lab report from 2009. It was the characterisation of the Starlight SXVH16, featuring the KAI4022 sensor. We had some issues and Terry was very quick to help. It hopefully explains photon transfer more clearly, and it shows some of the graphs I was referring to earlier. It's about 40 pages, so it goes into more detail.

I can't find the average PTC analysis I did. [removed word]. Correction!!! Look at page 40. Bingo!

....................................................................................

Indeed, no two sensors are the same. That's why I am suggesting testing your own camera rather than relying on a generalisation. There are a myriad of things going on in an image sensor, so a full evaluation takes a lot of investigation. I am suggesting that this simplified yet still accurate analysis will show how the properties vary with ISO, and you can select the ISO that gives the best parameters for your needs.

By testing off the sky you learn about the sensor's behaviour. Testing on the sky brings variables outside of your control.

Total noise at different temperatures and exposure times is an important quantity for DSLRs, but it's hard to measure as the sensor warms up with use. That's the advantage of a cooled CCD: there is no temperature change with use.

Taking a picture of an object on different days with different cameras at different temps (probably) is not a fair comparison. Only one variable must change during an experiment.

Photon transfer is an objective method for comparison. If it's good enough for NASA...

Paul Kent CCD Characterisation.pdf


Russell, 

Back to the averaging idea. There is no harm in averaging flats or darks together for the purpose of plotting them against time to measure linearity and dark current. In fact, for the darks I recommend this to smooth things out.

It's when you start playing with noise that you have to be very careful.

Paul


Not sure yet :)

How critical is the flat field without gradients? I mean, if the area in the centre where you take the 100x100 pixel sample has no gradient/vignetting is that ok?

Also, DSLRs have a Bayer matrix, so the RGB pixels will have different values. I think this might affect the analysis?


Are you using a lens? Perhaps use no lens and stretch some opaque material across the bayonet fitting.

Take a flat and stretch and try to use as flat an area as possible. Remember to use the same selection box throughout for a fair comparison.

Vignetting is a form of fixed pattern noise, so it will find its way in to your measurement of PRNU if you are measuring that.

It will disappear in the frame-difference flat though.

So it is and is not important.

As for the Bayer matrix, yes I think it's best to do the interpolation.

I'm not 100% sure on the best way to do it, normally it's 16bit mono CCDs that get photon transfer analysis, and it's easier to work with.

You could do a quick test to compare the method though.

I imagine your raw files are 14-bit? The programs I used display those 14 bits on a 16-bit scale. If you can keep working in 14-bit space then that is probably for the best. That may mean keeping it in raw.

What about 2x2 binning? That would sum all the charge from the Bayer matrix. How does this affect the measurement? Well, hopefully in a similar way to a 2x2 bin on a CCD.

So there are several things to think about for the DSLR case.

I will help where I can.

Paul


I was trying with it fitted to my scope. The centre is pretty flat but out at the edges the vignetting starts to creep in.

What you can do is separate the red, green & blue pixels into 4 separate frames (2 green pixels) without any interpolation. Maybe it would be better to analyse these frames individually?

Yes, my camera is 14-bit. I'll need to find some software that will work in 14-bit, as the ones I have all convert to 16-bit.

I'm not aware you can hardware bin a DSLR; it could only be done in software, which wouldn't really give the desired effect, I would think.

Also, I wasn't clear on how you selected the timings between max and min capture duration. As the DSLR is restricted in exposure length when dealing with fractions of a second, maybe I can just use the preset figures? I.e. 1/1000, 1/500, 1/400, 1/200, 1/100 etc.?

Separating might be an idea, but you would have alternating gaps where no light was detected, and that will skew the std dev. I think we want a Bayer-interpolated image, then extract the luminance channel.

Yeah using it in 14 bit might be an issue.

You are right about binning, it's a software bin.

The shortest would be 1/4000. Then find where overexposure begins to kick in, and use 10-20 preset exposure times to span this range. I was speaking very generally earlier and should have mentioned this.

Paul


Gentlemen, I commend you for your ability to understand what the heck you are doing with all this PRNU, full well deviation stuff. I believe it is just too much for a bear of little brain such as I.

Anyway, just to return to the OP briefly...

Will, did you dither your subs at all? I started processing my horsey subs last night and the results have amazed me. Aside from issues with meridian flip images and not working out how to flip the raw files back so that Nebulosity can process everything in the same orientation... I have stacked five 900s subs and the result seems the most noiseless image I have made yet. The reason? The subs are significantly shifted from one to the other: they are significantly dithered. This happens to have been thanks to an accident; I think I was having guiding problems after the flip and there were guide gaps between subs, so the image shifted significantly on the chip. However, the results are very, very significant! So, I'm going to be looking at how to make a large dither deviation between subs using PHD and BackyardEOS.

Will, worth giving a try as I think your images will benefit massively.

Now, moon, go away, clouds, go away, wind, go away, work, go away, I would like to try and do some AP!!!!



Hey Gav, no I didn't. It was just captured using EOS Utility (the Canon) and PHD (guiding) and so wasn't dithering between subs. I'm not actually sure if PHD can do this as a separate entity - I think I need an APT or Nebulosity to marry up the capturing. I'm sure I read somewhere a while ago I could dither in processing but I have to be honest I'm not sure where I read it or how to do it.

Have you got an example of the image you took, to see how it compares?

Cheers

Will


Will, worth checking out BackyardEOS for camera control - works well and communicates with PHD to do the dithering. I don't use APT, so can't speak for that. Would be interesting to know how it compares.

I will post an image just as soon as I can, hopefully tomorrow.



Good point, I'd forgotten about BEOS. Like you I haven't used either APT or Nebulosity, but I've heard good things about both (plus they will both control DSLRs and CCDs, which is a bonus).

Anyone able to give a quick thumbs up or thumbs down for APT, BEOS or Nebulosity?

Will


Will,

I have posted my processed horse here: http://stargazerslounge.com/topic/204703-horse-with-a-flip/

It's still pretty noisy and I had to run Noel's Deep Space Noise Reduction a couple of times. I'm hoping that more data with large dithering will help to reduce the noise further.

Ultimately I think that you are basically right and a DSLR is not the weapon of choice to make astrophotos. However, do you have £3k hanging around to drop on what is just a bit of a fun hobby?!

I'm saving...


Archived

This topic is now archived and is closed to further replies.
