Everything posted by narrowbandpaul

1. Hi Russell, Good to see you have some numbers already. Those links will hopefully allow access to the 14-bit data. The graph looks OK, but with a constant of 16.5 the gain is very low, which is a bit suspect. Some good research you are doing, and hopefully others can follow. How can you use overscan? I'm not sure of the exact operation of the CMOS in an SLR, as Canon will keep that hush-hush, but assuming it's an active pixel sensor then each pixel will contain an output amplifier, which means the offset will vary from pixel to pixel, and there is no ability to read a larger area than the physical size of the sensor. I am not 100% on that, so please correct me if I'm wrong. Cheers Paul
  2. That's really not a bad start. The moon is a bit overexposed, possibly because the camera doesn't have enough control over its exposure. Perhaps there is a setting which gives you some control over how bright the object appears. It will vary between manufacturers. A shorter focal length eyepiece will magnify the moon for you. Don't be too disheartened. There are far worse photos than this kicking about on the internet. Paul
3. Hi AG, Ivo is very good at writing software and helping people with it. It's a big job for just one person. I like the programme. On trial I found it easier and more effective than my current processing. I hope he continues with it, as it does most of what PI does for a fraction of the cost. I think if more folk knew about it, including some of the big wigs, then that would raise its profile. Paul
  4. There are many mosaics available of the Milky Way online. Have a look through these. This should be enough to convince you. It would be nice if you had discovered a galaxy toward the centre of the Milky Way but unfortunately you haven't. It's not possible. The amount of dust on that line of sight would create so much extinction as to render it very faint in visible light. You would need to look in the IR and your camera is IR blocked. Paul
5. Really nice job there Sara. Is it Ha/OIII/OIII? You could add about 30% of the Ha into the blue channel to simulate the Hb emission. I like the nice bold colours! Paul
6. Hi lensman, Good job. I see you used StarTools. I have played around with it and I think it's very good. What do you make of it? Paul.
7. This image is taken looking south, toward the centre of the Milky Way. Andromeda is nowhere near the general direction of Aquila and Scutum. It's definitely a lens flare. As pointed out, it's the same colour. I don't know of any large astronomical objects in that direction that have large sodium emission. Very nice image though, but it's a lens flare. Paul
8. Separating the channels might be an idea, but you would have alternating gaps where no light was detected, and that will skew the std dev. I think we want a Bayer-interpolated image and then extract the luminance channel. Yeah, using it in 14 bit might be an issue. You are right about binning, it's a software bin. The shortest exposure would be 1/4000s. Then find where overexposure begins to kick in. Use 10-20 preset exposure times to span this range. I was speaking very generally earlier and should have mentioned this. Paul
9. Are you using a lens? Perhaps use no lens and stretch some opaque material across the bayonet fitting. Take a flat, stretch it, and try to use as flat an area as possible. Remember to use the same selection box throughout for a fair comparison. Vignetting is a form of fixed pattern noise, so it will find its way into your measurement of PRNU if you are measuring that. It will disappear in the frame difference flat though, so it is and is not important. As for the Bayer matrix, yes, I think it's best to do the interpolation. I'm not 100% sure on the best way to do it; normally it's 16-bit mono CCDs that get photon transfer analysis, and those are easier to work with. You could do a quick test to compare the methods though. I imagine your raw files are 14 bit? The programmes I used display these 14 bits on a 16-bit scale. If you can keep working in 14-bit space then that is probably for the best. That may mean keeping it in raw. What about 2x2 binning, which would sum all the charge from the Bayer matrix (see the sketch below)? How does this affect the measurement? Well, hopefully in a similar way to a 2x2 bin on a CCD. So there are several things to think about for the DSLR case. I will help where I can. Paul
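For the 2x2 binning idea, here is a minimal numpy sketch of a "superpixel" bin that sums each RGGB quad into one pixel, pooling the charge from all four colour sites. The function name is my own, and it assumes the raw frame is already decoded to a 2-D array (e.g. with a raw decoder such as rawpy):

```python
import numpy as np

def superpixel_bin(raw):
    """Sum each 2x2 Bayer quad (e.g. RGGB) into a single pixel.

    This pools the signal from all four colour sites, roughly
    mimicking a 2x2 hardware bin on a mono CCD.
    """
    h = (raw.shape[0] // 2) * 2          # trim to even dimensions
    w = (raw.shape[1] // 2) * 2
    r = raw[:h, :w].astype(np.float64)
    return (r[0::2, 0::2] + r[0::2, 1::2] +
            r[1::2, 0::2] + r[1::2, 1::2])
```

Note that summed 14-bit values can slightly exceed a 16-bit scale, which is one more reason to work in floating point rather than the camera's integer scale.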
10. Yeah, that's fine. What method are you trying, the full shebang or the abbreviated version? Paul
11. The atmosphere twinkles at scales of around a few tens of ms. By taking a lot of very short exposures you are hoping to get lucky and catch a few where the atmosphere is very steady, so that you reach the diffraction limit of the telescope. The more you take, the more selective you can be. With 6300 frames I would have thought a few would show almost no signs of being affected. I would be very selective and only stack the best few (see the sketch below). As for averaging out seeing conditions, well, that's not my understanding. The term isoplanatism refers to the angular patch of sky where the effects of twinkling are correlated. This can be just a few arcseconds, i.e. within the disk of Jupiter. What this means is that the atmosphere is composed of little cells that have an angular size roughly equal to the isoplanatic angle. So just because one part of Jupiter, say, was sharp doesn't mean that another part is. Whether it is or not depends on wavelength, the value of the Fried parameter and the height of the atmosphere causing the twinkling. Stacking these won't necessarily increase the resolution or reduce the seeing. By being selective I think you are really looking for an image where all of Jupiter is unaffected by seeing. The probability of this is very low, so you have to be very selective. That's the way to resolution. Paul
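A minimal sketch of that "keep only the best few" selection, using the variance of the Laplacian as a sharpness score (a common focus proxy). It assumes the frames are already aligned and loaded as grayscale numpy arrays; real lucky-imaging tools also align frames and score per region, so treat this as illustrative only:

```python
import numpy as np
import cv2  # OpenCV, for the Laplacian

def sharpness(frame):
    """Variance of the Laplacian: higher means sharper."""
    return cv2.Laplacian(frame.astype(np.float64), cv2.CV_64F).var()

def stack_best(frames, keep_fraction=0.02):
    """Score every frame, keep only the sharpest few percent, average them."""
    scores = np.array([sharpness(f) for f in frames])
    n_keep = max(1, int(len(frames) * keep_fraction))
    best = np.argsort(scores)[-n_keep:]   # indices of the sharpest frames
    return np.mean([frames[i] for i in best], axis=0)
```

With 6300 frames, a keep_fraction of 0.02 stacks roughly the best 125.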
12. Russell, back to the averaging idea. There is no harm in averaging flats or darks together for the purpose of plotting them against time to measure linearity and dark current. In fact, for the darks I recommend this to smooth things out. It's when you start playing with noise that you have to be very careful. Paul
13. Hi All, I attach my honours astronomy lab from back in 2009. It was the characterisation of the Starlight SXVH16 featuring the KAI4022 sensor. We had some issues and Terry was very quick to help. It hopefully explains the photon transfer method more clearly, and it shows some graphs I was referring to earlier. It's about 40 pages, so it does go into more detail. I can't find the average PTC analysis I did. [removed word]. Correction!!! Look at page 40. Bingo!

Indeed, no two sensors are the same. That's why I am suggesting testing your own camera rather than relying on a generalisation. There are a myriad of things going on in an image sensor, and so fully evaluating one takes a lot of investigation. I am suggesting that this simplified yet still accurate analysis will show how the properties vary with ISO, and you can select the ISO that gives the best parameters for your needs. By testing not on the sky you learn about the sensor's behaviour; testing on the sky brings in variables outside of your control. Total noise at different temperatures and exposure times is an important one for DSLRs, but it's hard to measure as the sensor warms up with use. That's the advantage of a cooled CCD: there is no temperature change with use. Taking a picture of an object on different days with different cameras at different temperatures (probably) is not a fair comparison. Only one variable must change during an experiment. Photon transfer is an objective method for comparison. If it's good enough for NASA..... Paul

Kent CCD Characterisation.pdf
14. Excellent write up? I'm not convinced myself. It's hard to put in words; talking about it is much easier. As for using averages, you need to be very careful, as you will reduce the random component, which will skew the gain. I did some mathematical analysis of using multiple frames, though never tested it in practice. I haven't seen that analysis in a while, though I'm sure I typed it up. So there are complications involved in this method. If you understand the maths of noise then it's quite understandable. You know, for example, that the std dev of an average will fall in proportion to the square root of the number of frames (see the quick demonstration below). Try to keep the camera at the one temperature, easier said than done though. Paul
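As a quick check of that sqrt(N) behaviour, here is a toy numpy demonstration with synthetic Gaussian frames (purely illustrative, not a camera measurement):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 10.0                                   # per-frame random noise
frames = rng.normal(1000.0, sigma, (16, 100, 100))

for n in (1, 4, 16):
    avg = frames[:n].mean(axis=0)              # average of n frames
    print(n, avg.std())                        # std falls roughly as sigma/sqrt(n)
```

The printed std dev drops from about 10 to about 5 to about 2.5, i.e. by sqrt(4) and sqrt(16).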
15. The testing method is known as photon transfer; it's the industry standard way of testing image sensors. When performed it will tell you the gain, read noise, full well, linearity and PRNU. ISO changes the gain and read noise, and by implication the full well.

The method is straightforward enough. Set up the camera facing an evenly illuminated surface, e.g. a wall. You might want some greaseproof paper over the lens bayonet to diffuse the light further. I haven't attached a lens when doing this, as it may introduce vignetting, but it should work with the lens on if you want. What you want to do is collect 2 flats at each of several illumination levels, from very dark to overexposed. Before you begin, assess how long it takes to overexpose. Divide the very dark to overexposed interval into say 10 or more divisions. You may want a few more around overexposure to accurately assess when the pixels are full. Then you need a bunch of bias frames; the more the merrier. Repeat for different ISOs if you want. Data collection done.

Analysis: stack the bias frames and subtract them from every flat. For each pair of flats, subtract one from the other. You need to add a constant when you do this to avoid zeroes; this can normally be done using pixel math type operations. The subtraction removes the fixed pattern noise component, leaving just the random stuff. Let's call this the frame difference flat. For analysis, always use the same set of pixels: a 100x100 crop will give 1% accuracy, and use the same crop each time. From the frame difference flat, find the standard deviation and divide by sqrt(2); this factor comes from the subtraction of two frames. This quantity is the random noise at that signal level. Go back to the original flat and, using the same pixels, take the average. This is the signal level. Do not use the frame difference flat for measuring the average! So now you have signal and random noise for a bunch of different signal levels.

In Excel, plot this. Right click on each axis and select axis properties, log axis. You should see how the random noise varies for different signals. At low signals the graph is fairly flat: this is the noise floor, or read noise. Then you should see the noise increasing in a linear fashion (on log-log axes).

There are different ways to calculate the gain, which converts ADU to electrons. The first is to fit a power law to this graph in Excel. If it allows you to specify the power you want to use, select 0.5. The curve will be fit to all values, including those at low levels. We don't want this; we only want to fit the curve to the data showing a straight-line dependence, so select only that data. Excel will show you the equation of the fit. The constant out the front is what you want: if the constant is C, then the gain is 1/C^2. Or you can extend the straight line back until it crosses the y=1 line; the signal at that intercept is the gain. Or you can calculate it directly. Take two flats and two bias frames. Find the average of each, then frame difference the two flats and the two biases (again adding a constant) and take the std dev of each difference. The gain is then

G = ((avg F1 + avg F2) - (avg B1 + avg B2)) / (std dev^2(F1 - F2) - std dev^2(B1 - B2))

That's the hard bit done. Now we know how ADU corresponds to the number of electrons. Never trust ADU, always trust e-.

Read noise is the std dev of the frame difference bias, divided by sqrt(2) and multiplied by the gain. Full well is the point on the straight-line graph where the noise begins to fall sharply as the pixels saturate.
Take this signal level and multiply by the gain to get the full well in electrons. Dynamic range is full well / read noise.

PRNU can be found graphically or by use of an equation. PRNU is how much fixed pattern noise will be present at a given signal. We know the read noise, we know the signal, and if we measure the std dev of a single flat frame we know the total noise. We can then solve for the fixed component:

Total noise^2 = read noise^2 + Gain * (average of single flat) + PRNU^2 * Gain^2 * (average of single flat)^2

We know every term there apart from PRNU. For linearity, plot the signal versus exposure length on a graph. That's the full photon transfer analysis. A bit tedious, but not too bad. One complication is whether your analysis program displays values on a 16-bit scale; I believe most raw frames are 14 bit. You can repeat the same analysis for darks.

However, for the purposes of finding what is best, I suggest maybe foregoing the full thing. Capture 2 flats at various signal levels as before, and capture 2 bias frames. Use the equation for the gain, as it doesn't involve graphing. This will tell you the gain for all signal levels you have measured; take the average gain. You have already measured the std dev of the frame difference bias, so multiply by the gain and divide by sqrt(2) as before to find the read noise. Find the point where the noise sharply drops off: this is full well, multiply by the gain. Divide full well by read noise to get dynamic range. You now have gain (e-/DN), read noise, full well and dynamic range. You could also plot the total noise from a single flat versus the average of the single flat, to see how your total noise varies with signal. A worked sketch of this abbreviated method is below.

As for darks, you can do a similar analysis. How do you get signal in your darks? Expose for a while.... Take say 1 s, 1 min, 2 min, 5 min and 10 min darks. Take a bunch of bias frames, average them, and subtract them from the darks. I would plot the average versus time and find the gradient of the slope from Excel; multiply by the gain to get e-/s. This is your dark current. I would also plot the total dark noise (std dev of the dark) versus time, to see how noisy your camera is for different exposure times.

When comparing things you must ALWAYS use electron units, never ADU. So if you compare at different ISOs, make sure you compare the value in electrons, not ADU. In the case of total dark noise versus time, plot the dark noise in electrons by multiplying by the gain. With these quantities, which can be deduced fairly easily (the frames are simple to acquire and the analysis isn't too bad), you can objectively compare camera quantities to see what is best.

Hope this was understandable! Maybe read it a few times or do a Google search to reinforce things. The reduced method I outline second should take no more than 15 mins to capture data and about an hour to analyse. Excel is your friend! Good luck and post some results if you get them! Paul
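For anyone who prefers code to Excel, here is a minimal numpy sketch of the abbreviated method's gain and read noise calculation, using the frame-difference formula above. It assumes the frames are already decoded to 2-D arrays on their native 14-bit scale; the function name and crop are my own choices:

```python
import numpy as np

def photon_transfer(flat1, flat2, bias1, bias2,
                    box=(slice(0, 100), slice(0, 100))):
    """Gain (e-/ADU) and read noise (e-) from two flats and two biases.

    'box' is the fixed analysis crop (100x100 here, for ~1% accuracy);
    use the same crop for every pair in your exposure series.
    """
    f1, f2 = flat1[box].astype(np.float64), flat2[box].astype(np.float64)
    b1, b2 = bias1[box].astype(np.float64), bias2[box].astype(np.float64)

    # Frame differences cancel fixed pattern noise, leaving random noise * sqrt(2).
    var_flat_diff = np.var(f1 - f2)
    var_bias_diff = np.var(b1 - b2)

    # Gain: mean signal over shot-noise variance, both in ADU.
    gain = ((f1.mean() + f2.mean()) - (b1.mean() + b2.mean())) \
           / (var_flat_diff - var_bias_diff)          # e-/ADU

    read_noise = gain * np.std(b1 - b2) / np.sqrt(2)  # electrons
    return gain, read_noise
```

Run this over each pair of flats in your series and average the resulting gains, exactly as described above.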
16. I should say the Flaming Star is quite OIII-weak, so getting any blue in there is unusually difficult, but you have managed it. Paul
17. Looking good, and with plenty of data. Perhaps your HST version could be pushed a bit further. As for colour, anything goes really. The one thing I try to do is have reds, greens and blues showing. Don't let the image become dominated by a single colour (usually Ha); make sure you see all three colours in the final image. I wouldn't mind having a go at this myself should you make the data available. Cheers Paul
18. Hi Stephen, Any reason why 20 is the magic number? Yes, the law of diminishing returns kicks in, but as with all things the more the merrier. Hi Russell, I hadn't read your post when I commented above about ISO1600. I haven't tested the camera so I can't say what ISO is best, but have you read something that suggests ISO1600 is best? Was there testing to prove this? In no way having a dig, but a lot of what I read is just opinion and is very subjective. Statements like "it looks noise free" don't cut the mustard with me. If it is noise free then the numbers won't lie. I was wondering if your source can back up their statement. Cheers Paul
19. I would avoid relying on that kind of hearsay unless you know the source to be very reliable. These things can be easily measured, and you will find what works best for your camera. True, the read noise might be lower at higher ISO, which is good, but the full well is reduced too, which impacts dynamic range. Cheers Paul
20. If cameras baffle you then I refer you to this little ditty I made a while ago: http://stargazerslounge.com/topic/52031-choosing-a-ccd-camera/ This should go some way toward explaining some of the properties of a CCD. They aren't particularly baffling, but they can seem that way! Paul
21. For planetary, stick with the VX8 at its native focal length, get the fast frame rate camera, grab loads of frames, then stack only the best. A decision would need to be made between a mono and an OSC planetary cam. Paul
22. Hi Olly, I assumed there was a typo in there. We all have our moments. I have never used a bad pixel map but will experiment at some point, I'm sure. As for doing 100 30-min darks, what's so wrong with that? It just takes 2 days of continuous imaging. I feel even worse for Tim with his 2-hour subs! It raises the point that the law of diminishing returns applies not only to light frames, but to darks, flats and bias too. And costs: think about how much you would have to spend to get just a small improvement in image quality. Beyond a certain point, improvements, whilst small, come at great expense. Paul
23. The law of diminishing returns comes from the fact that, in principle, the SNR increases in proportion to the square root of the exposure time or the number of combined images. Compared to a single sub, to double the SNR we need 4x as many subs; to double it again we need 4x as many again, and so on. A 10x improvement over a single sub will require 100 subs. So to get better and better you need to throw a disproportionate number of subs at it.

Consider a single sub that detects S photons in a pixel. The random error on this is sqrt(S), and so the signal to noise is S/sqrt(S), i.e. sqrt(S). If you take one more sub, or double the exposure length, then you have twice the signal (as that is linear) and the random error is sqrt(2S). The signal to noise then is 2S/sqrt(2S), i.e. SNR = sqrt(2)*sqrt(S). But sqrt(S) was the SNR of our single sub. So in other words, the SNR for 2 subs, or for twice the exposure length, is sqrt(2) times the SNR of the single sub. This is trivially extended to combining N images: the factor is just sqrt(N).

This assumes that the random error, or shot noise, is the only factor. Read noise plays a part, and analysis including it shows that it is beneficial from a SNR point of view to take a few long subs rather than many short ones (for the same total integration time). So a single 60 min sub will be better than 60 one minute subs, and in practice this is borne out (see the sketch below). However, when stacking, pixel rejection algorithms are used, and these rely on having several subs, so that may be the best overall compromise. Hope that helps, Cheers Paul
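A minimal sketch of that read noise trade-off, under the simplifying assumption of shot noise plus read noise only (no dark current or sky terms); the numbers are hypothetical:

```python
import numpy as np

def stacked_snr(total_signal_e, n_subs, read_noise_e):
    """SNR after combining n_subs subs that together collect
    total_signal_e photo-electrons; read noise is paid once per sub."""
    noise = np.sqrt(total_signal_e + n_subs * read_noise_e**2)
    return total_signal_e / noise

# One hour total, 5 e- read noise: one 60 min sub vs sixty 1 min subs.
S = 36000.0                         # hypothetical total photo-electrons
print(stacked_snr(S, 1, 5.0))       # ~189.7
print(stacked_snr(S, 60, 5.0))      # ~185.9 (read noise paid 60 times)
```

The gap widens as read noise grows relative to the per-sub signal, which is why faint targets suffer most from very short subs.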
24. I have no idea where this idea of a ratio came from. I assure you it's a myth, with no foundation in science. Olly, are you saying that stacking many frames will reduce the non-random noise? When you stack many, it's the random stuff you reduce, so as to not inject any noise when you subtract or divide out the fixed stuff (vignetting and hot pixels etc).