
narrowbandpaul

Members
  • Content Count

    2,055
  • Joined

  • Last visited

Everything posted by narrowbandpaul

  1. What the author has done is take his full well in electrons and divide it by the gain to get the full well in ADU. This will be around 60-65,000 for all 16-bit cameras, so the saturation ADU should always be around 60-65,000. As for the signal to use per flat, I see a lot of different numbers from different people. Most of it is old wives' tales without a shred of fact. The quality of a flat can be expressed as Q = S × N, where S is the signal per flat and N is the number of flats; this term actually appears in the mathematics of flat fielding. By using a low value of S you only hurt yourself, especially when combined with just 10-15 flats, as seems so common. I know of the need to avoid non-linear effects, but modern CCDs with anti-blooming are still linear over the majority of the dynamic range. That's why I always use about 2/3rds saturation, i.e. around 40,000 ADU. I also take a good number, 30-50. You need to minimise the injection of random noise during the process, so a high-quality flat is essential. Hope that was of use. Paul
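The Q = S × N point can be put in numbers. A minimal Python sketch (the gain value is an invented example, not from any particular camera): shot noise is Poisson in electrons, so the SNR of a master flat scales as the square root of signal-per-flat times number of flats.

```python
import math

def master_flat_snr(signal_adu, n_flats, gain_e_per_adu=1.5):
    """Shot-noise-limited SNR of a master flat.

    Shot noise is Poisson in electrons, so SNR per flat = sqrt(S_e),
    and averaging N flats improves it by sqrt(N): SNR = sqrt(S_e * N).
    Gain of 1.5 e-/ADU is a made-up illustrative value.
    """
    signal_e = signal_adu * gain_e_per_adu
    return math.sqrt(signal_e * n_flats)

# 40,000 ADU x 40 flats vs 10,000 ADU x 10 flats
deep = master_flat_snr(40_000, 40)
shallow = master_flat_snr(10_000, 10)
print(round(deep / shallow, 1))  # → 4.0: the deeper stack is 4x cleaner
```

So going from the common "10-15 shallow flats" to 2/3-saturation flats and 40 of them buys a factor of a few in flat quality, exactly as Q = S × N predicts.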
  2. I guess this was taken by your Canon? RBI, to the best of my knowledge, won't be found in a CMOS sensor, though I can't confirm that; you need a full-frame sensor for that. Also, the RBI would have the same appearance as your previous light frames. As Dave said, I really don't think it's RBI. Perhaps focuser flexure causing a severely offset vignetting pattern? Paul
  3. Sounds like a nice tool. I don't know where the idea of 25% saturation comes from, but as long as the user can adjust the desired value then that's fine. Why is the saturation ADU given as 40,000? 16 bits gives you 65,535. If the manufacturer gives you a camera with a gain that high without good reason, then send it back. Overall a nice idea. Paul
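The full-well/gain/saturation relationship being questioned here can be sketched as follows; the sensor numbers are illustrative, not taken from any specific data sheet.

```python
def saturation_adu(full_well_e, gain_e_per_adu, bit_depth=16):
    """ADU level at which the pixel or the ADC clips first."""
    adc_max = 2 ** bit_depth - 1
    return min(full_well_e / gain_e_per_adu, adc_max)

# Invented numbers: 25,500 e- full well with a gain of 0.39 e-/ADU
# puts saturation just under the 16-bit ceiling of 65,535.
print(round(saturation_adu(25_500, 0.39)))  # → 65385

# A higher gain wastes ADC range: saturation lands well below the ceiling.
print(round(saturation_adu(25_500, 0.64)))  # → 39844, i.e. ~40,000 ADU
```

A well-matched gain maps the full well onto the whole 16-bit range; a saturation point near 40,000 ADU suggests the gain is higher than it needs to be.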
  4. Hi Stuart. Yes, I agree that fitting the high dynamic range offered by a low ISO into a 12- or 14-bit RAW could lead to some quantisation noise. I also agree that the limit hasn't been reached. Long subs and a high-quality calibration with dithering would be a good start. I believe you posted an image with many hours, so the DSLR has much use. Paul
  5. Hi Olly, Of course there is more to the camera than the chip. I struggle to see how a manufacturer can seemingly get it wrong, since he has very little control over the sensor itself, and all the control signals are written in the data sheet. Still, it is possible. I would never claim to be technically expert in anything; I'm very far from it. This stuff interests me and so I read about it. I have got an incredible amount still to learn! Paul
  6. Thanks Rob. Sounds like a readout-electronics issue from what you describe; it's just about the only thing the camera manufacturer has some control over. A high noise floor means a longer integration time for a given SNR. Paul
  7. Hi Rob, I have never seen the SX version but regularly have the chance to use the QSI583 version, and have never found it lacking. Camera manufacturers won't have a lot of influence on sensor properties, and the data sheet for the sensor shows some good numbers, so it's very odd that it seemed so poor. Maybe SX used some lower-grade sensors or never optimised the camera's performance. There are many happy 8300 users, so I'm disinclined to think it's the sensor. Are you able to elaborate more on what was so poor? High noise floor, inadequate cooling...? Cheers Paul
  8. Nice job there Gordon. Love the colour. What do you make of the Atlas with the PL16803, I have my eyes on that! Have you seen RDC's 60 odd hour version? It's just not fair!... Cheers Paul
  9. Nice image Yves! Olly, what doubts did you have with the 8300? It's a fairly good offering from Kodak (Truesense). Paul
  10. Sorry, I think you have the wrong number. This is imaging deep sky, think you are looking for Observing deep sky Paul
  11. Hi Russell. It mentions this issue in the paper I sent you about reading raw files in Matlab. I think dcraw also has the option of outputting just the linear data. I think it varies from camera to camera, so the 1100D may or may not modify the data before giving you the raw file. I think dcraw has some good uses, though I haven't used it. I hope to characterise the KAF8300 sensor soon. Paul
  12. Hi Ags, Can you show that a single 2.5hr sub well calibrated with good flats and darks is bad. Tim on here has experimented with multi hour subs. Don't think they were bad. Long subs are simply the best way to a high SNR for a given time. Cheers Paul
  13. Hi Gav, You are most certainly correct about long subs. A fair few threads recently have highlighted this, both from a practical point of view (Olly is a big fan of long subs) and from a more mathematical point of view by myself. Long subs are good because the imaging time is split into just a few subs, which means just a few instances of read noise; short subs are bad because of the many reads. An argument could be made for having enough subs for a good rejection algorithm. Some argue the random noise would be lower with more, shorter subs because there is less signal to create that noise, but with double the subs the shot noise in a single frame is only lower by a factor of 1.4 while the signal has fallen by a factor of 2, so is that really a better sub than the long one? I reckon there is a trade-off: between as few subs as possible (subject to guiding, light pollution, saturation etc.) to limit the read-noise contribution, and having just enough for a pixel-rejection algorithm. I would have thought 6-10 subs is enough for that. So when you next go out and think how long you want on this object, divide that by something like 6-10 and use that length as your sub. Good calibration frames and dithering will help too. Paul
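The read-noise argument above can be sketched numerically. This is a minimal illustration with made-up object/sky rates and read noise (all in electrons); the point is that for a fixed total time, the shot noise is the same however you split it, so the stack SNR depends only on how many times you pay the read noise.

```python
import math

def stack_snr(total_time_s, n_subs, obj_rate=0.5, sky_rate=2.0, read_noise=7.0):
    """SNR of a stack: object and sky shot noise depend only on total
    integration time; read noise is paid once per sub. All rates/noise
    are invented illustrative values in electrons."""
    obj = obj_rate * total_time_s
    sky = sky_rate * total_time_s
    return obj / math.sqrt(obj + sky + n_subs * read_noise ** 2)

# Four hours total: 8 x 30 min vs 240 x 1 min
print(round(stack_snr(14400, 8), 1), round(stack_snr(14400, 240), 1))  # → 37.7 32.9
```

Same four hours, noticeably worse result from the short subs, purely because read noise was paid 240 times instead of 8. Under heavy light pollution the sky term swamps the read-noise term and the gap shrinks, which is why the trade-off depends on your conditions.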
  14. Were you able to linearise the data in the raw file? Does the 1100D do any on board non linear processing in the raw file? Paul
  15. Good job Russell, the data looks sensible, and the gain is now much nearer unity, which makes sense. The hard bit is done. You now know the gain (e-/ADU), so you can state things like read noise, full well etc. in physical units rather than the ambiguous ADU. Looks like you may need to take a few more measurements at low ISO near over-exposure to accurately determine the full well. A lot of work went into that, as it's not as straightforward as a CCD, so well done. Paul
  16. Sounds promising Russell. Non-clipped data on a 14-bit scale! Are these stats just for one colour or for the whole thing? Ideally you need to look at just one colour, otherwise you will skew the std dev to a higher value. I wonder if the subtract-dark option uses the thermally generated electrons in the light-shielded overscan region. This would be quite an inaccurate way of doing it (although nothing wrong with it in principle); using a master bias would probably be best, although for short exposures there wouldn't be a difference. If you subtract off the bias using the light-shielded pixels, then you can account for fluctuating bias levels if present. Good progress! Paul
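The point about tracking a fluctuating bias with the light-shielded pixels can be sketched with synthetic data (all the numbers here are invented):

```python
import numpy as np

rng = np.random.default_rng(1)

# Tonight the bias level has drifted from its nominal 1000 to 1012 ADU.
# The light frame carries bias + 500 ADU of real signal; the light-shielded
# overscan strip sees the same drifted bias but no photons.
bias_tonight = 1012.0
light = rng.normal(bias_tonight + 500.0, 12.0, size=(100, 100))
overscan = rng.normal(bias_tonight, 12.0, size=(100, 8))

# Subtracting the overscan median tracks tonight's bias automatically;
# a master bias built on another night would be ~12 ADU off.
calibrated = light - np.median(overscan)
print(round(calibrated.mean(), 1))  # ≈ 500
```

For short exposures the shielded pixels accumulate negligible dark current, so this behaves just like a bias subtraction while also absorbing any session-to-session drift.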
  17. I hear what you're saying Mike. I like the idea of HDR, I think it has more uses than just orion and andromeda. Keep experimenting! Paul
  18. Yes, we use cameras for hobby purposes, but some people (including me) think that image sensors are interesting in their own right, and that by testing them you might just learn something that will help your imaging. Scientific value is certainly inherent in the CCD, as it is this device that has revolutionised observational astronomy. Yes, RAW data is not truly raw, but it can be linearised and could then be used for photometry, for example. When an image is taken it has the potential to be used for scientific purposes, even if we choose not to do so. In fact, the scientific integrity of an imaging system is paramount to capturing good images; it's how you get a faithful representation of the image. Paul
  19. That Craig Stark article is really quite good and very informative. RAW data is hardly raw! This makes the analysis trickier, but clearly it can be done. It also highlights how nice a cooled CCD is, with thermal stability and true unadulterated data. Paul
  20. It's the same procedure as I outlined, except he plots variance (std dev²) against signal, so that the gain follows from the gradient of the straight-line fit (with the gain in e-/ADU, the gradient is 1/gain). Paul
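That variance-vs-signal (photon transfer) fit can be simulated in a few lines, assuming an invented gain of 1.6 e-/ADU; with the gain in e-/ADU, the gradient of the fit is 1/gain.

```python
import numpy as np

rng = np.random.default_rng(42)
true_gain = 1.6  # e-/ADU, an assumed value for the simulation

# Simulate flats at several illumination levels: shot noise is Poisson
# in electrons, then dividing by the gain converts to ADU.
signals, variances = [], []
for mean_e in [2000, 5000, 10000, 20000, 40000]:
    frame = rng.poisson(mean_e, size=100_000) / true_gain
    signals.append(frame.mean())
    variances.append(frame.var())

# variance_ADU = signal_ADU / gain, so the slope of the fit is 1/gain.
slope, intercept = np.polyfit(signals, variances, 1)
print(round(1 / slope, 2))  # recovers ~1.6 e-/ADU
```

In a real measurement you would use differenced flat pairs at each level (to cancel the fixed-pattern component) and subtract the bias first; this sketch only shows why the slope encodes the gain.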
  21. http://www.cloudynights.com/item.php?item_id=2786 It is useful. Good find.
  22. Hi Russell 1. I wasn't sure if there was an option for this or not, my bad! 2. I wonder if everything is multiplied by 4 to convert to 16-bit; this would make conversion easy. 4. That's an unusual value! Don't know why that would be. 5. If the FITS format will handle negative values then that's fine; most software will set any pixels less than zero to zero. 6. I'm thinking this would make the std dev quite high, as the signal through the different colour filters will differ. This would in turn make the gain come out low, which is what you observed. Try the std dev of just the red or green channel etc. and the corresponding mean; this should work. Having fun yet?! Paul
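Point 6 can be demonstrated with a synthetic RGGB mosaic (the channel levels and noise figures are invented): the std dev of the whole mosaic is inflated by the differences between colour-channel means, while a single colour plane is not.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic RGGB Bayer mosaic: green pixels roughly 2x brighter than
# red/blue, as a daylight flat might look before white balance.
h, w = 512, 512
raw = np.empty((h, w))
raw[0::2, 0::2] = rng.normal(8000, 90, (h // 2, w // 2))    # R
raw[0::2, 1::2] = rng.normal(16000, 130, (h // 2, w // 2))  # G1
raw[1::2, 0::2] = rng.normal(16000, 130, (h // 2, w // 2))  # G2
raw[1::2, 1::2] = rng.normal(7000, 85, (h // 2, w // 2))    # B

green = raw[0::2, 1::2]  # one green plane only
print(raw.std() > 3 * green.std())  # → True: mosaic std inflated by channel offsets
```

Feeding the inflated mosaic variance into signal/variance drags the apparent gain down, which matches what was observed; measuring mean and std dev on one colour plane avoids it.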
  23. I had a look at your links, and I stand corrected on the overscan thing. On most sensors there are some pixels that don't receive light; true overscan is where the output amplifier keeps clocking after the real pixels end and senses nothing, so it could create an image arbitrarily long. With the light-masked pixels there is still the possibility of dark current being present, since these are real pixels made from silicon; with the overscan there will be no dark current, as the sense node is measuring nothing. Not that much dark current would be present in a fraction of a second though! An interesting find. Paul