
Posts posted by cuivenion

  1. On 13/02/2018 at 17:02, vlaiv said:

    Can be done with flat darks only, no need for bias files.

    To be sure, do a simple experiment. Take a set of flat darks (basically just darks at a short exposure, the same one you use for your flats). Then power everything off and disconnect the camera. Power on again, use the same settings, and take another set of flat darks. Stack each group using simple average stacking to get two "masters". Subtract the second master from the first and examine the result. It should have an average of 0 and contain only random noise, with no pattern present. If so, you can simply use the following calibration:

    master dark = avg(darks)

    master flat dark = avg(flat darks)

    master flat = avg(flats - master flat dark)

    calibrated light = (light - master dark) / master flat

    Note that lights and darks need to be taken at the same temperature and settings, and flats and flat darks at their own temperature, exposure and settings (whichever suit you to get a good flat field).
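    As an illustration, that recipe can be sketched in NumPy (synthetic arrays stand in for the real stacked frames; normalising the master flat to mean 1 is standard practice, though not stated above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the real frames, in ADU (hypothetical values).
darks      = rng.normal(100, 5, size=(20, 64, 64))     # darks at light settings
flat_darks = rng.normal(100, 5, size=(20, 64, 64))     # darks at flat settings
flats      = rng.normal(30000, 50, size=(20, 64, 64))  # flat frames
light      = rng.normal(5000, 20, size=(64, 64))       # one light frame

# master dark = avg(darks), master flat dark = avg(flat darks)
master_dark = darks.mean(axis=0)
master_flat_dark = flat_darks.mean(axis=0)

# master flat = avg(flats - master flat dark), normalised to mean 1
master_flat = (flats - master_flat_dark).mean(axis=0)
master_flat /= master_flat.mean()

# calibrated light = (light - master dark) / master flat
calibrated = (light - master_dark) / master_flat
```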

    You can, on the other hand, check whether bias is behaving properly as follows:

    Take two sets of bias subs and do the same as above for the flat darks (stack the two sets, subtract, and examine). If you get an average of 0 and no pattern, that is good.

    To be sure the bias works properly, you also need to do the following.

    Take one set of darks at a certain exposure (say 10 s), and one set at double that exposure (so 20 s, same temperature, same settings).

    Prepare the first master as avg(darks from set 1) - bias, and the second as avg(darks from set 2) - bias. Then use pixel math to create the image master2 - master1*2 and examine it. It should also have an average of 0 with no visible pattern. If you get that result, the bias functions properly and you can use it (although it is not necessary for the standard calibration mentioned above).
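    That double-exposure check can likewise be simulated, just to show the pixel math (synthetic frames with assumed bias and dark-current levels):

```python
import numpy as np

rng = np.random.default_rng(1)

bias_level = 500.0  # assumed bias signal, ADU
dark_rate = 2.0     # assumed dark signal, ADU per second

# Synthetic masters: bias + exposure-scaled dark signal + a little noise
master1 = bias_level + dark_rate * 10 + rng.normal(0, 1, (64, 64))  # 10 s darks
master2 = bias_level + dark_rate * 20 + rng.normal(0, 1, (64, 64))  # 20 s darks
bias = bias_level + rng.normal(0, 0.1, (64, 64))

# If the bias behaves, (master2 - bias) - 2*(master1 - bias) averages ~0
residual = (master2 - bias) - 2 * (master1 - bias)
print(abs(residual.mean()) < 0.5)
```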

    Hi, I tested the bias. This was the workflow:

    1. Took a set of ten bias frames.
    2. Shut down the camera.
    3. Restarted the camera and took another set of ten bias frames.
    4. Integrated both sets using these settings:

    [screenshot: integration settings]

    5. Subtracted integration2 from integration1, and integration1 from integration2, with these settings:

    [screenshot: subtraction settings]

    6. And these are the results side by side. It looks like the bias is not usable for calibration then, unless something in my settings was wrong:

    [screenshot: the two subtraction results side by side]

    Here are the original two integration files before calibration:

    [screenshot: the two original integrations before calibration]

    All the files were autostretched.

    One question I have now: how do I use the Basic CCD Parameters script in PixInsight to find my gain in electrons per ADU if I can't use the bias? This also leaves me unable to use the SubframeSelector process.

    I've also posted about this on the PixInsight forum.

     

  2. 34 minutes ago, vlaiv said:

    Quantization noise is rather simple to understand, but difficult to model, as it does not follow any sort of "normal" distribution.

    The sensor records an integer number of electrons per pixel (the quantum nature of light and particles), and the file format used to record the image also stores integers. If you have unity gain, meaning an e/ADU of 1, the number of electrons is stored correctly. If, on the other hand, you choose a non-integer conversion factor, you introduce noise that has nothing to do with the regular noise sources. Here is an example:

    You record 2e and your conversion factor is set to 1.3. The value written to the file should be 2 x 1.3 = 2.6, but the file supports only integer values (it is not really up to the file, but to the ADC on the sensor, which produces 12-bit integer values with this camera), so it records 3 ADU (the closest integer to 2.6; in reality it might round down instead of rounding to nearest).

    But given 3 ADU at a gain of 1.3, what is the actual number of electrons you captured? Is it 2.307692... (3/1.3), or was it 2e? So just by using a non-integer gain you have introduced about 0.3e of noise on a 2e signal.

    On the matter of the gain formula, it is rather simple. ZWO uses a dB scale to denote the ADU conversion factor, so Gain 135 is 13.5 dB of gain over the lowest gain setting. So how much is the lowest gain setting, then?

    sqrt(10^1.35) = 10^(13.5/20) ≈ 4.73, and here it is on the graph:

    [graph: ASI224 gain vs. read noise, dynamic range and full well]

    OK, so if unity is 135 (13.5 dB, or 1.35 bels), we want gains that are multiples of 2. Why multiples of two? In binary representation there is no loss when using powers of two to multiply or divide (similar to the decimal system, where multiplying or dividing by 10 only moves the decimal point), so such gains are guaranteed quantization-noise free (for higher gains; at lower gains you still have quantization noise, because nothing is written after the decimal point).

    The dB system is logarithmic, like the magnitude system, so multiplying powers/amplitudes corresponds to adding dBs. A 6 dB gain is roughly x2.

    (Check out https://en.wikipedia.org/wiki/Decibel; there is a table of values there, under the amplitude column.) Since ZWO gain is in units of 0.1 dB, 6 dB is +60 gain.

    By the way, you should not worry if the gain is not strictly a multiple of 2 but only close to it: the closer you are, the more the remaining quantization noise (from rounding) is confined to higher pixel values, and at higher values the SNR is already high, because the signal is.
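    The rounding effect described above is easy to reproduce numerically (plain arithmetic; the 2e / factor-1.3 numbers are taken from the example in the quote):

```python
def quantization_error(electrons, adu_per_e):
    """Error introduced by storing an integer ADU value and converting back."""
    adu = round(electrons * adu_per_e)  # the ADC writes an integer ADU value
    recovered = adu / adu_per_e         # electrons inferred back from the file
    return recovered - electrons

print(round(quantization_error(2, 1.3), 3))  # ~0.308e error on a 2e signal
print(quantization_error(2, 1.0))            # 0.0: unity gain is lossless
```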

     

     

    Hi, I'm not going to pretend I completely understood that, but basically you're saying that if I choose a gain 6 dB, 12 dB, 18 dB, 24 dB etc. (195, 255, 315, 375) above unity gain then I'll avoid quantization noise?

  3. 5 minutes ago, vlaiv said:

    Software should be able to pick up stars regardless of the gain; it is SNR that matters, not whether a star is visible on the unstretched image.

    You also don't need to go that high in gain; unity should be OK, as there's not much difference in read noise between unity and high gain. But if you want to exploit that last bit of lower read noise, use a gain that gives multiples of 2 in ADU to avoid quantization noise. That would be 135 + n*60, so 135, 195, 255, ...
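    Spelling out that 135 + n*60 sequence (ZWO gain units of 0.1 dB, with 6 dB taken as a factor of 2, as in the quote):

```python
# Each +60 gain step is +6 dB, i.e. roughly one more factor of 2 in amplification
gains = [135 + n * 60 for n in range(4)]
print(gains)  # [135, 195, 255, 315]
```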

    Hi, I thought that would be the case, but as I wasn't sure I just played it safe; thanks for the info. The last point has gone over my head a bit; I've not come across quantisation noise before, so it looks like I've got some reading to do. You've obviously got a deeper understanding of this than me. You couldn't finish off that equation showing the correct gains, could you please?

  4. 5 minutes ago, artoleus said:

    Very nice. 
    The local deringing in the deconvolution settings is causing some black ringing around the stars. You may need to refine the star mask by adding some convolution to it, so it blurs the edges of the mask.

    Thanks for the feedback. I knew the star mask had something to do with it, but I wasn't sure what to do about it, as I'm still quite new to PixInsight. I'll give it another go soon. Cheers for the help.

  5. 12 minutes ago, Demonperformer said:

    Just realised there is a cooled version of the 224 - I previously assumed yours was uncooled - is this correct? Thanks.

    Just looking on FLO, and apparently the cooled version has been discontinued; one left at £390. Wish I had the spare cash.

  6. 5 minutes ago, michael.h.f.wilkinson said:

    I have a Peltier cooler lying around somewhere. Must get round to making my own cooler. Really curious what e.g. my ASI178MM would do with H-alpha filter on my Canon 200mm F/2.8 L lens. Should be very interesting.

    Looking forward to checking out the images.

  7. Yeah, there is a cooled version, though not mine; it's out of my price range at the moment. There's the 385 as well, with similar noise and performance but a slightly bigger chip. I'll probably go for that at some point.

  8. 5 minutes ago, michael.h.f.wilkinson said:

    Very interesting. I might well use my ASI224MC on my 80mm F/6 with reducer. That should work. Using one of the other ASI cameras as autoguider on the ST80 should even allow longer subs. Very interesting.

    It has 19k full well at minimum gain and the read noise is still around 3-3.5e, which is a lot lower than most Canon DSLRs at minimum ISO. Should work great. I'll give longer exposures a go myself on some fainter objects.

  9. Thanks. The only problem with the short exposures is that I'm losing star colour, because I need to up the gain so there are enough stars for the stacking software to detect. Maybe there's a way to have PixInsight detect stars that aren't visible before a stretch; I'll have to look into it. I thought I'd play it safe with this one.

  10. Hi, this was imaged with a ZWO ASI224 and a 130PDS on a Skywatcher HEQ5, using SharpCap software. This image of M51 was taken on Tuesday night/Wednesday morning.

    It is 683 x 10-second exposures with 200 darks, stacked and processed in PixInsight. I was shocked at the amount of detail from only 10-second exposures; I'm quite happy with this one. Any comments and constructive criticism welcome.

    [image: the final M51 integration]

  11. On 18/02/2016 at 22:27, vlaiv said:

    According to ZWO, a gain of 350 should give you the least read noise for that model. Any gain setting above 200 is good; aim closer to 200 for better dynamic range. Set the exposure time as low as possible while getting at least 50% of the histogram (you can go lower, but at the expense of SNR); higher values are better (like the already-suggested 75%, provided you don't clip the histogram or need too long an exposure for the seeing). I usually leave gamma at 50 (it is only a digital control, so no real benefit, certainly for planetary imaging) and white balance at 50 for both blue and red (again no benefit, since it is a digital control and can be adjusted after stacking).

    One more thing: there is a brightness setting (at least on the 185MC, but I guess it is the same for the 224). When you have decided on an exposure length, cover the telescope (as if taking dark frames) and look at the histogram; if it is clipping to the left, increase the brightness setting. The minimum values of your dark frames should be just above 0. Do take at least 256 dark frames and process your recording in PIPP to do the dark frame subtraction; this removes both dark and bias signal.

    For DSO imaging there is no real benefit in going over 135 gain, so keep under that. For shorter exposures stay above 60 gain; for longer exposures you can go below 60. There seems to be a switch in read mode around a gain value of 60 that affects the read noise.

    Last tip: don't use bias frames. Take as many dark frames as you can at exactly the same settings, without touching any of the controls; just cover the telescope with a cap.

    It seems that these Sony CMOS sensors apply some internal calibration whenever you change either gain or exposure length.

    Having just bought one of these, I've been reading up a little and I've just come across this. Does this mean you can't calibrate flat frames properly due to this 'internal calibration', since they need bias or dark flats? It seems that calibration of any sort is going to be tricky if this is happening.

  12. 5 hours ago, ollypenrice said:

    I think that by 'stacking the stacks' Carole means combining a stack from one night with a stack or stacks from other nights, and you can certainly do this. You'd need to find the best way to weight them in the process. Under no circumstances would I put different sub lengths or settings into one stack. So, assuming consistent skies and the same settings, you'd just weight the stacks based on their total integration time.

    The best way to do it, though, is to calibrate each individual sub in the entire shoot without combining them and so generate a new collection of individual subs duly calibrated. You then take these calibrated subs aside and stack them as a separate operation, obviously without any calibration files since that has been done to each sub already. This way you get the maximum value from the Sigma Clip algorithm and the best benefit from dither (even if you weren't dithering between subs.)  I suspect that this method would be of even greater benefit to DSLR imagers since it would attack the background 'colour mottle' problem most effectively.

    Now, different sub lengths. Do it only if you know why you are doing it. Of the 100 or so images on my gallery site I've done it precisely twice, once on M42 (which everybody has to image in multiple sub lengths) and once on M31 (where I'm not even sure it did any good at all.) I never worry about white clipping stars. You can pull the colour into the cores in post processing and if you expose so as to avoid clipping stellar cores you won't go deep enough. The key thing is to look at your linear stack. If a galaxy core isn't burned out there then it doesn't have to be burned out in the final image because you already have the data in the stack.

    Olly
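    Weighting per-night stacks by total integration time, as Olly describes, might look like this (hypothetical two-night example; real software would also align the stacks first):

```python
import numpy as np

# Two per-night stacks (synthetic flat images) and their total exposure times
stack_a, time_a = np.full((64, 64), 100.0), 3600.0  # night 1: 1 hour
stack_b, time_b = np.full((64, 64), 104.0), 7200.0  # night 2: 2 hours

# Weighted average: each stack contributes in proportion to its exposure
combined = (stack_a * time_a + stack_b * time_b) / (time_a + time_b)
print(round(float(combined[0, 0]), 2))  # 102.67
```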

    Hi Olly. So if I stack a set of subs from one night to get an image, then stack subs from a different night to get another image, and then stack the two resulting images, is that stacking the stacks?

  13. 38 minutes ago, carastro said:

    When you say 

    Do you mean the orientation is different because you removed the camera between imaging sessions? If so, you'll need new flats; if the orientation is different only because of a slightly different GOTO, you can use the same flats.

    I always find it easiest to stack the different nights individually and then stack the stacks. There is a group stacking function in DSS, but I am not sure whether it works well (or at all) if the exposures etc. are different. Even when I used the group stacking method I still found I got better results from stacking the stacks.

    HTH

    Carole   

     

    Hi Carole, what do you mean by 'stacking the stacks'?
