
Posts posted by vlaiv

  1. 1 minute ago, R26 oldtimer said:

    The bias were at iso800 1/4000, but I must have done something wrong converting them to fits.

    Would you mind if I upload another Google drive link in a couple of hours with the . arw  bias files?

    I know this must be getting annoying for you, so I'll understand if you wanna call it a day.

    No, not at all. Yes, please upload the .arw files and let's see what sort of information and master bias I can come up with.

    It now looks like you won't be able to use dark scaling efficiently. I had a look at two subs that have roughly the same parameters (ISO12800 and 3s exposure - not sure if that is true or not - you will know if you ever took such subs and kept them). These two look rather similar and pretty much the way you would expect, but the problem is that they have very different mean values for some reason. This could mean there is bias instability (a different mean ADU value for the bias level each time you take an exposure - this happens when the sensor does some sort of internal calibration).
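    If you want to check this yourself, a minimal numpy sketch would just compare the mean ADU of each bias sub (assuming you have already loaded them as 2D arrays, e.g. with astropy's fits reader). The synthetic data below only illustrates what a stable vs. a drifting offset looks like:

```python
import numpy as np

def bias_stability(subs):
    """Return per-sub mean ADU values and the spread between them.

    A stable bias gives nearly identical means; a large spread suggests
    the sensor re-calibrates its offset on every exposure.
    """
    means = np.array([np.mean(s) for s in subs])
    return means, float(means.max() - means.min())

# Synthetic example: same read noise, but the second set has a drifting offset
rng = np.random.default_rng(0)
stable = [rng.normal(512, 10, (100, 100)) for _ in range(2)]
drifting = [rng.normal(512 + 40 * i, 10, (100, 100)) for i in range(2)]

_, spread_ok = bias_stability(stable)
_, spread_bad = bias_stability(drifting)
print(spread_ok, spread_bad)  # tiny spread vs. a ~40 ADU jump
```

    A spread of a few ADU between bias subs is normal; tens of ADU, as in the second set, points at the kind of instability described above.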

     

  2. I'm not sure what is going on here, but your "bias" subs are all over the place.

    I don't know if the info in the fits header is correct or not, but you have "bias" subs with ISO12800 and 3s exposure mixed with ISO800 and 900+ seconds of exposure (that is really a dark sub, not a bias).

    Now, I would normally think the fits header info is simply incorrect, as bias needs to be taken at the same ISO settings as the darks and with the shortest possible exposure length - milliseconds - but the stats on these subs kind of confirm that they are all over the place:

    image.png.a8d9da59743bbadfcf8a40c3c3ff034a.png

  3. 15 minutes ago, R26 oldtimer said:

    P. S. :Those bias frames were shot at room temperature, same as DSC263. Should they match the temperature of the darks (and lights in the future), or it doesn't matter?

    In theory, the bias signal should not depend on temperature, as it is not time dependent (shot at the shortest possible exposure time), so there is no build-up of thermal signal. But in practice, ambient temperature can have an impact if it influences the readout electronics - it will probably only affect noise levels slightly, not the bias signal itself.

    • Thanks 1
  4. I don't think it worked well. I did get the bias into the -128 to 3965 range - same as the darks - but the mean value of the bias is -14 (nothing wrong with that on its own, but it is quite far from the dark average values, and I suspect it's not correct for that reason).

    Once I remove the bias from the darks, there is residual bias signal left - which means that this bias is not good (either because of PI's way of handling things or because the bias is not stable).

    Bias stretched:

    image.png.7f677682867fcef365c467f23e832119.png

    Original dark (third one - 265):

    image.png.87b7ceeb5b21853674624ca6928ca90f.png

    Same dark with bias removed:

    image.png.ce492ac14cb8c3e16e5f4fac981982f9.png

    Almost good - the bias is just a bit mismatched - otherwise that looks like a rather clean, pure dark. Not many hot pixels; there are some temperature gradients, but at the bottom edge you will notice the same pattern that exists in the bias. This means the bias is not completely removed (bad for dark scaling, which is your aim since you don't have set-point cooling) and could be due to the scaled bias or bias instability in general.

     

  5. 10 minutes ago, R26 oldtimer said:

    In case I've messed up something in averaging, here is a link to Google Drive with the individual .fits bias frames

    Ok, PI just messes things up - again we have a bias that is fractional and in the 0-1 range for some reason.

    I'll try to make it work, as I suspect that PI is just mapping the (min, max) range to the (0, 1) range - but I'm not 100% sure about that - I'll just try it and see what happens. Btw - you did not paste a link to Google Drive - you attached the fits file twice :D
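    If PI really did map (min, max) to (0, 1), undoing it is a single linear rescale. A minimal sketch, assuming the true ADU range is known from the matching darks (-128 to 3965 in this case):

```python
import numpy as np

def unscale(data, adu_min=-128.0, adu_max=3965.0):
    """Map data normalized to [0, 1] back to the original ADU range.

    Assumes the software applied (x - min) / (max - min); the true
    min/max have to be taken from the matching dark frames.
    """
    return data * (adu_max - adu_min) + adu_min

print(unscale(np.array([0.0, 0.5, 1.0])))  # -> -128, 1918.5, 3965
```

    The catch, of course, is that if the software used a per-frame min/max rather than a fixed one, the original ADU values cannot be recovered exactly - which is why a consistent conversion tool matters.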

     

  6. And here is how I process above data:

    First I work on the Ha that will be the luminance, and the first step is to do a linear stretch just below the saturation point - I use Gimp and levels for this:

    image.png.2030a2e0aadba79b44e16d731ebba4ce.png

    Identify the brightest part of the target and move the top slider until you hit the point where any further movement to the left would saturate it - you want to avoid saturating highlights.

    The next step is really to one's liking - you want to adjust the curves so that the target is nicely exposed. I do it with a round of levels, moving the middle slider a bit, and then using curves:

    image.png.5aa2c65ae08c74a9a2088a6e0d307ef6.png

    Now we need to do some clever denoising to make things smooth yet not too blurry.

    image.png.66e0ec6290f829b971f43ce640b514c6.png

    To do that, create another layer and denoise it - I personally like wavelet denoising under the G'MIC-Qt filters in Gimp. After you have that denoised layer, add a layer mask to it - the inverted value of the denoised image. This will blend more of the denoised version in dark areas than in bright areas (the mask is inverted - where the image is dark the mask has a higher value and applies more of the denoised version). You can control the overall contribution via the opacity slider (above is 77% of such a denoised and masked layer on top of the original).
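    The masked-blend step can be sketched in numpy - this is just an illustration of what the Gimp layer stack computes, with images assumed normalized to [0, 1]:

```python
import numpy as np

def masked_denoise_blend(original, denoised, opacity=0.77):
    """Blend a denoised layer through an inverted-value mask.

    The mask is the inverted value of the denoised layer, so dark areas
    get a high weight (more denoising) while bright areas keep more of
    the original detail. Images are assumed normalized to [0, 1].
    """
    mask = (1.0 - denoised) * opacity
    return mask * denoised + (1.0 - mask) * original

dark_px = masked_denoise_blend(np.array([0.10]), np.array([0.05]))
bright_px = masked_denoise_blend(np.array([0.90]), np.array([0.85]))
print(dark_px, bright_px)  # dark pixel pulled toward denoised, bright kept near original
```

    The point of the inversion is visible in the two sample pixels: the dark one lands closer to the denoised value, the bright one stays close to the original.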

    Now it is time to process our color subs. We start by doing the same, but this time each channel separately, and without pushing things overboard - start with the first linear stretch and then stretch a bit more. Try to keep things "balanced" - about the same brightness for each channel.

    image.png.005eeecc777b41839737ec22cb797fb7.png

    image.png.ce7aee9a0b6669772010f18fdb2d8f97.png

    image.png.a90122ec0166e62bb69b3a5c3b988d8d.png

    Do RGB composition of those stretched subs - SHO style:

    image.png.903b03bae03b0600786e8bd04ee04d89.png

    Do little hue tweaks here to adjust the palette - either a hue shift or the channel mixer, whichever you prefer.

    Hue shift to "equalize" the background (in fact it made it more blue and less red):

    image.png.4bd8fa99304b0a7cd366fe0aac589261.png

    or Channel mixer:

    image.png.e7e31b8787c43e3c87afe81355b7fedd.png

    Now, once you have found the color combination that looks best to you, it's time to do the luminance transfer. There are many options available, but let's go with a basic one - copy the stretched Ha luminance and paste it as another layer on top of this color image.

    Set the layer mode to luminance (Gimp offers a few kinds: LCH has Luminance and Lightness, while HSV has Value) and voila, you have your composition with Ha as luminance:

    image.png.e4016b6e3057343ed88d825f963b1f10.png

    Now you can go and reduce color noise and do other processing touch-ups as you please.
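    For those who prefer doing this outside Gimp, here is a rough numpy stand-in for the luminance layer mode. It simply rescales each pixel's RGB so its Rec.709 luma matches the Ha luminance - Gimp's LCH modes are more sophisticated, so treat this as an approximation:

```python
import numpy as np

def transfer_luminance(rgb, new_lum, eps=1e-6):
    """Replace an image's luminance while keeping its chromaticity.

    Crude stand-in for a luminance layer mode: each pixel's RGB is
    rescaled so its Rec.709 luma matches new_lum. rgb has shape
    (H, W, 3), new_lum has shape (H, W); both in [0, 1].
    """
    weights = np.array([0.2126, 0.7152, 0.0722])  # Rec.709 luma weights
    old_lum = rgb @ weights
    scale = new_lum / np.maximum(old_lum, eps)
    return np.clip(rgb * scale[..., None], 0.0, 1.0)

# Gray color image forced to a darker Ha "luminance"
rgb = np.full((2, 2, 3), 0.5)
new_lum = np.full((2, 2), 0.25)
result = transfer_luminance(rgb, new_lum)
```

    Because only a per-pixel scale is applied, hue and saturation are largely preserved while the brightness now comes from the Ha data.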

    • Like 2
  7. Ok, so here is the result of my effort at data reduction (not sure if I can call it a success :D ):

    Ha.fits

    OIII.fits

    SII.fits

    Above are linear wiped 32bit stacks.

    Here is RGB linear composition (with fairly aligned black points) - in 16 bit format (for those that can't work with 32bit formats):

    sho-16bit.tiff

    It has been linearly stretched (so it is still linear) to preserve the dynamic range of the data in 16 bits.

    Full RGB composition (SHO) in 32bit format (again fairly aligned black points):

    sho.tiff

  8. 5 minutes ago, Ryan_86 said:

    Im not sure I could replicate that haha.

    I'll post my workflow so you can try to replicate it, but there are some steps that I'm not sure you will be able to reproduce directly - because I use custom-written software. However, the steps are "general" and you seem to have the right tools for them - namely background gradient removal / wipe.

    First I'll try to see what sort of data I can get by different stacking tweaks and then I'll post workflow a bit later.

  9. 12 minutes ago, I Albert said:

    Hello everybody,

    And thanks a lot for your help. @Billy Skipper I really appreciate sharing all the material with me! @Vlaiv I am actually working on a visual 'phenomenon' called Eigengrau that takes place in the retina while no photons are present, which is exactly the type of dark noise temperature related you can experience on a camera sensor. That's why working on darks seems quite appropriate.

    Ok, still not sure what you are after, although I do understand what your work is about (I just don't see the relation to darks yet).

    Dark subs are meant to correct for dark current, but also for other phenomena on the sensor - like amp glow and hot/warm pixels. In principle it is Poisson-type noise from a thermal source, but other things happen with sensors as well.

    Now, if you are just after Poisson-type noise (or any other type of noise - Gaussian, uniform distribution, ...), you can generate it quite easily with different parameters (something you might need). For example, here is a small 256x256 image with Poisson noise and a signal of "1" (let's say 1 photon/pixel/s and a 1s exposure):

    image.png.8d5ceb5f286f927c7a6b43bd7d254e1f.png
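    Generating such a frame doesn't strictly require ImageJ - numpy can do the equivalent of the RandomJ Poisson generator in a couple of lines:

```python
import numpy as np

# numpy equivalent of a RandomJ Poisson frame: 256x256 with mean signal 1
# (e.g. 1 photon/pixel/s over a 1 s exposure); seeded for reproducibility
rng = np.random.default_rng(42)
frame = rng.poisson(lam=1.0, size=(256, 256))

print(frame.shape, float(frame.mean()))  # mean lands close to 1
```

    Stacking 16 such frames (with fresh draws each time) gives exactly the kind of animated sequence shown above.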

    I'm not sure if you are doing "stills" or something like "video" in your research, but just for fun, here is "simulation", or rather animated gif made out of 16 such frames:

    1774483933_UntitledwithmodulatoryPoissonnoise.gif.69d460ff91e6a664b46ce908c271723a.gif

    If the above is something you find useful - just let me know and I'll explain here what software to get (ImageJ - it is free, Java-based and therefore OS agnostic), what plugin to get for it (RandomJ), and how to generate such things.

  10. Unfortunately I can't do anything with that master bias, as it's somehow been scaled to 0-1 by the stacking application or when the raws were read (differently than FitsWork). The bias no longer represents ADU values.

    21 minutes ago, R26 oldtimer said:

    I didn't use an intervalometer just a stop watch to count approximately 900sec, so that's why there is a variation.

    If you want good results, you'd better find a way to make your subs last the same time (or as close as possible - within less than a second would be good).

    23 minutes ago, R26 oldtimer said:

    Since I won't be able to dither with this camera, I will have to use calibration frames (darks and flats). Is there a need for bias?

    Dithering is good regardless of whether you use calibration frames or not. Why do you feel that you won't be able to dither? You can dither manually with PHD2 (if that is what you are using) - there is a script that will trigger a dither. If you are already timing your exposures and sitting next to your rig - why not do a manual dither between each exposure?

    If you want to do proper calibration with this camera (and let's hope it is possible) - you will need darks, flats and bias (you could even use flat darks - but that is just overkill in your case I think - you can calibrate master flat with bias).

    What you need to do is make sure you use dark optimization - it is an algorithm that will try to guess the scaling factor for the darks if the temperature is not exact (in fact, it will work if you miss the exposure time by a few seconds as well). But for it to work properly, the bias must be subtracted first - therefore you'll need a master bias.
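    For the curious, the core of dark optimization can be sketched as a least-squares fit. This is a simplified illustration, not DSS's actual implementation:

```python
import numpy as np

def optimize_dark_scale(light, master_dark, master_bias):
    """Least-squares estimate of the dark-scaling factor k.

    Models (light - bias) ~ k * (dark - bias) + signal and picks the k
    that minimizes the residual dark-current pattern. The bias must be
    subtracted first - which is exactly why a master bias is needed.
    """
    d = (master_dark - master_bias).ravel()
    t = (light - master_bias).ravel()
    return float(np.dot(d, t) / np.dot(d, d))

# Synthetic check: a "light" that contains 60% of the dark current
rng = np.random.default_rng(3)
master_bias = np.full((64, 64), 100.0)
master_dark = master_bias + rng.normal(20, 4, (64, 64))
light = master_bias + 0.6 * (master_dark - master_bias)

k = optimize_dark_scale(light, master_dark, master_bias)
print(round(k, 3))  # 0.6
```

    If the bias were left in, the fit would try to scale the bias pattern along with the dark current and land on the wrong factor - which is the failure mode described above.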

    27 minutes ago, R26 oldtimer said:

    As the cooler is not set point, it will not help with calibration frames, it will just help with the noise especially during summer (temperatures above 30deg celsius), is that right?

    Yes, that is correct - you can already see that it helps a bit with noise - in the first measurement, StdDev was smaller with the cooled subs. To see how effective the cooler really is, we need a usable bias (for me at least).

    Would it be too much trouble for you to do another master bias - but this time using FitsWork to produce the fits from raw? Just make sure you turn off certain options, like this:

    image.png.0114e19d4cee15a9a7b3abe2e7a326ed.png

    Then do simple average on those 32bit fits with any program of your choice.

    34 minutes ago, R26 oldtimer said:

    You've used Fitswork to convert the .ARW files to .fits. Should I do the same before stacking them or feed DeepSkyStacker with the .ARW files?

    Whatever you choose, stick with a single approach - when mixing things you might get unusable results. Like above: I used FitsWork, you used something else, and the bias won't match the darks.

    I would personally stick with FitsWork - because it seems nice and there is batch option (I just realized that).

    36 minutes ago, R26 oldtimer said:

    And finally, do you see any real life advantage in using the cooler, or the benefits are minimal (as in wait a couple of hours for the ambient temperature to fall 5-8 degrees later in the night). Take into account that I am still new at AP, and far from chasing the tiniest detail to improve my pictures.

    Depends how much trouble it is for you - if it's not much trouble, then yes, keep using cooling - why not? (There will be no difference in processing, and if you want the best results, you are always chasing the finest detail to improve your images - you just don't know it yet :D - even as a beginner you appreciate it if your image is a bit better.)

  11. 5 minutes ago, Ryan_86 said:

    I am unable to combine FIT files in PS, so I hope you don't mind that I have attached all three new stacks including 3x3 median noise reduction and a 3 pixel cosmetic hot pixel removal. Apologies I forgot to change to 1 pixel hot pixel removal.

    Thanks again Carole

    Ryan

    On quick inspection, these look much better I must say, I'll have another go at processing now.

    • Like 1
  12. Just now, Ryan_86 said:

    Thanks for your efforts Vlaiv.

    Yes that's correct, due to no set point cooling and SX's H18 manual recommending a 3x3 median noise reduction over dark frames,. I didnt take any. Next time I'm out I will take the necessary darks in between each filter change. 

    Can I upload a new stack of my Ha data including the median noise reduction to see if it helps. Also I will apply a cosmetic removal of hot pixels in DSS. What size pixels should I use to remove them? 

    Thanks

    Ryan

    You don't need a clear night to get your darks, nor do you need to do them between filter changes. You need a single set of darks, even if you are using different exposure lengths for different filters, as you will attempt to scale the darks. Just take a set of darks somewhere close to the temperature you were working at (it need not be that precise temperature - so you can leave the camera in a shed or basement during a cloudy night to record them).

    Don't apply the 3x3 median filter - it will just kill any chance of stacking properly. You can try single-pixel cosmetic correction though - that might work. In fact, one of the steps in trying to get that Ha signal was to apply my own cosmetic correction to the whole stack.

    You can even post all the subs (maybe Dropbox or Google Drive) so we can see what sort of stacking would work best on your data.

  13. Even triplets don't bring the whole spectrum to the same focus point - they bring only three wavelengths of light to exact focus, versus the two that doublets bring together.

    If you want the whole spectrum in focus - you need to use pure reflection systems.

    APO performance is a somewhat loose term - it should read as: no chromatic aberration perceivable by the observer. That is the key - there is no color aberration that you can see.

    How much color aberration you will see depends on the defocus for a particular wavelength, but also on eye sensitivity at that particular wavelength.

    Have a look at this graph:

    image.png.578b52f504f18f3b1842f6c29c475ad8.png

    The blue line is a simple lens - a singlet - it brings just one wavelength to focus at any one time. Next is the green dotted graph - that is a doublet lens (this particular one an achromat) - it brings two wavelengths to focus at the same time.

    Orange and red are an APO triplet and a four-lens combination (an optical quadruplet, but not a photographic quadruplet). The amount of color you will see depends on how far the depicted curve lies from the central 0 axis (how much defocus and related blur there is). The thing with ED doublets is that they produce a much "tighter" curve than the green achromat one - it will be of the same shape, but very close to 0 across the 400-700nm range.

    If you get a really good ED doublet you will see virtually no color aberration.

    So why do people purchase APO triplets at all?

    One obvious reason is astrophotography. CCDs are much more sensitive than the human eye across a larger range of wavelengths, and even where the human eye can't see the color aberration - the CCD will be able to record it.

    Other reasons might include sensitivity to color (some people can see residual chromatic blur because they are more sensitive to it), or simply wanting a higher-class instrument (if an instrument costs more to manufacture, there is a higher profit margin and an incentive for the maker to be extra careful in how they figure the lens, QA things, and so on). There is also the issue of F/ratio - an ED doublet can be well corrected only at F/ratios from about 7 or 8 (depending on scope size) upward. If you want an F/6 or F/5 instrument that is free of color - you need an APO triplet to be able to do it.

    But for all other purposes (casual visual without color) - ED doublet is simply sufficient.

     

    • Like 8
    • Thanks 3
  14. I'm not sure I can make anything out of this data to be honest. It is just too noisy and noise is not "nice" kind of noise.

    Why didn't you include dark calibration? That could probably help a lot with that kind of hot pixel noise.

    Here is what I could manage to get out of Ha sub (with all kinds of magic):

    image.png.1669cc56a64e4cc525967c97db45850e.png

    There is some sort of bright ridge across that image that I can't get rid of - it is not there in the other subs. Also, the other subs don't contain nearly this much signal (Ha is usually the strongest of the three). So I believe you won't be able to get a decent image out of this data unless you manage to remove that noise.

    I'm guessing that you did not want to take darks because you don't have set point cooling, right?

    Why don't you just give dark scaling a go? Maybe you will be surprised by the results. Take a set of darks - same exposure - and use them as well, but tick the "dark optimization" option in DSS (and leave everything else the same; in fact, don't even do sigma-clip stacking - use a regular average for everything).

    Btw, here is OIII sub processed the same as Ha:

    image.png.ba85d3c113b59c0532a88cc600d32ac9.png

    Not much there really, and denoising makes things too soft.

  15. Just as a piece of information on the maximum magnification of a telescope - how it is "established" and what it means.

    In reality, the max magnification of a telescope, as calculated by x2 the aperture in millimeters, is based on two things - the theoretical resolving power of an aperture of a certain size (the Airy disk produced) and the theoretical resolving power of the human eye (20/20 vision).

    If you want to see whether the maximum recommended telescope magnification is just right, not enough, or too much for you - just do the following: try to see two high-contrast features - maybe the best case would be two black poles next to each other against a blue sky (aerial poles or similar) - such that their angular separation is 1 minute of arc. Another test would be to spot a feature on the Moon that is 1/30th of the Moon's diameter - by naked eye, of course. Maybe try to spot the Plato crater, for example (as a single dark spot)?

    image.png.842cbb74b86a1c907965afcc5bdd6736.png

    If you can do that - then maximum recommended magnification is just right for you - if not (and odds are that you can't), you will enjoy higher magnification.

    The problem with max theoretical magnification for all but the smallest telescopes is that you hit the atmospheric limit much sooner - around x100 or so in most circumstances. That would be the max for a 50mm scope, and most of us use larger apertures than that, so we can't reach the aperture's max magnification under most circumstances. That is why going close to max magnification looks blurry - not because it is the "real" upper limit of the telescope (it is, but only for a person with perfect vision).
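    As a quick sanity check on the Plato test above, here is the arithmetic (crater and Moon figures are approximate values, not from the post):

```python
# Angular size of crater Plato as seen from Earth, to confirm the
# "1/30th of the Moon's diameter" naked-eye test lands near 1 arcminute.
MOON_DIAMETER_KM = 3474.0       # approximate lunar diameter
MOON_APPARENT_ARCMIN = 31.0     # average apparent diameter of the Moon
PLATO_DIAMETER_KM = 101.0       # approximate diameter of Plato

plato_arcmin = MOON_APPARENT_ARCMIN * PLATO_DIAMETER_KM / MOON_DIAMETER_KM
print(round(plato_arcmin, 2))  # ~0.9 arcmin, right at the 20/20 eye limit
```

    So Plato sits almost exactly at the 1-arcminute resolving limit of 20/20 vision, which is what makes it a good pass/fail test.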

  16. Ok, this is rather nice now :D

    I used FitsWork to convert the raw (ARW) files to fits. This is the first time I've used it and it looks ok - I just disabled debayering and color scaling in the options to prevent any fiddling with the data.

    Here are results of stats on each sub (1, 2, 3 and 4 being DSC0261 - 0269 respectively):

    image.png.75532c5d10c8d7a605e04faa4472f3cd.png

    A few pointers first - the camera is obviously 12 bit, as the min value is -128 and the max value is 3965 (that is a 0-4093 range, and the max number in 12 bits is 4095). FitsWork applied an automatic offset, so the minimum value is -128, which is fine, while the mean value is above 0, which again is fine and should be expected.

    The mean value increases in each subsequent frame (this can be interpreted as higher dark current, or higher signal in general), but the standard deviation does not increase accordingly (and it should, if the signal increase were due to dark current alone). The standard deviation here is not a measure of noise, since we did not remove the bias from these subs (you need to remove the bias signal to get pure random noise and measure its value).

    image.png.d49a442931903666fa0dda9a60649837.png

    All 4 darks look fairly nice - same amp glow + bias pattern and 0269 looks probably the cleanest of the bunch.

    Histograms also look nice - no clipping of any kind and they look the same for all four subs:

    image.png.cdf1a25ad43519c9f934273a7b80fafd.png

    Now, in order to see what sort of noise each one really has, we would need to remove the bias (maybe shoot a small number of bias subs, like 16 or so, and post those as well?).

    For the time being I'll just play a bit more with the data to see what I can come up with:

    Here is last two subs subtracted (0269 and 0265):

    image.png.9df118d79d88a944ed3187156a6b4a32.png

    With a set-point cooled camera at the exact same temperature, the mean should be 0 - here it is 1, which is to be expected as the respective means differ by 1 (17.955 and 18.977 in the first table posted - stats for each sub). But this time the noise went down - it is now 13.08 (compared to individual sub results in the 40-50 range) - as this is true noise without the bias signal. In fact, the difference sub looks rather nice:

    image.png.c748897b1ebedecabe6fc961587c90c0.png

    Smooth noise and not much else there (except that nasty offset of 1 - but that is either due to bias instability or a consequence of cooling, and I'm not sure which - we need a set of bias subs to figure it out).
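    The trick of estimating random noise from a difference of two darks can be sketched as follows. One detail worth noting: the difference combines noise from both subs, so a common convention is to divide its std by sqrt(2) to get a single sub's noise:

```python
import numpy as np

def per_sub_noise(dark1, dark2):
    """Estimate a single sub's random noise from two matched darks.

    Subtracting them cancels the fixed bias / amp glow pattern; the
    remaining noise comes from both subs combined, so one sub's noise
    is the std of the difference divided by sqrt(2).
    """
    return float(np.std(dark1 - dark2) / np.sqrt(2))

# Synthetic darks: a strong fixed pattern plus fresh read noise of std 5
rng = np.random.default_rng(1)
pattern = rng.normal(100, 30, (200, 200))     # fixed bias + glow pattern
d1 = pattern + rng.normal(0, 5, (200, 200))
d2 = pattern + rng.normal(0, 5, (200, 200))

print(round(per_sub_noise(d1, d2), 1))  # close to 5, while each raw sub's std is ~30
```

    This is exactly why the difference sub above measures so much lower than the individual subs: the fixed pattern no longer inflates the statistic.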

    Histogram of this difference between two darks also looks nice (bell shaped):

    image.png.d8d052f3273c0ab6bdee2099f1e5000e.png

    and FFT of difference also looks ok:

    image.png.bf16c2aece1a52801c7f7b08bcdb1e14.png

    it is nice uniform noise - except for that vertical line, which means there are horizontal bands in the resulting image - meaning not all of the bias was removed, because of the difference in temperatures (that is why you need to match the temperature to remove bias features if doing just dark calibration without bias, or you need bias removal + dark scaling to get good calibration).
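    The FFT check is easy to reproduce in numpy - horizontal bands are constant along x, so their power collects on the central vertical column of the shifted 2D FFT. This little sketch measures that fraction on synthetic data, just for illustration:

```python
import numpy as np

def band_energy(image):
    """Fraction of FFT power sitting on the central vertical column.

    Horizontal bands are constant along x, so their power collects at
    fx = 0 - the vertical line in the FFT - and this fraction rises
    well above the ~1/width expected for uniform noise.
    """
    f = np.fft.fftshift(np.fft.fft2(image - image.mean()))
    power = np.abs(f) ** 2
    return float(power[:, image.shape[1] // 2].sum() / power.sum())

rng = np.random.default_rng(2)
noise = rng.normal(0, 1, (128, 128))
banded = noise + 3.0 * np.sin(0.5 * np.arange(128))[:, None]  # add horizontal bands

print(band_energy(noise) < band_energy(banded))  # True
```

    For pure uniform noise the fraction stays near 1/128 here; with banding it jumps to most of the total power - the numeric equivalent of the vertical line being "pronounced" or not.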

    Doing the same between first and second darks (0261 and 0263) reveals result with more noise:

    image.png.04986959e088288b3f5570ae7cdee308.png

    But cleaner looking FFT (less pronounced vertical line - smaller mean difference and bias better removed):

    image.png.f7f5f1a8ff0556e0bc4f94da74ed416e.png

    Which number was shot under which conditions?

    Another important thing that I just noticed - the exposure time is rather different for each of these subs: 914s, 909s, 931s and 970s. How did you set the exposure time?

    This could explain why there is an "inversion" among the subs - a lower mean level but higher noise (shot at a higher temperature but for less time than the other subs).

    • Like 1
    • Thanks 1
  17. 15 minutes ago, R26 oldtimer said:

    I don't know if the fact that the frames were shot with a custom white balance makes any difference in the channels being different?

    Not an expert on DSLRs and the way they work, but I would imagine that white balance information is not applied to raw files, but rather in post-processing (or in the in-camera jpeg conversion step). At least that is what I believe, since I can quite easily change the white balance in software with my Canon DSLR and its raw images (I don't use it for AP, but sometimes for daytime photography).

    Let's see if I can find software to convert ARW to Fits without any "distortion" and then examine those.

    • Thanks 1
  18. 8 minutes ago, Ryan_86 said:

    Hi Vlaiv, 

    Thanks for your reply. 

    During integration I didn't drizzle, will try and give a break down of my stack. 

    Used the best Ha frame as a reference for all 3 stacks. Used a 3x3 median noise reduction as the H18 is really noisy. Lights, bias and flats are stacked via kappa sigma clipping and applied to the three stacks. In the cosmetic section I did apply a 3 pixel removal of both hot and cold pixels (will this effect the weired grainines)? 

    Maybe I have gone wrong during the stacking process. 

    Thanks again Vlaiv👍🏼

     

    Ok, let's keep things simple to see if we can avoid some issues.

    No median filter for noise and no cosmetic correction at this stage. Use a regular average for calibration frames (which camera are you using, btw, since you don't use darks?) and use kappa-sigma clipping for the final lights integration. Set sigma to something like 3 and kappa to something like 2-3 (sigma being the standard deviation and kappa the number of iterations). You can use Ha as the reference frame, but tell DSS not to include it in the stack (or even better - use groups; DSS should align them all to the same reference frame, right?).

    Save the results as 32bit fits and post those.
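    For reference, kappa-sigma stacking itself is straightforward to sketch. This is a generic version that masks per-pixel outliers and re-averages - DSS's exact parameter naming and estimators may differ:

```python
import numpy as np

def kappa_sigma_stack(subs, kappa=3.0, iterations=3):
    """Per-pixel average with iterative kappa-sigma outlier rejection.

    Each iteration masks values farther than kappa standard deviations
    from the current per-pixel mean, then mean and sigma are recomputed
    from the surviving values.
    """
    stack = np.ma.MaskedArray(np.stack([s.astype(float) for s in subs]))
    for _ in range(iterations):
        mean = stack.mean(axis=0)
        sigma = stack.std(axis=0)
        reject = np.ma.filled(np.abs(stack - mean) > kappa * sigma, False)
        stack.mask = np.ma.getmaskarray(stack) | reject
    return stack.mean(axis=0).filled(np.nan)

# 16 identical subs, one of which has a "cosmic ray" hit in one pixel
subs = [np.full((4, 4), 10.0) for _ in range(16)]
subs[0][1, 1] = 1000.0
result = kappa_sigma_stack(subs)
print(result[1, 1])  # outlier rejected, so the average stays at 10.0
```

    This is also why a 3x3 median or aggressive cosmetic correction beforehand is counterproductive: the clipping already rejects the outliers, but only if the per-pixel statistics haven't been smeared first.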

  19. The best way to assess the darks would be to post them in raw format, without any sort of transformation / debayering applied to them.

    Here is quick analysis of first posted png:

    Due to debayering performed (or maybe as a consequence of light leak?), statistics are not looking good:

    image.png.b23459bd6c754e482edf2cba0f3c9dc7.png

    With a regular raw you would expect the "RGB channels" to have the same mean value and roughly the same StdDev (same for the Mode). Here we see that is not the case - green is significantly lower than the other two. This could be due to debayering, or even some sort of light leak (different pixel sensitivity to the leak because of the R, G and B filters on them).
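    Checking the per-channel statistics doesn't actually require debayering - the mosaic can be split by its 2x2 cell positions. A minimal sketch, assuming an RGGB layout (other cameras may use a different pattern):

```python
import numpy as np

def cfa_channel_stats(raw):
    """Mean and StdDev per CFA channel of an undebayered raw (2D array).

    Splits the mosaic by its 2x2 cell positions instead of debayering.
    Assumes an RGGB layout - adjust the slices for other patterns.
    """
    planes = {
        "R":  raw[0::2, 0::2],
        "G1": raw[0::2, 1::2],
        "G2": raw[1::2, 0::2],
        "B":  raw[1::2, 1::2],
    }
    return {name: (float(p.mean()), float(p.std())) for name, p in planes.items()}

# Tiny synthetic mosaic: every 2x2 cell is [[1, 2], [3, 4]]
raw = np.tile(np.array([[1.0, 2.0], [3.0, 4.0]]), (2, 2))
stats = cfa_channel_stats(raw)
print(stats)
```

    On a proper dark, all four planes should report nearly identical means and StdDevs - a green plane sitting well below the others, as in the table above, is the red flag.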

    We can see what stretched channels look like to see if there is any sort of pattern (amp glow or bias pattern or whatever):

    image.png.b489a085e2173ca5a411827063f920da.png

    You can see that green is much darker (this is same linear stretch on each channel).

    Here are channels stretched roughly to same visual intensity (different linear stretch for each but showing roughly same information), together with histograms:

    image.png.6de83ff1e7453e454bf11e5bb066890a.png

    The channels look distinctly different, and in principle a proper dark for a color camera should not differ from one for a mono camera - the channels should look roughly the same. In fact, you don't even need to split into channels for dark calibration - you do it while the data is still raw.

    In any case, I would be happy to run this sort of analysis on your darks again if you provide me with a RAW frame rather than a debayered one (to see if there is indeed a light leak, or whether all the issues above were caused by the debayering process).
