Everything posted by vlaiv

  1. I'm not sure what is going on here, but your "bias" subs are all over the place. I don't know if the info in the FITS header is correct or not, but you have "bias" subs at ISO 12800 and 3 s exposure mixed with ISO 800 and 900+ second exposures (that is really a dark sub, not a bias). Normally I would assume the FITS header info is simply wrong, since bias needs to be taken at the same ISO settings as the darks and at the shortest possible exposure length - milliseconds - but the stats on these subs confirm that they really are all over the place:
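
To check this quickly yourself, here is a minimal sketch using astropy to dump exposure and ISO from each sub's FITS header; the keyword names EXPTIME and ISOSPEED are assumptions (capture programs differ in what they write), as is the "bias" folder name:

```python
# Dump exposure time and ISO from each FITS header to spot mismatched subs.
# EXPTIME / ISOSPEED are assumed keyword names -- check one header first,
# since capture software differs in what it writes.
from pathlib import Path
from astropy.io import fits

for path in sorted(Path("bias").glob("*.fits")):   # "bias" folder is a placeholder
    header = fits.getheader(path)
    exptime = header.get("EXPTIME", "n/a")
    iso = header.get("ISOSPEED", header.get("ISO", "n/a"))
    print(f"{path.name}: exposure={exptime}s, ISO={iso}")
```
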
  2. In theory the bias signal should not depend on temperature: it is not time dependent (shot at the shortest possible exposure time), so there is no build-up of thermal signal. In practice, though, ambient temperature can have an impact if it influences the readout electronics - it will probably only affect noise levels slightly, not the bias signal itself.
  3. And something we could call the "final" version - courtesy of StarNet++ (stars removed from the color composition and then re-added as white stars from the Ha channel - no more magenta issues):
  4. I don't think it worked well. I did get the bias into the -128 to 3965 range - same as the darks - but the mean value of the bias is -14 (nothing wrong with that on its own, but it's quite far from the dark average values, and I suspect it's not correct for that reason). Once I remove the bias from the darks, there is residual bias signal left - which means this bias is not good (either because of the PI way of handling things or because the bias is not stable).

     Bias stretched:

     Original dark (third one - 265):

     Same dark with bias removed:

     Almost good - the bias is just a bit mismatched - otherwise that looks like a rather clean, pure dark: not many hot pixels, some temperature gradients, but at the bottom edge you will notice the same pattern that exists in the bias. This means the bias is not completely removed (bad for dark scaling - which is your aim since you don't have set-point cooling), and that can be because of the scaled bias or bias instability in general.
  5. Ok, PI just messes things up - again we have a bias that is fractional and in the 0-1 range for some reason. I'll try to make it work, as I suspect PI is just mapping the (min, max) range to (0, 1) - but I'm not 100% sure about that, so I'll just try it and see what happens. Btw - you did not paste the link to Google Drive, but the fits file twice.
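
If the (min, max) to (0, 1) guess is right, the scaling can be undone outside of PI. A minimal numpy sketch, assuming the original ADU range was -128 to 3965 as seen in the darks - that assumption may well be wrong if PI used the bias's own min/max instead:

```python
# Undo an assumed linear (min, max) -> (0, 1) normalization.
# -128 / 3965 is the range seen in the darks; adjust if PI used other values.
import numpy as np
from astropy.io import fits

data = fits.getdata("master_bias.fits").astype(np.float64)
adu_min, adu_max = -128.0, 3965.0          # assumed original ADU range
restored = data * (adu_max - adu_min) + adu_min
fits.writeto("master_bias_adu.fits", restored.astype(np.float32), overwrite=True)
```
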
  6. Just for your information - things like a simple average of fits files can be performed with ImageJ, a free program for scientific image manipulation (there is the Fiji version of it, loaded with plugins, and there is also AstroImageJ, loaded with plugins for astronomy use such as plate solving and photometry).
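
For those who prefer a script to ImageJ, the same simple average is a few lines of Python with numpy and astropy - a sketch, assuming all subs live in one folder and share the same dimensions:

```python
# Simple (unweighted) average of a folder of FITS subs.
import numpy as np
from pathlib import Path
from astropy.io import fits

files = sorted(Path("subs").glob("*.fits"))   # "subs" folder is a placeholder
stack = np.mean([fits.getdata(f).astype(np.float64) for f in files], axis=0)
fits.writeto("average.fits", stack.astype(np.float32), overwrite=True)
```
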
  7. And here is how I process the above data. First I work on the Ha that will serve as luminance, and the first step is a linear stretch just below the saturation point - I use Gimp and Levels for this: identify the brightest part of the target and move the top slider until you hit the place where any more movement to the left would saturate it - you want to avoid saturating highlights. The next step is really to one's liking - adjust curves so that the target is nicely exposed. I do it with a round of Levels, moving the middle slider a bit, and then using Curves.

     Now we need some clever denoising to make things smooth yet not too blurry. To do that, create another layer and denoise it - I personally like wavelet denoising under the G'MIC-Qt filters in Gimp. After you have that denoised layer, add a layer mask to it: the inverted value of the denoised image. This blends more of the denoised version into dark areas than into bright areas (the mask is inverted, so where the image is dark the mask has a higher value and applies more of the denoised version) - you can control the overall contribution via the mix slider (above is 77% of such a denoised and masked layer on top of the original).

     Now it is time to process the color subs. We start by doing the same, but on each channel separately, and this time don't push things overboard - start with the first linear stretch and then stretch a bit more. Try to keep things "balanced" - about the same brightness in each channel. Do an RGB composition of those stretched subs, SHO style. Do little hue tweaks here to adjust the palette - either hue shift or channel mixer, whichever you prefer: a hue shift to "equalize" the background (in fact it made it more blue and less red), or the channel mixer.

     Once you have found the color combination that looks best to you, it's time to do the luminance transfer. There are many options available, but let's go with a basic one: copy the stretched Ha luminance and paste it as another layer on top of the color image. Set the layer mode to luminance (Gimp offers three kinds - LCh has luminosity and lightness, while HSV has value) and voila, you have your composition with Ha as luminance. Now you can reduce color noise and do other processing touch-ups as you please.
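
Outside of Gimp, the HSV "value" flavor of that luminance transfer can be sketched in Python. This assumes both images are already stretched, aligned and scaled 0-1, that the color image is stored planes-first (3, H, W), and that the file names are placeholders:

```python
# Rough equivalent of Gimp's HSV "value" luminance mode: replace the V
# channel of the stretched SHO color image with the stretched Ha luminance.
import numpy as np
from astropy.io import fits
from skimage.color import rgb2hsv, hsv2rgb

rgb = np.clip(fits.getdata("sho_stretched.fits").astype(np.float64), 0, 1)  # (3, H, W)
lum = np.clip(fits.getdata("ha_stretched.fits").astype(np.float64), 0, 1)   # (H, W)

hsv = rgb2hsv(np.moveaxis(rgb, 0, -1))   # skimage wants (H, W, 3)
hsv[..., 2] = lum                        # swap in Ha as the value channel
out = np.moveaxis(hsv2rgb(hsv), -1, 0)   # back to (3, H, W)
fits.writeto("sho_ha_lum.fits", out.astype(np.float32), overwrite=True)
```
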
  8. Ok, so here is the result of my effort at data reduction (not sure if I can call it a success): Ha.fits OIII.fits SII.fits. The above are linear, wiped 32-bit stacks. Here is the RGB linear composition (with fairly aligned black points) in 16-bit format, for those who can't work with 32-bit formats: sho-16bit.tiff. It has been linearly stretched (so it is still linear) to preserve the dynamic range of the data in 16 bits. Full RGB composition (SHO) in 32-bit format (again with fairly aligned black points): sho.tiff
  9. I'll post my workflow so you can try to replicate it, but there are some steps I'm not sure you will be able to replicate directly, because I use custom-written software. However, the steps are "general" and you seem to have the right tools for them - namely background gradient removal / wipe. First I'll see what sort of data I can get with different stacking tweaks, and then I'll post the workflow a bit later.
  10. Ok, I'm still not sure what you are after, although I do understand what your work is about (I just don't see the relation to darks yet). Dark subs are meant to correct for dark current but also for other phenomena on the sensor, like amp glow and hot/warm pixels. In principle dark current is Poisson-type noise from a thermal source, but other things happen with sensors as well.

     Now, if you are after just Poisson-type noise (or any other type of noise - Gaussian, uniform distribution, ...), you can generate it quite easily with different parameters (something you might need). For example, here is a small 256x256 image with Poisson noise and a signal of "1" (let's say 1 photon/pixel/s and a 1 s exposure). I'm not sure if you are doing "stills" or something like "video" in your research, but just for fun, here is a "simulation" - an animated gif made out of 16 such frames.

     If the above is something you would find useful, just let me know and I'll explain here what software to get (ImageJ - it is free, Java based and therefore OS agnostic), what plugin to get for it (RandomJ), and how to generate such things.
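
If you'd rather script it than use ImageJ/RandomJ, numpy's Poisson generator does the same job - a sketch with the same parameters (signal of 1 photon/pixel/s, 1 s exposure, 256x256 frames):

```python
# Poisson-noise test frames: signal of "1" means the distribution has
# lam = 1 (1 photon/pixel/s over a 1 s exposure).
import numpy as np

rng = np.random.default_rng()
frame = rng.poisson(lam=1.0, size=(256, 256))       # single still frame
frames = rng.poisson(lam=1.0, size=(16, 256, 256))  # 16 frames for a "video"

# For Poisson(1), mean and variance are both 1, so std should be ~1:
print(frame.mean(), frame.std())
```
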
  11. Unfortunately I can't do anything with that master bias, as it has somehow been scaled to 0-1 by the stacking application or when the raws were read (differently than FitsWork). The bias no longer represents ADU values.

     If you want to get good results, you'd better find a way to make your subs last the same time (or as close as possible - within a second would be good). Dithering is good regardless of whether you use calibration frames or not. Why do you feel that you won't be able to dither? You can manually dither with PHD2 (if that is what you are using) - there is a script that will trigger a dither. If you are already timing your exposures and sitting next to your rig, why not do a manual dither between each exposure?

     If you want to do proper calibration with this camera (and let's hope it is possible), you will need darks, flats and bias (you could even use flat darks, but that is just overkill in your case I think - you can calibrate the master flat with bias). What you need to do is make sure you use dark optimization - an algorithm that tries to guess the scaling factor for the darks if the temperature is not exact (in fact, it will work if you miss the exposure time by a few seconds as well). But for it to work properly, the bias must be subtracted first - therefore you'll need a master bias. (See the sketch after this post for the principle.)

     Yes, that is correct - you can already see that cooling helps with noise a bit: in the first measurement, StdDev was smaller with the cooled subs. To see how effective the cooler really is, we need a usable bias (to me at least). Would it be too much trouble for you to do another master bias, but this time using FitsWork to produce fits from raw? Just make sure you turn off certain options like this: Then do a simple average of those 32-bit fits with any program of your choice. Whatever you choose, stick with a single approach - when mixing things you might get unusable results, like above: I used FitsWork, you used something else, and the bias won't match the darks. I would personally stick with FitsWork, because it seems nice and there is a batch option (I just realized that).

     It depends on how much trouble it is for you - if it's not much trouble, then yes, keep using cooling, why not? There will be no difference in processing if you want the best results, and you are always chasing the finest detail to improve your images - you just don't know it yet; even as a beginner you'll appreciate your image being a bit better.
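
For the curious, here is the principle behind dark optimization as a least-squares sketch. This is not DSS's actual algorithm (which is not published in this form), just the idea of fitting a scale factor k to bias-subtracted data; the file names are placeholders:

```python
# Dark optimization idea: after bias subtraction, find the factor k that
# best removes the dark from the light in a least-squares sense.
import numpy as np
from astropy.io import fits

bias = fits.getdata("master_bias.fits").astype(np.float64)
dark = fits.getdata("master_dark.fits").astype(np.float64) - bias
light = fits.getdata("light.fits").astype(np.float64) - bias

# In practice you would fit only on background / hot-pixel areas, since sky
# signal in the light biases this simple whole-frame fit.
k = np.sum(light * dark) / np.sum(dark * dark)
calibrated = light - k * dark
print(f"fitted dark scale factor: {k:.3f}")
```
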
  12. Ok, maybe this is a tad better color composition (I did it now the way I proposed in the first reply - via RGB transfer):
  13. I was really struggling with this, and mind you, I'm no expert at NB composition, but here is something resembling a result. Here is Ha only: I think it is quite a bit better than the previous one.
  14. On quick inspection these look much better, I must say. I'll have another go at processing now.
  15. You don't need a clear night to get your darks, nor do you need to do it between filter changes. You need a single set of darks, even if you are using different exposure lengths for different filters, as you will attempt to scale the darks. Just take a set of darks somewhere close to the temperature you were working at (it need not be precisely that temperature - so you can leave the camera in a shed or basement or somewhere similar during a cloudy night to record). Don't apply a 3x3 median filter - it will just kill any chance of stacking properly. You can try single-pixel cosmetic correction though - that might work (see the sketch after this post). In fact, one of the steps in trying to get that Ha signal was to apply my own cosmetic correction routine to the whole stack. You can even post all the subs (maybe Dropbox or Google Drive) so we can see what sort of stacking would work best on your data.
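
Here is a minimal sketch of the single-pixel cosmetic correction idea: flag pixels that deviate strongly from their 3x3 local median and replace just those. The 5-sigma threshold is my assumption, not a value from the post:

```python
# Single-pixel cosmetic correction: replace outliers with the local median.
import numpy as np
from astropy.io import fits
from scipy.ndimage import median_filter

img = fits.getdata("sub.fits").astype(np.float64)   # placeholder file name
med = median_filter(img, size=3)
resid = img - med
bad = np.abs(resid) > 5 * np.std(resid)   # assumed 5-sigma threshold
img[bad] = med[bad]                        # replace only the flagged pixels
fits.writeto("sub_cosmetic.fits", img.astype(np.float32), overwrite=True)
```
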
  16. Even triplets don't bring the whole spectrum to the same focus point - they bring only three wavelengths of light to exact focus, versus the two that doublets bring together. If you want the whole spectrum in focus, you need a pure reflection system. APO performance is a somewhat loose term - it should read as "no chromatic aberration perceivable by the observer". That is the key: there is no color aberration that you can see. How much color aberration you will see depends on the defocus at a particular wavelength, but also on eye sensitivity at that wavelength.

     Have a look at this graph: the blue line is a simple lens, or singlet - it brings just one wavelength at a time to focus. Next is the green dotted curve - that is a doublet lens (this particular one an achromat) - it brings two wavelengths to focus at the same time. Orange and red are an APO and a four-lens combination (an optical quadruplet lens, but not a photo quadruplet). The amount of color you will see depends on how far the depicted curve lies from the central 0 axis (how much defocus, and hence blur, there is). The thing with ED doublets is that they produce a much "tighter" curve than the green achromat one. It is the same shape, but very close to 0 across the 400-700 nm range. If you get a really good ED doublet you will see virtually no color aberration.

     So why do people purchase APO triplets at all? One obvious reason is astrophotography. CCDs are much more sensitive than the human eye across a larger range of wavelengths, and even where the human eye cannot see the color aberration, the CCD will be able to record it. Other reasons might include sensitivity to color (some people can see residual chromatic blur because they are more sensitive to it), or just wanting a higher-class instrument (if an instrument costs more to manufacture, there is a higher profit margin and an incentive for the maker to be extra careful with how they figure the lens, QA things, and so on). There is also the issue of F/ratio - an ED doublet can only be corrected at F/ratios from about 7 or 8 (depending on scope size) upward. If you want an F/6 or F/5 instrument that is free of color, you need an APO triplet. But for all other purposes (casual visual without color), an ED doublet is simply sufficient.
  17. I'm not sure I can make anything of this data, to be honest. It is just too noisy, and the noise is not the "nice" kind. Why didn't you include dark calibration? That could probably help a lot with this kind of hot pixel noise. Here is what I could manage to get out of the Ha sub (with all kinds of magic): There is some sort of bright ridge across the image that I can't get rid of - it is not there in the other subs. Also, the other subs don't contain this much signal at all (Ha is usually the strongest of the three). So I believe you won't be able to get a decent image out of this data unless you remove that noise.

     I'm guessing that you did not want to take darks because you don't have set-point cooling, right? Why don't you just give dark scaling a go? Maybe you will be surprised by the results. Do take a set of darks - same exposure - and use them as well, but tick the "dark optimization" option in DSS (and leave everything else the same; in fact, don't even do sigma-clip stacking - use a regular average for everything). Btw, here is the OIII sub processed the same as Ha: not much there really, and denoising makes things too soft.
  18. Just a piece of information on the maximum magnification of a telescope - how it is "established" and what it means. The max magnification of a telescope as calculated by x2 aperture in millimeters is based on two things: the theoretical resolving power of an aperture of a certain size (the Airy disk produced) and the theoretical resolving power of the human eye (20/20 vision).

     If you want to see whether the maximum recommended magnification is just right, not enough, or too much for you, try the following: try to resolve two high-contrast features - perhaps the best case would be two black poles next to each other against a blue sky (aerial poles or similar) - such that their angular separation is 1 minute of arc. Another test would be to spot a feature on the Moon that is 1/30th of the Moon's diameter, by naked eye of course - maybe try to spot the Plato crater (as a single dark spot). If you can do that, then the maximum recommended magnification is just right for you; if not (and odds are that you can't), you will enjoy higher magnification.

     The problem with max theoretical magnification for all but the smallest telescopes is that it hits the atmospheric limit much sooner - around x100 or so in most circumstances, and that would be the max for a 50mm scope. Most of us use larger apertures than that, so we can't reach the max magnification of our aperture under most circumstances - and that is why going close to max magnification looks blurry. It is not the "real" upper limit of the telescope (it is, but only for a person with perfect vision).
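
Putting rough numbers on the two tests and the rule of thumb - the Moon figure is a rounded mean apparent diameter, and the aperture is just an example:

```python
# The naked-eye Moon test: 1/30 of the Moon's ~31 arcmin mean apparent
# diameter is close to the ~1 arcmin resolution of a 20/20 eye.
moon_diameter_arcmin = 31.1
print(f"1/30 of Moon diameter = {moon_diameter_arcmin / 30:.2f} arcmin")

# The x2 rule vs aperture resolving power (Dawes limit, 116/D arcseconds):
aperture_mm = 200
print(f"max magnification (x2 rule): x{2 * aperture_mm}")
print(f"resolving power: {116 / aperture_mm:.2f} arcsec")
```
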
  19. Ok, this is rather nice now. I used FitsWork to convert the raw (ARW) files to fits. This is the first time I've used it and it looks ok - I just disabled debayering and color scaling in the options to prevent any fiddling with the data. Here are the results of stats on each sub (1, 2, 3 and 4 being DSC0261 - 0269 respectively).

     A few pointers first. The camera is obviously 12 bit, as the min value is -128 and the max value is 3965 (a range of 4093 counts; the max number in 12 bits is 4095). FitsWork applied an automatic offset, so the minimum value is -128, which is fine, while the mean value is above 0, which is again fine and to be expected. The mean value increases in each subsequent frame (this can be interpreted as higher dark current, or higher signal in general), but the standard deviation does not increase accordingly (and it should, if the signal increase were due to dark current alone). Note that the standard deviation here is not a measure of noise, since we did not remove the bias from these subs (you need to remove the bias signal to get pure random noise and measure its value). All four darks look fairly nice - same amp glow + bias pattern - and 0269 looks probably the cleanest of the bunch. The histograms also look nice - no clipping of any kind, and they look the same for all four subs.

     Now, in order to see what sort of noise each one really has, we would need to remove the bias (maybe shoot a small number of bias subs, like 16 or so, and post those as well?). For the time being I'll just play a bit more with the data to see what I can come up with. Here are the last two subs subtracted (0269 and 0265): with a set-point cooled camera at an exact temperature, the mean should be 0 - here it is 1, which is to be expected as the respective means differ by 1 (17.955 and 18.977 in the first table posted - stats for each sub). But this time the noise went down, and it is now 13.08 (compared to individual sub results in the 40-50 range), as this is true noise without the bias signal. In fact, the difference sub looks rather nice: smooth noise and not much else (except that nasty offset of 1 - but that is either bias instability or a consequence of cooling, and I'm not sure which - we need a set of bias subs to figure it out).

     The histogram of this difference between two darks also looks nice (bell shaped), and the FFT of the difference also looks ok: nice uniform noise, except for that vertical line, which means there are horizontal bands in the resulting image - meaning not all the bias was removed, because of the difference in temperatures. (That is why you need to match the temperature to remove bias features if doing dark calibration alone, or you need bias removal + dark scaling to get good calibration.) Doing the same between the first and second darks (0261 and 0263) gives a result with more noise, but a cleaner looking FFT (a less pronounced vertical line - smaller mean difference, so the bias is better removed).

     Which sub was shot under which conditions? Another important thing I just noticed: the exposure time is rather different for each of these subs - 914s, 909s, 931s and 970s. How did you set the exposure time? This could explain the "inversion" between subs - a lower mean level but higher noise (shot at a higher temperature but for less time than the other subs).
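
The difference-of-two-darks measurement above is easy to reproduce - a sketch, with placeholder file names for the converted subs:

```python
# Subtract two dark subs: the common bias / fixed-pattern signal cancels,
# leaving only the random noise (from both subs combined).
import numpy as np
from astropy.io import fits

d1 = fits.getdata("DSC0265.fits").astype(np.float64)
d2 = fits.getdata("DSC0269.fits").astype(np.float64)
diff = d2 - d1

print(f"mean = {diff.mean():.2f}, std = {diff.std():.2f}")
# The difference contains the noise of both subs, so a single sub's
# random noise is the measured std divided by sqrt(2):
print(f"per-sub random noise ~ {diff.std() / np.sqrt(2):.2f} ADU")
```
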
  20. I'm not an expert on DSLRs and the way they work, but I would imagine that white balance information is not applied to raw files, but rather in post-processing (or in the in-camera jpeg conversion step). At least that is what I believe, since I can quite easily change the white balance in software with my Canon DSLR and its raw images (I don't use it for AP, but sometimes for daytime photography). Let's see if I can find software to convert ARW to fits without any "distortion", and then examine those.
  21. Ok, let's keep things simple to see if we can avoid some issues. No median filter for noise, no cosmetic correction at this stage. Use regular average for the calibration frames (which camera are you using, btw, since you don't use darks?) and use kappa-sigma clipping for the final lights integration. Set sigma to something like 3 and kappa to something like 2-3 (sigma being the standard deviation threshold and kappa the number of iterations); a sketch of what this does is after this post. You can use Ha as the reference frame, but tell DSS not to include it in the stack (or even better - use groups; DSS should align them all to the same reference frame, right?). Save the result as 32-bit fits and post those.
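
For reference, this is what kappa-sigma clipping does under the hood, using the naming from the post (sigma = deviation threshold, kappa = iterations). A sketch of the principle, not DSS's implementation:

```python
# Kappa-sigma clipped stacking: iteratively reject per-pixel outliers more
# than `sigma` standard deviations from the mean, then average what's left.
import numpy as np

def kappa_sigma_stack(subs, sigma=3.0, kappa=3):
    """subs: array of shape (n_subs, H, W)."""
    data = np.ma.masked_invalid(subs.astype(np.float64))
    for _ in range(kappa):
        mean = data.mean(axis=0)
        std = data.std(axis=0)
        data = np.ma.masked_where(np.abs(data - mean) > sigma * std, data)
    return data.mean(axis=0).filled(np.nan)

# usage: stacked = kappa_sigma_stack(np.stack([sub1, sub2, sub3]))
```
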
  22. The best way to assess darks would be to post them in raw format, without any sort of transformation / debayering done to them. Here is a quick analysis of the first posted png. Due to the debayering performed (or maybe as a consequence of a light leak?), the statistics are not looking good: with a regular raw you would expect the "RGB channels" to have the same mean value and roughly the same StdDev (same for the Mode). Here we see that is not the case - green is significantly lower than the other two. This could be due to debayering, or even some sort of light leak (different pixel sensitivity to the light leak because of the R, G and B filters on them).

     We can look at the stretched channels to see if there is any sort of pattern (amp glow or bias pattern or whatever): you can see that green is much darker (this is the same linear stretch on each channel). Here are the channels stretched to roughly the same visual intensity (a different linear stretch for each, but showing roughly the same information), together with histograms: the channels look distinctly different. In principle a proper dark for a color camera should not differ from one for a mono camera - the channels should look roughly the same; in fact, you don't even need to split into channels for dark calibration - you do it while the data is still raw.

     In any case, I would be happy to run this sort of analysis on your darks again if you provide me with a RAW frame rather than a debayered one (to see if there is indeed a light leak, or whether all the issues above were caused by the debayering process).
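
The per-channel statistics are straightforward to compute on the un-debayered raw by slicing the CFA into its four Bayer positions. A sketch assuming an RGGB layout (swap the offsets for other patterns) and a placeholder file name:

```python
# Per-channel stats without debayering: slice the raw CFA into its four
# Bayer positions and compare means / standard deviations.
import numpy as np
from astropy.io import fits

raw = fits.getdata("dark_raw.fits").astype(np.float64)
channels = {
    "R":  raw[0::2, 0::2],
    "G1": raw[0::2, 1::2],
    "G2": raw[1::2, 0::2],
    "B":  raw[1::2, 1::2],
}
for name, ch in channels.items():
    print(f"{name}: mean={ch.mean():.2f}, std={ch.std():.2f}")
```
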
  23. Hi, I just had a look at the data you posted and am slightly confused by it. What stage of processing is this? I'm asking because I expected data straight out of DSS after stacking, but the attached files feel like something has been done to them and I don't quite know what. It might be something as simple as a slight stretch in DSS itself, or maybe some sharpening. The background does not feel right, the stars don't feel right - something is odd. Can you explain how you did your integration? (Did you drizzle or do something "fancy" to the data?) To be precise about what I mean, here is a piece of the red channel in the RGB composition: the grain is too big in the background - as if the image was enlarged or, I don't know, sharpened or denoised or a combination of the two.

     Btw, to answer your initial question, here is what I would do for an SHO with Ha lum composition (not sure if you can do that in PS; maybe you can with some clever layer manipulation):

     1. Make sure the data is nicely stacked and linear, and that it is in 32-bit float format (instead of 16-bit fixed point/int).
     2. Do background elimination at this stage (wipe).
     3. Do an individual stretch on each of the R, G and B channels until you get nice looking nebulosity in each, then combine the results into an RGB image. You can use quite heavy denoising at this stage - this is just color information and it is not as important to preserve its full sharpness.
     4. Do the same with Ha to create the luminance layer - stretch it, do sharpening / denoising - the full processing routine like you would for a mono image, until you are happy with the result.
     5. Take the RGB image and create an "RGB map" image out of it, which is just R = R/max(R,G,B), G = G/max(R,G,B), B = B/max(R,G,B). That is the tricky part to do in PS; maybe one could do it with layers (make an image from 3 layers set to "max value", then make other images with layers set to "divide value" or something like that).
     6. Take the completed luminance and multiply it with Rmap, Gmap and Bmap to get the R, G and B channels of the completed color image (a sketch of steps 5 and 6 follows below).
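
A sketch of steps 5 and 6 in Python, for anyone outside PS - assuming stretched 0-1 data stored planes-first (3, H, W) and placeholder file names:

```python
# Step 5: "RGB map" = each channel divided by the per-pixel max(R, G, B).
# Step 6: multiply the processed luminance back in for the final color image.
import numpy as np
from astropy.io import fits

rgb = np.clip(fits.getdata("sho_color.fits").astype(np.float64), 0, 1)
lum = np.clip(fits.getdata("ha_lum.fits").astype(np.float64), 0, 1)

peak = np.maximum(rgb.max(axis=0), 1e-12)  # avoid division by zero
rgb_map = rgb / peak                       # R/max, G/max, B/max
out = rgb_map * lum                        # luminance carries the detail
fits.writeto("final.fits", out.astype(np.float32), overwrite=True)
```
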