Everything posted by vlaiv

  1. This looks like high altitude clouds rather than anything else. You can check by doing a trick:
- split the subs into two groups
- stack each group as if it were a whole session (you can do this with only one channel if you did LRGB imaging)
- compare the two resulting stacks for this pattern.
If the pattern is the same, it is camera related; if it's not the same, it varies with time and is probably external - most likely high altitude clouds. You can even inspect how many of the frames are affected: calibrate them and bin them like crazy (so you get a very small size - something like 320x240 or so) - this will boost the SNR of each sub so you'll be able to see all of the features in every sub. Apply the same stretch to each and make an animated gif out of them - it will show passing clouds if that was in fact what caused the issue in the first place. (A minimal sketch of this check is below.)
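A minimal sketch of the split-and-compare check, assuming calibrated subs saved as FITS files (the folder name and the bin factor are just placeholders for illustration):

```python
import numpy as np
from astropy.io import fits
from glob import glob

def mean_stack(files):
    """Plain average stack of a list of calibrated FITS subs."""
    return np.mean([fits.getdata(f).astype(np.float64) for f in files], axis=0)

def bin_image(img, factor):
    """Block-average ('bin like crazy') to boost per-sub SNR."""
    h, w = (img.shape[0] // factor) * factor, (img.shape[1] // factor) * factor
    return img[:h, :w].reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

subs = sorted(glob("calibrated/*.fits"))  # hypothetical path
first = mean_stack(subs[: len(subs) // 2])
second = mean_stack(subs[len(subs) // 2:])

# If the pattern survives in the difference, it differs between the two halves,
# i.e. it varies with time (external cause); if the difference is flat,
# the pattern is the same in both halves (camera related).
diff = first - second
print("residual std of difference:", diff.std())

# Heavily binned version of each sub - stretch these identically and
# assemble into an animated gif to watch for passing clouds.
small_subs = [bin_image(fits.getdata(f).astype(np.float64), 16) for f in subs]
```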
  2. Here is my processing attempt: it involves the use of ImageJ and a custom plugin, but I would be happy both to provide the plugin and to explain the procedure for background removal.
  3. Just a preliminary stretch in Gimp to see what is there: there is quite a bit of gradient that needs to be removed. I don't remove gradients in Gimp - I use my own tool for that. I'll do it later and post the results (and the tool, if someone wants to use it - it is a plugin for ImageJ).
  4. We need to consider two different scenarios. First, pixels that have a proper value, with a gaussian(-like) distribution. Strictly speaking, pixel values are a mix of gaussian and poisson distributions: a poisson distribution with a large expectation value looks like a gaussian, and read noise has a gaussian distribution (well, it should - sometimes it doesn't quite, if there is FPN and such). In any case, let's assume that pixel values follow a gaussian-type distribution around some value. These are "regular" pixels. But there are anomalous pixels as well: hot pixels, cosmic ray strikes, a passing satellite or aircraft will sometimes produce an anomalous pixel value.
Pixel values are just numbers, and we can't say that 16 is a valid pixel value while 52 is not - there is nothing about a number by itself that says it is a good or bad value. Only when we look at numbers in terms of a distribution can we say something about them.
Setting kappa to 1, in the case where all pixels are proper, will eliminate ~31.73% of samples and leave 68.27%. You will be "throwing away" a bit less than 1/3 of all samples. This in itself does not limit how deep you can go - if you image 6 hours, you will have ~4h in your stack; image for 60h and you will end up with ~40h in your stack. No matter how many hours you image, kappa of 1 will remove about 1/3 of samples (a bit less). You can still go as deep as you like by accumulating more subs and then throwing 1/3 of them away - granted, it is not very efficient.
But why would you choose kappa to be 1? Let's go back for a moment to valid and invalid pixels. Like I said, you can't tell from a number alone whether that particular number is valid - in fact, every number is possible; you have to "play dice" and decide how likely it is to belong. Imagine you have a pixel value of 10 and sigma of 2. This means that 99.73% of all samples lie between 4 and 16 (+/- 3 sigma). Is 52 an anomalous number? Strictly speaking it is not impossible - but in this particular case it is 21 sigma away. The probability that 52 belongs to a distribution with mean 10 and sigma 2 is so low that you could image for the whole time the universe has existed and still not end up with the number 52 in any of your pixels by chance alone. Seven sigma, for example, is 1 in 390682215445. So we can safely say that 52 is an anomalous number - it is far more likely to be a hot pixel or a passing airplane or something else than a genuine member of the distribution.
Kappa of 1 removes too much; kappa of 21 includes even almost impossible events. What, then, is a good kappa to take? One that is low enough but does not impact your samples significantly when there are no outliers. Let's say you want your SNR to suffer at most 1%. Since SNR scales with the square root of the number of subs, this means you want to throw away at most about 2% of subs (1.01^2 = 1.0201, about 2%). Kappa of 2 corresponds to a loss of about 4.5% - we need a kappa a bit higher than that. Kappa of 3 will eliminate only 0.27%, so that is good, but we can also choose a number between 2 and 3 that gives us 2% sub loss and 1% SNR loss.
This is, of course, if we use one iteration. What happens with more iterations? In each iteration we either remove some values or remove none. If the latter happens, we are done regardless of the number of iterations - once we keep all the values, the average remains the same and the standard deviation remains the same, so in the next iteration we won't remove anything either, and so on. But if we do remove some values, we alter both the average and the standard deviation.
If kappa is too low, there is a greater probability that we will remove some values and continue into the next round to remove further values. A higher kappa stops this process sooner, even if all values are "proper". In general you want 2 to 3 iterations at most. It should be based on how many different "hot pixel" values can be present among your samples: one hot pixel group can happen; two is very unlikely (you need two nearby hot pixels with different values, and dithering needs to put them on the same spot); three is almost impossible. I would say that 3 iterations just makes sure, and won't change anything if kappa is proper over two iterations (as we have seen, if nothing is removed, iterations stop early). This is why I recommend kappa 3 and 3 iterations. (The core loop is sketched below.)
There are cases where the above settings don't work well. If you happen to have a very fast satellite that is also dim, it will produce values that are not very different from the background - say 10-20% brighter - and it will be visible in the final stack if you push your image and stretch aggressively. For this case I recommend you do two stacks: the first a 3.0 / 3 stack, and the second a much more aggressive stack, like 1.0 / 4 or similar. Then take only the offending pixels from the aggressive stack and paste them over the regular stack. This will remove the trail even if it's very faint, but it is a bit involved, as you have to select a bunch of pixels and copy/paste.
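For reference, one way to write the kappa-sigma loop described above (per pixel, over a stack of registered subs; a minimal numpy sketch, not any particular software's implementation):

```python
import numpy as np

def kappa_sigma_stack(subs, kappa=3.0, iterations=3):
    """Stack registered subs with kappa-sigma clipping.

    subs: array of shape (n_subs, height, width).
    In each iteration, values further than kappa * sigma from the
    per-pixel mean are masked out; the mean of surviving values in
    the last iteration is the stacking result for that pixel.
    """
    data = np.ma.masked_invalid(np.asarray(subs, dtype=np.float64))
    for _ in range(iterations):
        mean = data.mean(axis=0)
        sigma = data.std(axis=0)
        outliers = np.ma.filled(np.abs(data - mean) > kappa * sigma, False)
        if not outliers.any():   # nothing removed - further iterations change nothing
            break
        data = np.ma.masked_where(outliers, data)
    return data.mean(axis=0).filled(np.nan)

# Example: 100 well-behaved subs plus one anomalous pixel on a single frame.
rng = np.random.default_rng(0)
stack = rng.normal(10, 2, size=(100, 4, 4))
stack[42, 1, 1] = 52          # the "21 sigma" outlier from the text
result = kappa_sigma_stack(stack)
```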
  5. It can take a couple of shapes, but I'll point out one that I personally experienced. You have heard of differential flexure - I had a case of it. It was not severe enough to cause streaking in a single sub, but enough to cause a difference of 50+ px between the first and last sub of the evening. Each sub needed to be shifted by something like half a pixel to be aligned with the previous one, so it was not obvious in a single sub - but if you compare every third or fourth sub, they look properly dithered. The result is the same as dithering every third or fourth sub.
This also depends on sampling resolution. For someone sampling at 0.5"/px (and later binning) it is easy to get a 1px shift between subs due to drift and still have decently round stars. Let's say that FWHM is something like 2.5" - that will be 5px. Elongation by one pixel will make it 6px in that direction, or about 20% larger diameter - a 1.2:1 deformation, rather small to be seen, particularly if one orients the sub RA-DEC to X-Y. It is easier to spot elongation when it is not aligned to the horizontal or vertical of the frame.
The ideal dither would be exactly integer steps - even single pixel offsets, but never repeated - in a random pattern. If you have integer offsets, you don't need to use interpolation when registering subs - you just shift the whole image by exactly an integer number of pixels in X and Y. Unfortunately it never works out like that - offsets are always fractional. Interpolation algorithms use neighboring pixel values, sometimes 5x5 pixels or more. This means that a single hot pixel will be "smeared out" over multiple adjacent pixels after the sub is registered.
For sigma clip to work best, you want to minimize the number of outliers. The more outliers you have, the more impact they have on the average value and the less likely they are to be flagged as outliers. For this reason you want dithers larger than a few pixels. The other reason is that dither direction is random, and software usually does not remember all previous dithers so as to go where it has not been before. To illustrate: imagine you can only go left or right in your dithers, and you dither by 1px - if the previous dither moved right, this dither has a 50% chance of landing on a position you occupied just one exposure before. Now imagine you still only dither left and right, but can go anywhere from 1 to 10px in a single dither. If the last dither was 5px right, you now have only a 5% chance of returning to that exact position (10% chance of getting a 5px movement times 1/2 for the direction).
Btw, here is what a single hot pixel looks like if you shift it by something like 0.121, 0.788 using quintic B-spline interpolation: it impacts almost 9x9 pixels around it. (A small sketch reproducing this is below.)
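A small sketch reproducing that smearing - one hot pixel, shifted by the fractional offsets quoted above, using scipy's order-5 spline shift (close to the quintic B-spline resampling mentioned in the text):

```python
import numpy as np
from scipy.ndimage import shift

# Single hot pixel in an otherwise empty frame.
frame = np.zeros((17, 17))
frame[8, 8] = 1000.0

# Fractional-pixel registration shift with quintic spline interpolation.
registered = shift(frame, (0.121, 0.788), order=5)

# Count pixels that received more than 1% of the peak value.
affected = np.abs(registered) > 0.01 * np.abs(registered).max()
print("pixels affected:", affected.sum())   # dozens - roughly a 9x9 neighbourhood
```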
  6. I did not want to get too technical, but here is what I find to be the most beneficial part of dithering. People tend to use too few calibration frames. For the moment, let's just consider dark subs.
Imagine we have a camera with 3.5e of read noise (many modern CMOS cameras have less than that, but CCDs have more). You'll often see a recommendation for 20-30 dark subs. Let's go with 25 - it's easier to calculate. Stacking 25 dark subs will reduce read noise by x5 in the master dark. With cooled sensors, read noise is the dominant type of noise in darks, so we can say that the master dark has 0.7e of noise in it (3.5e / 5).
Now imagine you have perfect guiding and there is no shift between the subs. Each sub is calibrated by first subtracting the master dark, and then the subs are added / averaged together. If you don't have shift between subs, the same master dark value will be removed from all subs for a given pixel. This means that we can "pull it in front of the brackets", mathematically speaking - subtracting the same value from each number in an average is the same as subtracting that value from the average. What happens is that you stack your subs and, at the end, you "add in" 0.7e of read noise back. If you have a lot of subs (and we mostly do), their total read noise will be small. For example, let's stack 256 light subs. They also have 3.5e worth of read noise each, but after averaging, that becomes 3.5 / 16 (16 being the square root of 256) = 0.21875e. Now we remove our master dark, and hence add 0.7e worth of read noise back in, and the resulting read noise will be ~0.73338e - almost the same as the read noise of the master dark!
As a contrast, let's examine the case where we have perfect dither between every frame. In this case we know that no pixel from the master dark will land in the exact same position twice, and we can mathematically treat those values as linearly independent vectors - the same addition applies as with regular random noise. In this case we must calculate each sub minus master dark, and then stack those 256 calibrated subs. Every single sub will have sqrt(3.5^2 + 0.7^2) = ~3.56931e of read noise. We are stacking 256 of them, so the result will have x16 less noise, or ~0.22308e.
If you don't dither and have perfect guiding, the read noise of the stack will be almost the same as that of the master dark - in this case 0.7e. But if you dither perfectly, the read noise of the stack will be almost the same as if you had not used a master dark at all and it contributed nothing - 0.22e. In other terms: just dithering, in the above imaginary example, made our background contain x3 less noise (we did not include LP noise and other stuff - granted, that changes the picture a little, but the conclusion holds - it is always better to dither because of this). The short calculation below reproduces these numbers.
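The arithmetic above, spelled out as plain noise propagation:

```python
import math

read_noise = 3.5          # e-, per sub
n_darks = 25
n_lights = 256

master_dark_noise = read_noise / math.sqrt(n_darks)      # 0.7 e-
light_stack_noise = read_noise / math.sqrt(n_lights)     # 0.21875 e-

# No dithering: the same master dark is subtracted from every sub, so its
# noise is effectively added back once, after stacking.
no_dither = math.sqrt(light_stack_noise**2 + master_dark_noise**2)

# Perfect dithering: the master dark noise lands on a different pixel every
# time, so it averages down together with the lights' read noise.
per_sub = math.sqrt(read_noise**2 + master_dark_noise**2)   # ~3.5693 e-
with_dither = per_sub / math.sqrt(n_lights)                 # ~0.2231 e-

print(no_dither, with_dither, no_dither / with_dither)      # ~0.733, ~0.223, ~x3.3
```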
  7. Kappa is a sigma multiplier - the above is a two sigma range. The number of iterations sets how many times sigma is estimated and clipping performed. In each iteration, all remaining values are averaged and the standard deviation is calculated; only pixel values that lie in the avg +/- kappa*sigma range are "passed" to the next iteration. The average value in the last iteration is the stacking result for that particular pixel.
The above 2.00 and 5 settings are quite aggressive. Even on the first pass, kappa of 2 clips ~4.5% of perfectly good samples, and since the standard deviation shrinks after every clip, each following iteration cuts deeper - over 5 iterations, out of 100 good subs only roughly 88% of samples survive, and the SNR improvement drops from x10 to about x9.4 even if all subs are good and without issues! Much better settings would be 3.0 and 3 - out of 100 subs that would reduce the SNR improvement from x10 to only ~x9.955.
  8. That really depends on the parameters for sigma clipping. I'm not sure people understand how to properly select them, so here is a quick explanation. There is an important graph to understand when thinking about the kappa value in kappa-sigma clip (or just plain old sigma clip - so many names for the same algorithm): the famous "68/95/99.7 rule". When we have a gaussian distribution of samples, 68.27% of all samples fall within one standard deviation of the true value. This means that if all subs are without issues, 68.27% of them will have a pixel value closer to the true value than a single sigma. Going further, 95.45% of pixels will have a value within 2 sigma of the true value, and 99.73% within three sigma. What does this mean for the algorithm? It means that if you want to be sure to include as many values from your subs as possible, you should choose kappa to be 3.0. This will ensure that almost all proper pixel values are included and that really anomalous pixel values are excluded. (A two-line check of these figures is below.)
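The 68/95/99.7 figures (and the fraction of good samples any given kappa will clip) come straight from the gaussian error function:

```python
import math

for kappa in (1, 2, 3):
    inside = math.erf(kappa / math.sqrt(2))   # fraction within +/- kappa sigma
    print(f"kappa={kappa}: keeps {inside:.4%}, clips {1 - inside:.4%} of good samples")
# kappa=1 keeps 68.27%, kappa=2 keeps 95.45%, kappa=3 keeps 99.73%
```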
  9. In my view:
- One should dither, preferably at every sub. If you find that too much of a waste of time, dither every couple of subs. In reality it's not that much of a waste: my dithers last 10 seconds or so - I guide with a 3-4s guide cycle and it takes 3-4 guide exposures to complete a dither. That is at most 20% or so spent on dithering, even with very short exposures of 1 minute; with 3-4 minute exposures it goes down to less than 10% of the time. You can make up for it with an additional half hour of imaging on a 4h night - so again, not such a waste of time.
- If you have good guiding and work at lower resolution, dithering is very important. At high resolution, if your guiding is not so tight, subs will be somewhat dithered "naturally".
Dithering is always beneficial, even if you have a perfect camera with no issues like hot pixels and such, because dithering "spreads around" the noise from calibration subs further. Do remember that you need to use specific algorithms for dithered subs to work best. You won't get anything special with a simple average - you will get the above mentioned clusters of hot pixels. Use sigma clip stacking (kappa-sigma, or sigma reject, or however it is called in your stacking software) to get the hot pixel removal benefit from dithering. In fact, use such algorithms whenever you dither.
  10. Don't remove it! I went through the trouble of making those nice screenshots - I don't want them to go to waste
  11. Not real - posterisation due to processing. If you want to know what is real / to have a good reference, there is a massively detailed image of M31 online. Due to the original size of the image (69536px x 22230px and 4.3GB), here is the "zoomable" online version: https://www.spacetelescope.org/images/heic1502a/zoomable/ Here is a screen grab of the core region: there is some sort of ring-like structure, but that is not what is showing in your image. Btw, I just love that extreme resolution image, as it shows features like this: that would no doubt be mistaken for a star, but it is a globular cluster. Or maybe this one - Andromeda Pleiades, anyone?
  12. I don't think there is a simple answer to that question. The topic is rather complex and depends, among other things, on the way you process your images. To see what is an acceptable level of noise in a particular color, we need to examine how people perceive color differences based on how "distant" the colors are in the physical world. For example, look at this chromaticity diagram with error contours: colors inside the ellipses are roughly the same visually (same level of difference across the surface of the contour), but the sizes / diameters of these error contours are not the same and depend on the color itself. It looks like we are least sensitive to errors in green but react strongly to errors in blue. This suggests that noise should be lowest in blue, and that the eye tolerates errors in green the most. I've written the above just to point out how complicated the topic is - I have not even started to consider the relationship between luminance noise and chrominance noise and how we perceive each. Bottom line: maybe try a couple of approaches - the same time spent on each of LRGB, twice as much time for L as for R, G and B combined, binning of color, and so on - and just use the one you like the most, as there is no definitive answer as to the best way to do it (maybe there is, but I don't think we know it yet).
  13. I'm speaking from a position of limited experience with mounts - I've got an Heq5 that I've heavily modded to get it to guide at 0.5" RMS. The EQ35 is a hybrid of the EQ5 and EQ3. It is based on the EQ3 platform with the RA assembly of the EQ5, if I'm not mistaken - or, according to the EQMOD Prerequisites page found here http://eq-mod.sourceforge.net/prerequisites.html, it could in fact be the RA from an EQ6 class mount, as both the EQ6 and EQ35 have 180 teeth on their RA worm gear. In any case, I'm skeptical that the EQM35 has a higher payload capacity than the EQ5. Both are quoted differently on different websites, but most of the time it is something like 10kg visual / 7kg imaging, and I guess that is about right. However, if you look at the supplied counterweights you will notice that they are different - the EQ35 has 2x3.4kg while the EQ5 has 2x5kg counterweights. Larger counterweights mean you can balance a larger scope, and it stands to reason that the mount has a higher capacity. I think the EQ35 is more like the EQ3 - meant for portability, but with improved tracking / guiding performance.
On the topic of guiding - we can't say which of these mounts will guide better. On paper there should not be much difference between the two: both have just under 0.3" step resolution (0.28125" for the EQ35 and 0.287642" for the EQ5). The EQ35 has much less precision in DEC, so that could be a drawback, but it won't be a huge disadvantage. In reality there is so much sample-to-sample variation that either can perform well or badly and be better than the other, and if you want to get the most out of your mount you will have to tune it at some point. With a bit of luck and a bit of skill you should be able to get a rather good mount out of either of the two. I still recommend the EQ5, as stability is an important part. If, on the other hand, you value portability - then choose the EQ35.
  14. What is your budget in the end? I would not consider the EQM-35 as an option - the price difference between it and the EQ5 is very small and the EQ5 is going to be the more stable mount. Another, a bit more expensive, option - but not as expensive as the Heq5 - would be this: https://www.firstlightoptics.com/ioptron-mounts/ioptron-cem25p-center-balanced-equatorial-goto-mount.html On the other hand, if you are tight on budget but are handy and don't mind a bit of DIY, maybe get a plain EQ5 type mount and motorize it yourself - either some arduino + steppers, or maybe a kit conversion: https://www.astroeq.co.uk/tutorials.php or https://www.firstlightoptics.com/sky-watcher-mount-accessories/enhanced-dual-axis-dc-motor-drives-for-eq-5.html You can do the same with an EQ3, but the EQ5 is the better mount. In the end, some people use the AzGTI in EQ mode, or the Star Adventurer, for a very mobile, very low cost imaging solution.
  15. I think you have a light leak. You should not have such a gradient in your dark, and there are hints of dust shadows - although, depending on the type of light leak, I would not expect them to look like that. It might be a front side IR leak: if the scope cover is plastic, plastic can sometimes be transparent to IR wavelengths. This would also mean that you did not have a UV/IR cut filter in place when you took your darks. In any case, making sure you don't have a light leak would be the first priority. I can't explain that brighter patch, but let's see if you have a light leak issue and whether solving it solves the other issues.
  16. Maybe you did not use a sensitive enough camera before on this target / star combination?
  17. Are you implying that the ASI 1600 sensor is somehow responsible for those rays? I would say it is down to the optics rather than the chip used to record the image.
  18. Overexposure of luminance is not important - you end up with very strong stars after stretching anyway, as if they were clipped in recording. Color, on the other hand, should not be overexposed - but luckily there is a simple technique one can use if there is overexposure (even if it is only star cores): shoot a handful of short subs as well, and replace the overexposed pixels in the final stack with suitable values from the short stack. (A sketch of that replacement is below.)
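A minimal sketch of that repair, assuming linear, mutually registered stacks and a known exposure ratio (the saturation level and names are illustrative placeholders):

```python
import numpy as np

def repair_saturation(long_stack, short_stack, exposure_ratio, sat_level=60000):
    """Replace overexposed pixels in the long stack with scaled short-stack values.

    exposure_ratio: long sub length / short sub length, used to scale the
    short stack up to the signal level an unsaturated long exposure would have.
    Both stacks must be linear (unstretched) and registered to each other.
    """
    repaired = long_stack.astype(np.float64).copy()
    saturated = long_stack >= sat_level          # e.g. blown star cores
    repaired[saturated] = short_stack[saturated] * exposure_ratio
    return repaired

# e.g. 120s color subs repaired with a stack of 10s subs:
# fixed = repair_saturation(long_stack, short_stack, exposure_ratio=12.0)
```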
  19. That sort of makes sense - I'll briefly explain why. Duration of exposure (if you worry about that sort of thing - getting the last bit of SNR) is related mostly to sky glow noise. Cooled cameras nowadays have rather low dark current, so that is not a big deal, and we often image faint stuff, so shot noise of the target is not dominant (and if it is, the signal is already strong, so your SNR is fine). That leaves sky glow noise as the important bit: you want it to be larger than the read noise. At that point you enter the region of diminishing returns - yes, longer subs will get you a better result, but the difference becomes marginal real fast.
RGB filters divide the 400-700nm range into roughly 3 distinct equal "zones", each about 100nm wide, while L covers the whole range. Sky glow can be thought of as fairly uniform over that range. From this it follows that the L filter will capture about x3 more sky glow signal in the same amount of time as each of the RGB filters, and the associated noise will be larger by a factor of about the square root of 3 = x1.732. Since sky glow signal grows linearly with exposure time, L reaches the same sky-noise-to-read-noise ratio in about a third of the RGB exposure time - so exposing L at around half the RGB sub length already puts you comfortably past the read noise. (A quick numeric check is below.)
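A quick numeric check of that reasoning; the read noise and sky rate values are assumed for illustration, not measured:

```python
read_noise = 1.7              # e-, illustrative CMOS value
sky_rate_rgb = 0.5            # e-/s per pixel through one RGB filter (assumed)
sky_rate_l = 3 * sky_rate_rgb # L passes roughly the whole 400-700nm range

def exposure_to_swamp(sky_rate, k=5.0):
    """Sub length where sky noise is k times read noise (common rule of thumb).

    Sky noise = sqrt(rate * t), so sqrt(rate * t) = k * RN gives
    t = (k * RN)^2 / rate.
    """
    return (k * read_noise) ** 2 / sky_rate

t_rgb = exposure_to_swamp(sky_rate_rgb)
t_l = exposure_to_swamp(sky_rate_l)
print(f"RGB: {t_rgb:.0f}s, L: {t_l:.0f}s, ratio {t_rgb / t_l:.1f}")   # ratio = 3
```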
  20. Completely irrelevant in all aspects, and most people use a single exposure length because it is the easiest approach - only a single master dark is needed.
- Color balance does not depend on exposure length. If you think it does, you can simply scale channels by multiplication to match what the signal strength would be in an equivalent exposure.
- SNR does depend on number of exposures vs exposure duration via read noise and LP noise, and in principle you could find a certain exposure length for each channel that is equivalent in some metric to the other channels - but that is not going to guarantee you the same SNR per channel. In the end, most of the SNR depends on target signal, and that is different for each target. You can control how much SNR you have per channel by using different total imaging time (a different number of subs of the same duration) instead of sub duration.
- I've seen someone mention color binning - with CMOS cameras software binning is almost the same as hardware binning on CCD cameras. In fact, if you are clever and do the math properly, you can make them be the same, but it's not really needed, as the difference will be small to start with. It is worth binning color and then resampling it back up to the higher resolution of the luminance if you suffer from color noise. You don't have to do it - you can do much stronger noise reduction on the color channels prior to color transfer, and the effect will be pretty much the same. This is because we perceive detail much more in luminance than in color. (A small binning / upsampling sketch follows below.)
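A minimal numpy sketch of that bin-then-upsample step for a color channel - 2x2 software binning followed by resampling back to the luminance grid (nearest neighbour here purely for brevity; any interpolation will do):

```python
import numpy as np

def software_bin2x2(channel):
    """Average 2x2 pixel blocks - trades resolution for ~x2 lower noise."""
    h, w = (channel.shape[0] // 2) * 2, (channel.shape[1] // 2) * 2
    return channel[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2x2(channel):
    """Resample the binned channel back up to the luminance resolution."""
    return np.repeat(np.repeat(channel, 2, axis=0), 2, axis=1)

# red = upsample2x2(software_bin2x2(red))   # then transfer color onto luminance
```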
  21. Maybe it would be best to do calibration in PIPP and then run the calibrated result through AutoStakkert. In any case, you can use PIPP to turn a SER file into a FITS sequence and then stack the FITS files in some other software. Make sure that you turn off all processing in PIPP and select FITS files as output.
  22. Hi and welcome to SGL. Yes, I believe this is related to debayering - specifically, the wrong bayer matrix order being used. RGGB should be the proper order, but this also depends on other things - like whether the software reads frames "bottom-up" or "top-down" (don't ask - by a strange convention, screen space is top-down, with the positive Y direction pointing down, while file space is like regular coordinates, with the positive axis going up - and sometimes software developers don't pay attention to this, or choose to disregard it for simplicity). A flipped read order shifts the bayer pattern, as the snippet below shows.
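To see why the row order matters, here is a tiny demonstration: flipping a frame vertically turns an RGGB mosaic into GBRG, so a debayer step that still assumes RGGB produces swapped colors:

```python
import numpy as np

# One 2x2 bayer tile, top-down order: R G / G B  (RGGB)
tile = np.array([["R", "G"],
                 ["G", "B"]])

flipped = np.flipud(tile)   # the same data read bottom-up
print(flipped)              # [['G' 'B'] ['R' 'G']]  -> pattern is now GBRG
```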
  23. If you want to try to get that "natural" look for this nebula, here is the workflow I would suggest:
1. Make a synthetic luminance - use approaches like a weighted average, or max(Ha, OIII). If the Ha image contains signal in all areas where the OIII signal is, use Ha as luminance, as it is almost always higher in SNR.
2. Use a color picker to pick your blue. The closest color to OIII is something like this - but you might want to choose a deeper blue. Once you select the color you want, note its RGB ratios.
3. Stretch the Ha image, but don't overdo it - if you want to have distinct blue, leave the center dark when you stretch. This will be your red channel.
4. Stretch the OIII image and apply it to RGB with a channel mixer, using the specific ratios for the blue color of your choice.
5. Combine the red stretched Ha and the stretched OIII colorized to blue to get the color information.
6. Stretch the synthetic luminance and do denoising / sharpening - all the processing you like in your image - and finally
7. Transfer the color from 5 onto 6 for the finished image.
A rough sketch of steps 1-5 is below.
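A rough numpy sketch of steps 1-5, assuming linear, registered Ha and OIII stacks; the asinh stretch and the blue RGB ratios are placeholders for whatever stretch and picked color you actually use:

```python
import numpy as np

def stretch(img, strength=100.0):
    """Placeholder non-linear stretch (asinh), output normalized to 0-1."""
    s = np.arcsinh(strength * img / img.max())
    return s / s.max()

def hoo_color(ha, oiii, blue_ratio=(0.0, 0.35, 1.0)):
    """Steps 1-5: synthetic luminance plus Ha-red / OIII-blue color combine."""
    lum = np.maximum(ha, oiii)            # step 1: max(Ha, OIII) luminance
    red = stretch(ha)                     # step 3: stretched Ha as red channel
    o = stretch(oiii)                     # step 4: stretched OIII ...
    rgb = np.dstack([red,
                     o * blue_ratio[1],   # ... mixed into G and B with the
                     o * blue_ratio[2]])  # RGB ratios of the picked blue
    return rgb, stretch(lum)              # step 5 color + luminance for steps 6-7

# rgb, lum = hoo_color(ha_stack, oiii_stack)   # hypothetical input stacks
```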
  24. Negative / invert creates issues with color. If you want, for example, to keep the actual color of stars in your sketches when processing them, there is a simple way of doing that. Here is an example I made a couple of years ago to show this effect: the image is of M82 - one I took with a QHY5II color camera at the time (a bit of EEVA back then) - and on the right is what it would look like if done on paper. You can use Gimp to accomplish this (or any other program that can do a color space transform). We will do it in one direction, but you can go the opposite way in the same manner: open up an image like this one, choose Colors / Components / Decompose and choose the LAB color model, take the L component and do Colors / Value invert, and after that do Colors / Components / Recompose. This is the result: (here the stars are black without color because they were saturated in the original image - but in your drawing they should retain color if you draw them with the appropriate color). The same transform is sketched in code below. HTH
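The same lightness-only inversion outside of Gimp - a short sketch using scikit-image's LAB conversion (any library with an RGB/LAB transform will do; the file names are hypothetical):

```python
import numpy as np
from skimage import color, io

img = io.imread("sketch.png")[..., :3] / 255.0   # RGB scaled to 0-1

lab = color.rgb2lab(img)
lab[..., 0] = 100.0 - lab[..., 0]   # invert only L (lightness); a/b color stays intact
inverted = color.lab2rgb(lab)

io.imsave("sketch_inverted.png", (np.clip(inverted, 0, 1) * 255).astype(np.uint8))
```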
  25. If you will be doing a setup each session, limit your choice to the CEM60 - the CEM120 mount head is 26kg alone, and I'm sometimes fed up with setting up even my Heq5 each session. Otherwise I would advise the CEM120, regardless of the fact that you will be using a relatively light scope (less than 15kg total). If you can, maybe build a pier - if you have the option to leave the mount on the pier when not in use (as opposed to storing it away), then the CEM120 is still an option.