Everything posted by vlaiv

  1. If you calculated that you need 84h to achieve a given SNR on the faint part of the target under your conditions, then one parameter of that calculation was the filter to be used. SNR makes sense per channel or per filter - not overall. When combining LRGB data, about 90% of the noise in the final image is in the form of luminance noise. I've already done this on several occasions, but I'm going to repeat it - here is an experiment. This is our baseline image (random internet search for a bird). This is our baseline with noise added to the luminance data (L channel of Lab). This is our image with the same amount of noise added to the two channels that represent chrominance data (a and b components of the Lab model), each getting that amount of noise. Noise in the luminance data is much more noticeable and obtrusive even though there is more noise in the second image (two channels polluted versus only one). However, one must pay attention that LRGB is not a luminance + chrominance model unless it is treated so in processing - where only chrominance is extracted from RGB and L is used as the luminance of the final image. Then the above makes sense, and it makes sense to spend much more time on the luminance channel.
  2. You have 3 hours of each of LRGB. It does not make sense to add up the integration time of the components - unless you just want to compare your previous attempts on the same target or something similar, and then you say: I have a total of 12h of integration time. You might just want to record total imaging time - in that case, it makes sense to note down that it took a total of 12h of exposure to make that image.
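     A minimal sketch of the experiment in Python (scikit-image assumed; the image file name and the noise level are placeholders):

     import numpy as np
     from skimage import io, color

     rng = np.random.default_rng(0)
     rgb = io.imread("bird.jpg") / 255.0          # baseline image, RGB values in 0..1
     lab = color.rgb2lab(rgb)
     sigma = 10.0                                 # noise amount in Lab units, arbitrary

     # Version 1: noise added to the luminance (L) channel only
     lab_l = lab.copy()
     lab_l[..., 0] += rng.normal(0, sigma, lab.shape[:2])
     noisy_lum = color.lab2rgb(lab_l)

     # Version 2: the same amount of noise added to each chrominance channel (a and b)
     lab_ab = lab.copy()
     lab_ab[..., 1] += rng.normal(0, sigma, lab.shape[:2])
     lab_ab[..., 2] += rng.normal(0, sigma, lab.shape[:2])
     noisy_chroma = color.lab2rgb(lab_ab)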
  3. That is some amp glow, if I'm not mistaken. Here it is additionally stretched
  4. Ok, managed to find a reference to that software; it could be tricky to find a download for it: https://www.cloudynights.com/topic/547413-where-to-find-maskulator/
  5. For anyone wanting to try some different designs - I remember there was an app where you put in an image of your spider and it produces the diffraction pattern for you. I can't seem to find it now, but there is an alternative. In software that is capable of doing an FFT and squaring the image pixel values, you can produce the diffraction pattern yourself for any sort of obstructed aperture. Just draw the obstructed aperture, take the FFT of that image and square the resulting pixel values (math operation of square).
  6. Too low SNR for effective deconvolution. Also - since it is a very specific kernel, a regular Gaussian PSF won't be of much help for deconvolution.
  7. Yes, like in the comment above - I would not call it a diffraction eliminator mask - I'd call it a diffraction reshaper mask. Diffraction happens on edges. It is always perpendicular to the edge. It happens on the clear aperture of a telescope as well, because the edges of mirrors or lenses produce the same effect. Long straight edges gather all the light into the perpendicular direction and that creates a strong spike. Curved edges have many adjacent perpendicular directions - so no direction reinforces the others. This is how this mask behaves - it contains a full circle but spread into segments. That way there are many "small diffraction spikes" - each of them at a different angle, and they all combine to form a "glow" rather than noticeable spikes. Total diffraction is greater than with a regular spider configuration - simply because the diffracting edge is longer (total edge that diffracts light, when combined). This means that more of the light from a star is spread into the halo than if we had star and spikes only (the spikes contain fewer total photons than the halo, but since the halo is spread over a larger surface, it is harder to notice). In any case - it works to remove spikes, but it does reduce contrast compared to the normal configuration.
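     A rough numpy sketch of that approach: draw the obstructed aperture as a binary mask, FFT it and square the magnitudes to get the diffraction pattern (sizes and radii below are arbitrary):

     import numpy as np

     N = 1024
     y, x = np.mgrid[-N//2:N//2, -N//2:N//2]
     r = np.hypot(x, y)

     aperture = (r < 200).astype(float)           # clear circular aperture
     aperture[r < 50] = 0                         # central obstruction
     aperture[np.abs(x) < 3] = 0                  # vertical spider vane
     aperture[np.abs(y) < 3] = 0                  # horizontal spider vane

     pattern = np.abs(np.fft.fftshift(np.fft.fft2(aperture)))**2   # diffraction pattern
     pattern /= pattern.max()
     # view on a log stretch, e.g. plt.imshow(np.log10(pattern + 1e-9)), to see the spikes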
  8. Was it exactly the same setup? Connectors / extension rings and all? I did some calculations for faster 3" class scopes and 2" focusers - for field illumination. They are very short and have short FL, and that causes vignetting on anything but focusers with a very short draw tube. If you add a different extension - or even use a different connection, as the QHY camera has a fixed 2" part while it is removable on the ZWO, which means different extension tubes to get the same thing with the reducer - this can lead to a different focuser position. A different focuser position means that the drawtube can be inserted further down into the OTA, and that will create vignetting with a rather small central illuminated field - much like in the image above. If you want to rule out dew and other things - just shoot some flats in the daytime when it is warm and there is no chance of dew forming - but be careful to have the focuser at the same position it was during the night. Examine the flats - and if they have a similar bright spot, then try to arrange your setup so that the focuser is racked further out when you are in focus. This can be achieved by pushing the reducer / flattener into the focuser tube instead of attaching it at the end of the tube (if it can be done). In any case - the above looks to be a flats issue and should be solved with flats. I quickly ran a synthetic flat on the red channel and got this: (so this is not background removal - not subtraction, but actually division with a synthetic flat - as flats are supposed to work).
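     A crude sketch of what "division with a synthetic flat" means in practice (a proper routine would mask stars first; the file name and filter sizes are arbitrary, astropy and scipy assumed):

     import numpy as np
     from astropy.io import fits
     from scipy.ndimage import gaussian_filter, median_filter

     red = fits.getdata("red_channel.fits").astype(float)

     background = median_filter(red, size=25)         # suppress stars
     synthetic_flat = gaussian_filter(background, sigma=100)
     synthetic_flat /= np.median(synthetic_flat)      # normalize flat to ~1

     corrected = red / synthetic_flat                 # division, not subtraction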
  9. You are of course right - a nicer beginner scope is certainly an F/6-F/7 ED doublet or F/5-F/6 triplet, but what I really meant (and did not say) is: a nice beginner imaging scope on a budget. Most people starting out in the hobby don't want to rush in with a lot of cash and some simply can't afford the whole setup if it includes a scope that is 500-600 euro+ by itself (as there are a lot of extras to be purchased as well and the total quickly adds up). We tend to recommend a 5"-6" Newtonian in the budget role, or perhaps a 72mm ED scope. There is however an abundance of second-hand 4" refractors - as most visual observers move to ED glass once budget allows, and sell on their beginner refractor. At F/10, CA is much easier to manage, as you've also noted. I did not mention it, but another weak point with such a budget scope is the focuser. Most require some TLC before they can be used for imaging, and some are simply beyond help. BTW, I have a 4" F/10 scope waiting to be put to the test as an imaging scope in various roles - from planetary to long exposure imaging, I just need to get around to doing it.
  10. I think that an F/10 or so 4" refractor can be a really nice beginner imaging scope. Drawbacks are - chromatic aberration, obviously, but there are some filters to help with that. It's a bit long, so a better mount is needed. As far as oversampling goes - you can (and should) bin your data, even after capture, in software.
  11. I think that the easiest way to combine two datasets to reveal the SNR difference is to create a "split screen" scenario. Both stacks need to be registered against the same sub and calibrated / integrated in the same way (so as to have the same intensity - to be compatible). Then half of either stack is copied and pasted directly over the other. This creates a "split screen" scenario for linear data and provides you with a means to process both stacks in exactly the same way (whatever you do to process the image will equally affect both sides of the image).
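     A minimal numpy sketch of 2x2 software binning after capture (summing blocks of pixels; divide by 4 instead if you prefer average binning):

     import numpy as np

     def bin2x2(img):
         """Sum each 2x2 block of pixels; trims an odd row/column if present."""
         h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
         img = img[:h, :w]
         return img.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))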
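     A minimal numpy sketch of the paste step, assuming both stacks are already registered against the same sub and integrated the same way:

     import numpy as np

     def split_screen(stack_a, stack_b):
         """Left half from stack_a, right half from stack_b (same shape assumed)."""
         combined = stack_a.copy()
         half = combined.shape[1] // 2
         combined[:, half:] = stack_b[:, half:]
         return combined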
  12. Ok, one last try - Sun thru a rain spotted windshield of a car?
  13. The one I linked above has a feature to mount the handle on either the left or right side of the head for left/right handed operation, but I'm not sure if the handle can be mounted backward (I was hoping that this mechanism would allow for something like that).
  14. That is a good question, and I must admit that I have no clue. At one point I did some research into different deconvolution algorithms, and while we have an "ideal" algorithm that produces accurate results in theoretical conditions - which is just inverse filtering, or division in the frequency domain (as convolution is simply multiplication in the frequency domain, and the inverse is division) - the problem is that in real conditions we have two issues with our input. 1. We don't have the actual convolved function to perform deconvolution on. What we have is that function polluted / distorted by the presence of noise. 2. We don't have knowledge of the PSF used to convolve our baseline function. We can extract it from stars in the image, but it will only be an approximation to a certain degree, because in reality we don't have a single PSF operating on our image. We have a range of PSFs that are very similar - but not identical. Stars have different spectra and seeing influence depends on the wavelength of light in question. Blue stars will almost certainly have higher FWHM than red ones - when imaged thru a reflector. Using a refractor adds another level of complexity as no refractor is ideally corrected. Then there are aberrations of the optical system that vary with position in the field (coma for example grows as a function of distance from the optical axis) and even with correctors we don't get stars that are equal all over the frame. Further - we have the impact of seeing that is not the same over the whole FOV. With longer exposure it tends to average out - but still, it won't be 100% the same at all points in the image. Given the two points above - we can only hope to achieve so much with deconvolution. Different algorithms deal with the above two points in different ways. LR deconvolution assumes a fixed blur kernel and Gaussian+Poisson distribution - which would be a single exposure. The more the data resembles that, the better the results it will produce. I don't remember seeing a comprehensive comparison of deconvolution algorithms on astronomical data (specifically stacked amateur images).
  15. Given that the field is slightly bluish - the image could have been taken at dusk or dawn - maybe an overexposed, out of focus Venus in a star field?
  16. Yep. It's Richardson-Lucy or LR deconvolution (regularized). I have a suspicion that we are using LR deconvolution the wrong way. It is a very good algorithm - but it was developed to deal with a single sub rather than with a stack. The algorithm uses a probabilistic approach, and the assumptions are that the noise distribution follows a Poisson + Gaussian pattern - meaning shot noise + some amount of read noise. When applying it to the stack, we are using it outside of its intended domain. In an ideal world where we stacked with the shift + add technique - i.e. no sub pixel shifts of subs and no rotation of subs for alignment prior to stacking - it would still be valid, but since we use alignment with interpolation that effectively changes the noise distribution - I wonder how efficient LR really is in that scenario. Btw, this is just a technical rant on my part, nothing really to do with your comparison, I just wondered which deconvolution method you used.
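     A minimal numpy sketch of that "ideal" inverse filtering - division in the frequency domain. With any noise present the division blows up, which is exactly point 1 above; the small epsilon here only keeps the result finite:

     import numpy as np

     def inverse_filter(blurred, psf, eps=1e-3):
         """Naive frequency-domain division; psf is embedded into an image-sized array."""
         kernel = np.zeros_like(blurred, dtype=float)
         kh, kw = psf.shape
         kernel[:kh, :kw] = psf
         # shift the kernel so its centre sits at pixel (0, 0) to avoid a phase ramp
         kernel = np.roll(kernel, (-(kh // 2), -(kw // 2)), axis=(0, 1))
         H = np.fft.fft2(kernel)
         G = np.fft.fft2(blurred)
         return np.real(np.fft.ifft2(G / (H + eps)))   # the division that amplifies noise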
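     For reference, a minimal sketch of (unregularized) Richardson-Lucy as implemented in scikit-image, run on a linear frame with a PSF cut out around a star; file names and the iteration count are placeholders, and the num_iter keyword is as in recent scikit-image versions:

     import numpy as np
     from astropy.io import fits
     from skimage.restoration import richardson_lucy

     image = fits.getdata("stack.fits").astype(float)
     psf = fits.getdata("star_psf.fits").astype(float)
     psf /= psf.sum()                             # PSF must be normalized

     scaled = image / image.max()                 # scale to 0..1 (clip=True by default)
     restored = richardson_lucy(scaled, psf, num_iter=30)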
  17. Out of focus, almost full moon in a star field?
  18. I don't have any actual experience with this video head - but I've been eyeballing it as a cheap alt-az mount for small scopes: https://www.aliexpress.com/i/1005004674888947.html The only issue that I find with such heads is the range of motion. They allow 90 degrees of forward tilt but are limited in backward tilt - which is important for astronomy - we don't point our telescopes at the ground but rather up. This means that some sort of wedge must be used between telescope and mount - the telescope needs to be mounted pointing slightly up when the mount is level to get a usable range of motion.
  19. When you say that - what exactly is the deconvolution method used?
  20. I'm not entirely sure that it will make a difference in the case where you are optimally sampling. Sure, if you take two images with finite sized pixels, with a positional shift between them - then the sample values will be different, but those sample values will be tied to different locations (sampling positions), so it's normal for them to be different. The question is, when you restore the original function using both sets of data and the ideal interpolation function (sinc) - will you get the same or a different thing? My guess is that you will get the same thing - it will be the original function produced by the optical system convolved with the function representing pixel shape and sensitivity over that shape, and if I'm not mistaken - convolution is shift invariant.
  21. Geometric distortion is not the same as star distortion. Geometric distortion represents a different distance between two stars - depending on where you put those two stars in the frame. In an ideal situation it would not matter where you put the stars inside the FOV - they should be separated by the same amount. This is however impossible to do, especially with wide field lenses, because we are trying to map a sphere onto a flat surface (much like trying to picture the globe in an image). It can't be done in such a way that you preserve both distances and angles as they are. You need to sacrifice one or the other, or make some combination. This leads to strange distortions in some lenses - and that can be corrected in software. Star distortion is something else - it is the way light is focused into a point - or rather not into a point. It has nothing to do with the above geometric distortion. It is much, much harder to correct in software; in principle, in ideal conditions - meaning knowing the characteristics of the lens perfectly and having no noise in the image - it can be done, but in reality it can't. If it were feasible - we would stop using field flatteners and coma correctors, and there would have been no need for that servicing mission to Hubble when they discovered that the optics were flawed - it could have been corrected in software. But the reality is - it can't be done. As far as vignetting goes, yes, it will work, but better practice is to use flat frames to correct for that. Flat frames will correct for vignetting - but also any imperfections in the optical train - like dust particles or whatever creates slight light blockage. For this reason it is the approach that is regularly used in astrophotography over "synthetic flats".
  22. I took a look and I'm not any wiser. I think that it is simply an artifact of oversampling + sharpening.
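     A small 1-D numpy check of that argument: sample the same band-limited signal on two grids offset by half a sample, sinc-interpolate both back onto a common fine grid and compare (the test signal is arbitrary; the residual difference comes from truncating the sinc sums at the edges):

     import numpy as np

     def band_limited(t):
         # highest frequency well below the Nyquist limit of 0.5 cycles/sample
         return np.sin(2 * np.pi * 0.11 * t) + 0.5 * np.cos(2 * np.pi * 0.23 * t)

     def sinc_interp(samples, sample_t, query_t):
         # ideal (sinc) interpolation for unit-spaced samples
         return np.array([np.sum(samples * np.sinc(tq - sample_t)) for tq in query_t])

     n = np.arange(0, 200)                        # sampling grid A, spacing 1
     fine = np.linspace(50, 150, 500)             # interior points, away from edge effects

     rec_a = sinc_interp(band_limited(n), n, fine)
     rec_b = sinc_interp(band_limited(n + 0.5), n + 0.5, fine)   # grid B, half a sample shifted

     print(np.max(np.abs(rec_a - rec_b)))         # small - both grids restore the same function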
  23. This is true for solar Ha etalons, but I'm not sure if it is that important for visual filters and light pollution. The difference is the magnitude of light involved - both on band and off band, and also the fact that there is something called JND - just noticeable difference - which is about 7% for vision. We need something to be 7% brighter to barely notice that it is brighter. I think that you won't notice increased brightness of the background if the blocking part of the filter differs by a fraction of a percent. Again, for contrast - if we have say 7nm FWHM and roughly 300nm of whole range, that is ~ x43 difference. Even if our eyes were equally sensitive to the whole 400-700nm range (and they are not, they peak at around 500nm in scotopic vision - so perfect for OIII) - it would take a difference of about 0.16% in transmission (0.16% * 43 = ~7%) to even have a chance of noticing a contrast difference (again due to the need to change contrast by ~7% to start noticing). In reality, due to sensitivity to light not being uniform, this is probably more like 1-2% rather than 0.16%.
  24. Performance of the filter depends on two parameters: 1. width of the filter band - usually expressed as FWHM, and 2. peak transmission. The SVBony visual OIII has a FWHM of 18nm, and while Astronomik does not list the FWHM of its visual filter, it is safe to say that it is less than that. Here are the transmission graphs overlaid one on top of the other. Blue is Astronomik, green is SVBony. Both will be bested by the Astronomik OIII-CCD which has 12nm, or the even narrower one which has a 6nm bandpass. Similarly, the SVBony photographic OIII/Ha has a 7nm bandpass and it will perform better than the 12nm Astronomik CCD one, but a bit worse than the 6nm Astronomik OIII-CCD. Bandwidth determines the contrast of the target while peak transmission determines brightness (and a few percent there does not make a visible difference - we need about 7% difference in intensity to notice any difference at all).
  25. Why do you think a good visual OIII filter will best it?
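     The rough arithmetic from the post above, written out:

     whole_range_nm = 300      # ~400-700nm visual range
     fwhm_nm = 7               # filter bandpass
     jnd = 0.07                # just noticeable difference, ~7%

     ratio = whole_range_nm / fwhm_nm             # ~43x
     min_transmission_diff = jnd / ratio          # ~0.0016, i.e. ~0.16% off-band difference
     print(ratio, min_transmission_diff)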