Everything posted by vlaiv

  1. These are standard / dyadic filters, not Gaussian, right? A Gaussian filter uses the same scheme as unsharp mask (which uses a single decomposition). I think that the wavelet / multi frequency approach is a very good idea and just depends on the filter used to separate frequencies. The problem with the deconvolution approach is the fact that you need to know or guess the kernel. Wavelets are a sort of guided kernel guessing - we use our visual feedback as a guide of how good the current kernel is. Regular deconvolution is missing that, and it also models the kernel with a single model (say Moffat, Gaussian, Lorentz or similar) while the actual blur kernel can be something in between or totally different. In either case - all these methods have a strong mathematical background and indeed produce proper results - actual detail being sharpened. Sometimes there are artifacts, of course, if parameters are wrong.
  2. What do you think about the frequency decomposition approach? I think it is very similar to wavelets, but I'm not 100% sure how wavelets work. There is an excellent paper on wavelets that I need to revisit in order to figure out a possible approach. I'm currently thinking of doing frequency decomposition by scaling. Here is a simple explanation of the approach: You take the original image and scale it down by some factor - say 80% of original size. The scaling method is key - as it provides a certain frequency response. Then you take that scaled image and scale it back up to 100% again using a certain resampling method. Such an image will effectively have a low pass filter applied to it, and you can subtract it from the original image to get the high frequency component. Doing that many times will slice the frequency domain into small bands - each of which you can simply multiply with some value. Here is a graphic representation in the frequency domain: Here is what decomposition into 50% looks like: I just generated some Gaussian noise and created a 50% resampled version using bilinear resampling. Then I scaled that back up and got a low detail image. The difference of the two is the high frequency image: We can repeat this process and get multiple scale divisions (in fact it needs to scale as 1/X to have linear character in the frequency domain, so 1/2, 1/3, 1/4 and so on of the size). Then it is a simple matter of multiplying each scale with a "boost" factor - which is the inverse of the attenuation factor. The catch is to apply a resampling filter that produces nice frequency separation.
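The scale-down / scale-up decomposition described above can be sketched in a few lines. This is a minimal illustration assuming numpy and scipy; the function names are my own, not from any particular tool:

```python
import numpy as np
from scipy.ndimage import zoom

def _resize(img, shape):
    # bilinear resize to an exact target shape (zoom can be off by one pixel)
    out = zoom(img, [t / s for t, s in zip(shape, img.shape)], order=1)
    out = out[:shape[0], :shape[1]]
    pad = [(0, t - s) for t, s in zip(shape, out.shape)]
    return np.pad(out, pad, mode="edge") if any(p[1] for p in pad) else out

def frequency_bands(image, scales=(1/2, 1/3, 1/4)):
    # each pass: low-pass via down/up-sampling, band = current minus low-pass
    bands, current = [], image.astype(np.float64)
    for s in scales:
        small = zoom(current, s, order=1)        # downscale (bilinear)
        low = _resize(small, current.shape)      # scale back up to 100%
        bands.append(current - low)              # high-frequency band
        current = low
    bands.append(current)                        # residual low frequencies
    return bands

def recombine(bands, boosts):
    # multiply each band by its boost factor and sum back together
    return sum(b * band for band, b in zip(bands, boosts))
```

With all boosts set to 1 the bands telescope back to the original image exactly; boosting the first band above 1 amplifies the finest scale.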
  3. There is no point in making copies of the binned image. Binning does both SNR improvement and image reduction - you either do that, or you split the images and then stack them. In either case, the result with regard to SNR improvement is the same. Bin x3 results in x3 SNR improvement and so does stacking x9 more subs - SNR improvement is equal to the square root of the number of stacked subs - stack x9 more, and SNR improves by sqrt(9) = x3. The only difference is that you end up aligning those 9 subs when stacking them and you don't align them when binning. If you are over sampled - there is minimal difference between the two, and it also depends on the resampling used for aligning. Sift_Bin_V2.class (place this file in the plugins folder of your ImageJ distribution). The plugin works on 32bit images and in your case, you would be using the following parameters: (binning will try to do some Lanczos alignment on slices, but I think it is a bit buggy - so use it just for splitting slices; subslices in separate stacks can be used for separating the bayer matrix or similar - but in this case, leave it unchecked).
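The binning-vs-stacking equivalence is easy to check numerically. This toy simulation is mine, added for illustration; it is not part of the plugin:

```python
import numpy as np

rng = np.random.default_rng(1)
signal, sigma = 100.0, 10.0

# nine noisy "subs" of the same 300x300 region, noise sigma = 10
subs = signal + rng.normal(0.0, sigma, size=(9, 300, 300))

# stacking: average of 9 subs -> noise drops by sqrt(9) = 3
stacked = subs.mean(axis=0)

# binning: 3x3 software bin of a single sub -> same sqrt(9) gain
binned = subs[0].reshape(100, 3, 100, 3).mean(axis=(1, 3))

print(stacked.std(), binned.std())   # both come out close to 10 / 3
```

The remaining noise is near 3.33 in both cases, confirming the sqrt(9) = x3 figure from the post.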
  4. Exactly like binning, with the added bonus of not having some of the pixel blur associated with larger pixels (which is very small anyway and only perhaps a concern in planetary imaging). You don't have to do splitting - you can do software binning 3x3. If you want - I can share an ImageJ plugin that I wrote that will split the image for you and make several subs. You would use it after calibration and before you align and stack.
  5. I would suggest that you start splitting your subs and keep the current setup with mosaics. The fact that you are sampling at 0.42"/px simply means that you are wasting too much time without benefit on a highly over sampled image. Above is 1:1 of your work in progress. Below is the same thing at 30% of original size: This is close to the proper resolution for the level of detail you are achieving. What I'm suggesting is to "split" each of your subs into 9 new subs - each containing every third pixel in X and Y. This means that you'll get 9 properly sampled subs from each of your current subs. This in turn means that you can spend 1/9 of the time on each panel - and do a 3x3 mosaic in the same time it takes to do a single panel. This technique can be used to get a "shorter" focal length equivalent F/ratio scope if you know how to do mosaics. You clearly know how to do those, so there is not much point in getting another setup.
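The "every third pixel" split described above is a simple strided slice. A sketch assuming numpy; the function name is mine:

```python
import numpy as np

def split_sub(image, factor=3):
    """Split one over-sampled sub into factor*factor coarser subs.

    Sub (i, j) keeps every `factor`-th pixel starting at offset (i, j),
    so each output samples the sky at 1/factor the original rate.
    """
    h = image.shape[0] - image.shape[0] % factor   # trim so all subs match
    w = image.shape[1] - image.shape[1] % factor
    return [image[i:h:factor, j:w:factor]
            for i in range(factor) for j in range(factor)]

subs = split_sub(np.arange(36).reshape(6, 6))
# 9 subs of 2x2; the first one holds pixels (0,0), (0,3), (3,0), (3,3)
```

Each of the 9 outputs is then treated as an ordinary sub and fed into alignment and stacking.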
  6. Ok, I get it now - it is a boost vs frequency image. It would be much clearer if you did a radial cross section as a graph (similar to an MTF diagram), as the image does not clearly show values (I had no idea that the central part is 1.0 - it looks close to zero - hence the question above). What puzzles me are the high frequency values. In order to restore the image - in the simple case you want to divide by the FT of the PSF - that follows from the convolution theorem, which says that the FT of a convolution of two functions is the product of their respective FTs. Say the image was blurred with a simple Gaussian kernel (although we don't know the exact kernel, and that is the biggest problem). The FT of a Gaussian is a Gaussian, and the FT of the original image was multiplied with that Gaussian. In order to restore the image we need to divide the FT by that Gaussian again, or multiply with the inverse of the Gaussian (in the general case, multiply in the frequency domain with the inverse of the FT of the blur kernel). For a Gaussian kernel, I would expect the image to look like this: Or as a plot of the cross section: (Ignore values on the Y axis - I did not normalize the kernel). Do you know why the high frequencies are not touched?
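Division by the FT of a Gaussian PSF, as described, can be sketched like this. This is my own illustration and adds a small regularization term, since a naive division blows up at frequencies where the OTF is near zero:

```python
import numpy as np

def gaussian_psf(shape, sigma):
    # centered 2D Gaussian kernel, normalized to unit sum
    y = np.arange(shape[0]) - shape[0] // 2
    x = np.arange(shape[1]) - shape[1] // 2
    g = np.exp(-(y[:, None] ** 2 + x[None, :] ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def inverse_filter(blurred, psf, eps=1e-6):
    # divide by the FT of the PSF; eps keeps tiny OTF values from exploding
    otf = np.fft.fft2(np.fft.ifftshift(psf))
    spec = np.fft.fft2(blurred) * np.conj(otf) / (np.abs(otf) ** 2 + eps)
    return np.fft.ifft2(spec).real

# demo: blur a smooth blob with a sigma=1.5 Gaussian, then restore it
img = gaussian_psf((64, 64), 5.0)            # smooth test "image"
otf = np.fft.fft2(np.fft.ifftshift(gaussian_psf((64, 64), 1.5)))
blurred = np.fft.ifft2(np.fft.fft2(img) * otf).real
restored = inverse_filter(blurred, gaussian_psf((64, 64), 1.5))
```

With a noiseless, smooth input the restoration is essentially exact; on real, noisy data eps has to be raised, which is exactly why high frequencies end up "not touched" in practical restoration filters.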
  7. Could you explain the image to the right of each - the FT of the filter? Is that the separation filter that you used to extract a particular layer for amplification, and if so, why did you do only a single layer and not multiple layers?
  8. The 130SLT NEXSTAR is not going to be able to do proper DSO astrophotography anyway since it is on an AltAz mount. It will be able to do lunar and planetary very well with a suitable barlow and a dedicated planetary camera. Planetary imaging is a bit different than regular photography, as it works by shooting thousands of images - usually in the form of video - and then processing that: the best frames are selected and stacked, and then you move on to further processing. You don't have to spend a fortune to get a decent planetary camera - something like this will be very good: https://www.firstlightoptics.com/zwo-cameras/zwo-asi120mc-s-usb-3-colour-camera.html and if you have the budget, one of the best planetary performers is not much more expensive: https://www.firstlightoptics.com/zwo-cameras/zwo-asi224mc-usb-3-colour-camera.html The only issue with these is that you'll have to do a mosaic to get the complete lunar disk in a single image.
  9. That sort of looks like atmospheric scatter rather than above. Was it hazy on that particular night?
  10. There are a couple of reasons for this and a couple of "cures". - The atmosphere distorts shorter wavelengths of light more than longer wavelengths - for the same reason glass bends shorter wavelengths more strongly. Notice that long wavelengths (red/orange) are bent much less than short wavelengths (blue/violet) - telescopes tend to be corrected better in the red part of the spectrum. - Lens designers can choose which way the doublet curve will "lean" - you can make either the red part of the spectrum more defocused or the blue part more defocused. Most opt for the blue part of the spectrum because of Ha and the above fact that shorter wavelengths are bent more. That is why we often see a "violet" halo and not a "red" halo in achromat scopes. Even ED doublets have this - the blue part of the spectrum will have a bit more defocus than the rest, or rather a defocus "range" - you'll put the center of blue in focus but the rest of the filter band will be more defocused compared to green and red, where you also focus on the center of the range. Here is an achromat curve - and while it is much worse than in an ED scope - it shows the same pattern - a difference in focus between "center line" and "edges" for all three filters. - The third problem is spherochromatism - the fact that spherical aberration depends on wavelength as well. This is much more pronounced with small apertures for imaging, where the airy disk size is close to the seeing limit. Look at this image: This is from an APO triplet - green lines are straight - no spherical aberration. Red lines are bent outwards and blue bent inwards. One is over corrected and the other under corrected for spherical aberration. That blurs things. In an ED doublet things are going to be worse than above. Those are the reasons - you probably have a combination of them. Now onto the solutions: 1. Since your Ha is very good in sharpness - use that as the luminance layer. The eye is much more sensitive to bloat in brightness than in color. Compose an HaRGB image to get the color you want - and then only use the luminance for brightness.
This is easily done in Gimp by pasting the stretched Ha as a layer over the color image and then setting that layer to Luminance (last layer option). 2. As suggested above by @The Lazy Astronomer - get an additional filter that will cut off the most offending part of the blue spectrum, the 400-420nm range for example. The Astronomik L3 is a good solution if you have issues with Luminance as well.
  11. I agree with what has been said - I don't mind achromat scope at all, as long as one is aware of its drawbacks (as with pretty much any other design). We have just touched upon F/5 and F/10 achromats - and F/10 will be much better optically but not as good for wide field, while F/5 will be good for wide field and portable but not as good for planets. I personally find Evostar 102 on AZ4 mount not to be as stable as I would like. How about going in between then? It turns out that there is very good scope that is in between - unfortunately it is too expensive in my opinion in comparison to other offerings. It almost has the same price as ED F/7 with FPL-51 glass. https://explorescientificusa.com/products/ar102-air-spaced-doublet-refractor This is 4" achromat at F/6.5. Another interesting choice - if one is willing to sacrifice some of light grasp is: https://www.firstlightoptics.com/beginner-telescopes/sky-watcher-evostar-90-660-az-pronto.html That is F/7.3 scope - but only 90mm of aperture. Another scope worth mentioning is this one: https://www.firstlightoptics.com/bresser-telescopes/bresser-messier-ar-102s-600-refractor-ota.html although I'm seriously annoyed with plastic finder and non standard shoe on that one. I also don't care much for those oversized dew shields that Bresser and sister company ES have on their scopes.
  12. Skywatcher scopes also have 2" and that was really handy on my ST102 - it turns the scope into a 2" F/10 and Jupiter from a blurry mess into a proper planet (but limited to about x100 magnification). TS, Stellarvue, AltairAstro, Technosky and some other brands all source these scopes from the same manufacturer, so they are in principle the same scopes except for details and branding. Just keep in mind that there are a few models of 4" F/7 - one with FPL51 glass (worse color correction) and one with FPL53 glass (better color correction). I think I've seen two different focusers used - a 2" crayford and a 2.5" R&P. That 2.5" rack and pinion unit is actually very good. I have it on one of my scopes (80mm F/6 TS triplet APO). I think that the AZ5 will be ok. I have an AZ4, and my 80mm F/6 and 102mm F/5 both sit perfectly on it. A 102mm F/10 does not - the OTA is too long and vibrations are too large for my liking. Look at this diagram: It explains a lot about chromatic aberration in short focus achromats like the ST102. What is called false color is actually color blur created by out of focus light. If you put the 540nm wavelength (green) of light in focus - all the rest of the wavelengths will be out of focus - some by a small amount, some by a large amount. When you are out of focus - that blurs the details. It is not just about the image having the wrong / false color - it is about it being blurry because you can't focus it properly - as soon as you focus some wavelength properly - others go out of focus. You can focus only two wavelengths of light at the same time. The point of color correction is - how much out of focus are the other colors, and can you see it. ED doublets use a special kind of glass that does not "produce rainbow" as much - and while they have a curve similar to the above - the level of defocus is much, much smaller - almost unnoticeable (or even completely absent in top models). Because it has a more exotic glass type. These glasses contain rare earth elements and are hard to produce. They have a certain level of purity.
One is more expensive than the other. The 80ED-R uses FPL-53 glass and that is more expensive glass than FPL51. There is also a 102 model with such glass: https://www.altairastro.com/starwave-102ed-r-fpl53-refractor-459-p.asp That is £500 more expensive than the FPL-51 4" F/7.
  13. Simple answer is - don't. A full moon in the sky can make 4-5 mags of difference in SQM reading. People have recorded SQM 17-18 with a full moon at their Bortle 1 sites (which are normally close to SQM 22). For each SQM magnitude - your required imaging time is roughly x2.5 longer. Say you live in a fairly light polluted area and the 78% moon only adds 3 magnitudes of sky brightness. That is about x15 in exposure length. Go out on that night and image for say 6 hours. That is equivalent to 23 minutes of imaging when it is fully dark. Besides - you need software that can properly stack high and low SNR images in order to exploit even those less than half an hour of data.
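The exposure arithmetic above is just the magnitude scale (1 mag = factor 10^0.4 ≈ 2.512), spelled out:

```python
factor_per_mag = 10 ** 0.4          # one magnitude ~ x2.512 in brightness

added_mags = 3                      # moonlight adds ~3 mags of sky glow
penalty = factor_per_mag ** added_mags
print(round(penalty, 1))            # ~15.8x longer exposure needed

hours_imaged = 6
equivalent_minutes = hours_imaged * 60 / penalty
print(round(equivalent_minutes))    # ~23 minutes of dark-sky equivalent
```

This reproduces both figures from the post: roughly x15 in exposure length, and 6 moonlit hours collapsing to about 23 dark-sky minutes.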
  14. No, just an image I found online - it nicely shows a lid type aperture mask. In fact, I think there is a blog dedicated to making and using such a mask - maybe it has the make of the scope and details. https://10minuteastronomy.wordpress.com/2017/02/11/why-and-how-to-make-a-sub-aperture-mask-for-a-refractor/ Yep, it seems to be a Bresser Messier AR102S Comet Edition. Not sure if it is available any more.
  15. I think that you can easily fit 1.25" UV/IR cut filter with T2/1.25" adapter. That should not mess up your distance to CC. You might even have one come with camera? You still have T2 thread and 2" nose piece available for connection.
  16. I asked the same question long ago - for a focal reducer on an RC - and got a very nice answer - "Why don't you try it out?" CCs and FRs work in a converging light beam, have small aperture (2" max?) and they bend light only slightly - in fact I think that there is a greater chance of an FR having issues than a CC that only corrects for coma - minimal disturbance of the wavefront. I don't think there will be many issues due to the CC - but again, I can't be certain and it would be best to simply try it out. Quite a bit. After some wavelength - 820nm or so - all three colors have basically the same response to IR. If the target is strong in IR - this means that the RGB ratio will be disturbed. Say you have a 4:2:1 RGB ratio and then you add another 2:2:2 - because all three channels have the same sensitivity to IR. Instead of 4:2:1 you'll have 6:4:3. Earlier red was twice as strong as green (4:2) and now it is only 50% stronger (6:4); green was earlier twice as strong as blue (2:1) but now it is only about 33% stronger (4:3). Some things in the image will have this problem as they emit in the IR part of the spectrum (stars, clusters and galaxies) while some things won't - emission nebulae, possibly reflection nebulae. If you have a combination of these things in the image - color correction is almost impossible - as you need to apply correction to those things affected and not apply it to those not affected. Whatever you do - some things will have their color be off. You can still do a color balance so that the image looks nice - but you can't get accurate color. There will always be something in the image that is either too saturated or not saturated enough. Mind you - using UHC or similar filters also makes color correction impossible - but people use them anyway.
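The ratio arithmetic above, spelled out as a tiny check:

```python
import numpy as np

rgb = np.array([4.0, 2.0, 1.0])     # true R:G:B ratio of an IR-bright object
ir = np.array([2.0, 2.0, 2.0])      # equal IR leak into every channel
measured = rgb + ir                  # becomes 6:4:3

print(rgb[0] / rgb[1], measured[0] / measured[1])   # red/green: 2.0 -> 1.5
print(rgb[1] / rgb[2], measured[1] / measured[2])   # green/blue: 2.0 -> ~1.33
```

Both color ratios are compressed toward 1, which is the washed-out look the post describes - and since the IR term differs per object, no single global correction can undo it.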
  17. I think that you'll have to use coma corrector with 533. Radius of diffraction limited field is F^3 / 90 = 4^3 / 90 = 0.711mm - or diameter is about 1.42mm. Diagonal of 533 is 15mm or so? Although we don't need diffraction limited performance for DSO imaging as seeing will swamp it - I'm thinking that coma will easily start to show after about first third, maybe half way to the edge of the frame if you don't use coma corrector. You can certainly go without UV/IR cut filter and for DSO imaging - that is even better as you will pick up a lot of IR signal - that will boost your SNR. Only issue that you'll have is with color balance - colors will look a bit strange (a bit washed out) and you'll have trouble color balancing properly.
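The field-radius rule of thumb used above in one line (F^3/90, with F the focal ratio - the common approximation for a Newtonian's diffraction-limited field):

```python
def diff_limited_radius_mm(f_ratio):
    # radius of the diffraction-limited (coma-free) field of a Newtonian, mm
    return f_ratio ** 3 / 90.0

r = diff_limited_radius_mm(4)        # an F/4 Newtonian
print(round(r, 3), round(2 * r, 2))  # 0.711 mm radius, ~1.42 mm diameter
```

Against a 533 sensor with a roughly 15mm diagonal, that 1.42mm well-corrected zone is why coma shows well before the frame edge without a corrector.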
  18. You can use a Raspberry Pi and Linux on it. It will run the INDI platform. That is pretty much the same hardware as the ASIAIR if I'm not mistaken, but without all the bells and whistles. A basic RPI4 model with 2GB of RAM costs something like £35 or so?
  19. Indeed - it is about masking the front end of the scope. Outer parts of the lens bend the light much more than the inner part - they have to, so that light reaches the same point in the focal plane. Chromatic aberration is due to the fact that not all wavelengths of light are equally bent - and the more you bend the light, the more this difference shows. By masking the outer parts of the lens - you remove the "worst" part of it in terms of chromatic aberration. The downside is that you are both reducing the light gathering of the telescope - a very bad idea for DSO observing - and reducing planetary resolution. The good thing is - it costs you almost nothing, it is not a permanent modification (if we can call it a modification at all) and you can put it on and remove it faster than you change an eyepiece. With fast achromats - resolution loss due to CA is actually much worse than losing resolution by using a smaller aperture, and you can try out a few different sizes of aperture mask to find your sweet spot - most CA removed with the least planetary detail lost. You can make a mask out of anything - even cardboard. The preferred way is to make a plastic mask. You only need to cut a smaller hole in the central region and find some way to attach the mask. Refractors have a dew shield - and that is perfect for "plug" type aperture masks. For example, here is a PVC 4" pipe plug that I adapted to be an aperture mask on the ST102 that I used to own: But people do make them out of cardboard like this: Or plastic lids like this one: If you have access to a 3D printer - it is very easy to print one to size. I wanted to point out with regard to the original topic - the Evostar 120 will provide you with planetary performance equal to the Evostar 102 - regardless of the fact that it is a faster achromat - both have the same focal length, and all you need to do is make a 102mm aperture mask to go on the 120 to get the same performance. It also shows that the Evostar 150 can be a very potent instrument.
It will provide you with 6" of unobstructed aperture, same FOV as 8" F/6 scope and excellent 4" F/12 planetary performer - for the price of one scope (which really needs monster mount to hold it - but that is another topic )
  20. Ok, I get that. I did not pay attention to the budget limit. I guess some creative thinking is in order then. Here is another idea: the ST120, for example. Most people will say that it will fare poorly on planets, and they are right. However - what they often forget is that you can have the ST120 - with 120mm of aperture for deep sky - and, say, a 3" F/7.9 scope for planets, all in the same scope. That is very close to the Sidgwick standard. In fact - it has the same color correction as, say, a 102mm F/10 (or even a bit better). All you need to do is make a 76mm aperture mask and use it when observing planets and the moon at higher powers.
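The masked focal ratio quoted above is just focal length over mask diameter (the ST120 has a 600mm focal length):

```python
focal_length_mm = 600.0             # ST120 focal length
mask_diameter_mm = 76.0             # aperture mask diameter

masked_f_ratio = focal_length_mm / mask_diameter_mm
print(round(masked_f_ratio, 1))     # ~7.9 -> a 3" F/7.9 planetary scope
```

The same one-liner works for sizing any other mask: pick the target focal ratio and solve for the mask diameter.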
  21. Yes, that is probably a misprint or some other type of error. It is an F/7 scope and has 714mm of focal length. It has a retractable dew shield, so it's even shorter for transport. According to the TS website (same scope branded differently), it is 73cm in length: https://www.teleskop-express.de/shop/product_info.php/info/p4964_TS-Optics-ED-102-mm-f-7-Refractor-Telescope-with-2-5--R-P-focuser.html On the Altair Astro website - the more expensive version (same scope, just different glass elements), the ED-R, is quoted to be less than 70cm long when the dewshield is retracted. The ST102 is both lighter and probably going to be shorter by at least 10cm - but if you want to observe both types of targets - one scope is easier to carry and mount.
  22. Well, there is a scope that is half way between the two. It will provide you with planetary views similar to that Mak 127 and is "short" enough to provide you with wide fields. It is also more expensive - but when you factor in the price of two scopes - it's actually not that much more expensive. Get yourself a 4" F/7 ED doublet. https://www.altairastro.com/starwave-ascent-102ed-f7-refractor-telescope-geared-focuser-468-p.asp That one will have just a tiny bit of chromatic aberration. If you want to get rid of that too - get the slightly more expensive version: https://www.altairastro.com/starwave-102ed-r-fpl53-refractor-459-p.asp
  23. I think you are right - it should not matter which focuser you use. What can matter is the distance to the camera. Do you have proper spacing there? You might be slightly off and not notice it since you are using the ASI533 - it is a smaller sensor. Another way to deal with that is to raise the primary mirror up the tube. In fact - collimation can impact focus position, as it moves the mirror up and down.
  24. I've read reports that aplanatic CC fits stock focuser without issues. Maybe your moonlite focuser is lower profile than stock focuser?
  25. I don't think it would work. It is easy to split light - but not as easy to "join" light. Just try to make a diagram with some sort of mirrors/prisms - put two objective lenses next to each other and try to bring the rays to the same focus - here is an example to see why it won't work: You can't make an element that takes light coming from the left and directs it downwards and at the same time takes light coming from the right and directs it also downwards. One of them will have to go upwards. If you try to bring them to the same focus at an angle - you'll just "warp" the image in the focal plane - and warp each differently, so they will not align even when warped. In fact - when you think about it - what you want to do is called aperture synthesis and is used with radio telescopes. In such systems you need to be able to sense the phase of the wave and, based on that, synthesize the final image. It can be done with sophisticated equipment at longer wavelengths - and you still need plenty of "observations" at different points in order to reconstruct the image - you can't do it in real time and it certainly won't work for visible light.