Everything posted by vlaiv

  1. What sort of equations? 1. Tweak focus. 2. Look at the FWHM / HFR value produced by the imaging software. 3. Repeat the above procedure until you get the smallest reading in step 2. This is how a motor focuser works as well - it takes a series of snaps, measures FWHM / HFR in each and produces a "curve" - but the actual curve or its production is really not important to you - it just lets the software find the best focus spot, the one with the smallest FWHM / HFR values. Best focus must be where the arrow is pointing, as moving either to the left or right increases the FWHM / HFR value. A minimal sketch of this search loop is shown below.
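     A minimal sketch of that loop in Python - the two helper functions are hypothetical placeholders for whatever your focuser driver and capture software actually expose:

```python
# Sketch of the "tweak focus, measure FWHM, repeat" loop described above.
# move_focuser() and measure_fwhm() are placeholders, not a real driver API.

def move_focuser(position):
    pass  # placeholder: command the motor focuser to 'position'

def measure_fwhm():
    return 3.0  # placeholder: take a snap and return the measured FWHM / HFR

def find_best_focus(start, step=10, max_moves=50):
    """Step in one direction while FWHM improves; back up once it worsens."""
    position, best = start, measure_fwhm()
    for _ in range(max_moves):
        move_focuser(position + step)
        fwhm = measure_fwhm()
        if fwhm >= best:              # got worse (or no better) - previous spot wins
            move_focuser(position)
            return position, best
        position, best = position + step, fwhm
    return position, best
```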
  2. I would suggest going for afocal instead of eyepiece projection. Eyepiece projection makes it easy to reach focus with a DSLR type camera as it utilizes a lot of back focus - and in the process it magnifies the image. It can be made to act as a reducer - but that tends to put the focus position extremely far out, and eyepieces are not optimized for that, so the image will be very blurry even in the center. The sensor also needs to be mounted very close to the eyepiece for that to work. The afocal method is better for this sort of thing - but it does require special adapters and a camera lens.
     There are a couple of things that need to be taken into account when contemplating an afocal setup. By the way - here is a schematic of an afocal setup: This means that a CS/C mount lens will be needed, plus an adapter for the SV205 since it has a 1.25" filter thread - something like this: https://www.firstlightoptics.com/adapters/astro-essentials-low-profile-125-nosepiece-to-fit-c-mount.html
     The next thing to worry about is the CS/C flange distance and how far the sensor sits in the SV205. ASI cameras are built to accept CS or C mount lenses as is - 12.5mm or 17.5mm flange distance - but for this SVBony I'm not sure, as there are no specs for that. Anyway, you need such an adapter, a CS/C mount lens of suitable focal length and a way to secure the camera with such a lens against the eyepiece.
     Say you take a 32mm eyepiece - that will give roughly x47 magnification. The Moon will then appear around 23.5° across - say we take 25° to have some room left. As far as I can tell, the SVBony camera has a 1/3.2" sensor - which gives a x7.6 crop factor. Using this calculator: https://www.pointsinfocus.com/tools/depth-of-field-and-equivalent-lens-calculator/#{"c":[{"av":"8","fl":6,"d":3048,"cm":"0"}],"m":0} we can see that with a 6mm lens we can comfortably put 25° (in fact 29.5° in the vertical direction) on the sensor. There you go: a 32mm plossl and a 6mm C/CS mount lens will give you suitable reduction to put the whole lunar disk on the sensor at once.
     The only thing left to do is to match exit / entrance pupil (this is not strictly needed - but not matching them will result in lost light and we don't want to lose light if we can help it). 127mm / x47 = 2.7mm. The exit pupil will be around 2.7mm and since we use a 6mm lens, we need an F/2.2 (6mm / 2.7mm) or faster lens in order not to lose light. Perhaps something like this: https://www.aliexpress.com/i/32988089350.html Do take care to position the lens at the exit pupil of the eyepiece for best performance. A small worked-numbers sketch of this sizing is shown below.
     In the end, I want to stress that the SV205 is really not the best type of camera for high quality lunar shots - because it does not support RAW mode and uses compression to stream video. That introduces artifacts and prevents stacking from working properly.
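     The sizing above as a few lines of Python. The 1500mm focal length is an added assumption (it is what makes a 32mm eyepiece give roughly x47 on a 127mm scope); everything else comes from the post:

```python
# Afocal sizing: magnification, apparent Moon size, exit pupil, required lens speed.

scope_aperture_mm = 127.0
scope_focal_length_mm = 1500.0      # assumed; consistent with the quoted ~x47
eyepiece_fl_mm = 32.0
camera_lens_fl_mm = 6.0
moon_diameter_deg = 0.5             # Moon's apparent diameter on the sky

magnification = scope_focal_length_mm / eyepiece_fl_mm        # ~47x
apparent_moon_deg = moon_diameter_deg * magnification         # ~23.5 deg through the eyepiece
exit_pupil_mm = scope_aperture_mm / magnification             # ~2.7 mm
required_f_ratio = camera_lens_fl_mm / exit_pupil_mm          # ~F/2.2 or faster

print(f"magnification    : {magnification:.0f}x")
print(f"Moon through EP  : {apparent_moon_deg:.1f} deg")
print(f"exit pupil       : {exit_pupil_mm:.1f} mm")
print(f"lens needs to be : F/{required_f_ratio:.1f} or faster")
```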
  3. Just be careful - a multi band filter, when used with an OSC camera, simply can't distinguish SII from Ha. Tri-band is usually Ha, Hb and OIII. Even if you have SII and Ha (like in a quad band), those two wavelengths will be joined together in the red channel without any possibility of splitting them. Btw - such filters are a good way of producing bi-color images, as you shoot multiple channels at once instead of one by one and that saves you time (you can image for a couple of hours with a multi band filter instead of one hour per channel), but the problem is that it is inherently only "bi color" regardless of whether it is duo, tri or quad band. That is because of the already mentioned Ha / SII issue and also because the Ha and Hb signals are linked together - you can't have the Hydrogen beta line without having the Hydrogen alpha line (the opposite can happen if the Hydrogen gas is not excited enough to cause the Hb transition - only Ha). This means that you'll get the same signal distribution for Ha and Hb (as they are produced by the same gas).
  4. Writing code that will debayer such a matrix is trivial - but there is not much point in doing so, as the sampling rate achieved will be the same as the binned / regular version.
  5. In principle you can, but there could be a few issues. The first is that the camera is sensitive over a broader range than the eye. Maybe the filter was designed to filter out visible light in all but OIII wavelengths - but does not prevent IR light from passing. The eye won't be able to see that - but the camera will. To fix this, you can combine such an OIII filter with a UV/IR cut filter if you find that it passes out-of-band light. The second issue that can happen is nasty reflections. Again - visually these don't cause any problems, but with long exposure and other glass elements in the optical path (like the sensor cover window or some sort of corrector / flattener) you can get nasty reflections around bright stars. Even some photo filters suffer from this. The best thing to do would be to try it and see if it works well.
     Yes you can. It will work - but it won't be as sensitive as a mono camera and will require special processing. Ha and SII wavelengths are in the deep red and only the red channel will pick these up with sufficient sensitivity - which means only 1/4 of the pixels will pick those wavelengths up. If you image with such a filter, the best way to extract Ha or SII data would be to use super pixel mode and then extract the red channel only as mono, for both the Ha filter and the SII filter (separate sessions of course - producing two mono images). For OIII you need to consult the QE graph, as both blue and green capture that wavelength. If the QE graph published by ZWO is to be believed, G is much more sensitive than blue at 500nm - so I'd go with green because of that and because there are twice as many green pixels (1/2 of all pixels, double that of blue or red). Again - the method would be the same: use super pixel mode debayer, then stack and extract the G channel as OIII mono. A minimal sketch of that extraction is shown below.
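     A minimal sketch of that super pixel extraction with numpy, assuming an RGGB Bayer pattern (check your camera's actual pattern - that part is an assumption):

```python
# "Super pixel" channel extraction from a raw Bayer frame (RGGB assumed).
import numpy as np

def superpixel_channels(raw):
    """Split a raw Bayer frame (2D array) into half-resolution R, G, B planes.
    Each 2x2 Bayer cell becomes one super pixel; the two greens are averaged."""
    r  = raw[0::2, 0::2].astype(np.float32)
    g1 = raw[0::2, 1::2].astype(np.float32)
    g2 = raw[1::2, 0::2].astype(np.float32)
    b  = raw[1::2, 1::2].astype(np.float32)
    return r, (g1 + g2) / 2.0, b

# Ha / SII sessions -> keep only the red plane as mono; OIII session -> the green plane.
raw = np.random.randint(0, 65535, (2080, 3096), dtype=np.uint16)  # stand-in for a real sub
red_mono, green_mono, blue_mono = superpixel_channels(raw)
```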
  6. It will certainly be "undersampled" compared to ALMA which has 0.01" resolution - or about x20 higher than VLT.
  7. I have two Bahtinov masks - one large and one small (for 8" and 80mm scopes) - and they gather dust. I never really warmed up to using them and for me FWHM / HFR was always a reliable measure of focus. In fact, I focused most of my sessions by simply looking at the stars - make them pin point and you have focus, or make them small enough that any tweak of focus in or out makes them larger - you have focus. I guess it is easier to do with astro cameras and a laptop than it is with a DSLR camera and looking at the camera screen, even with zoom.
  8. If you want a purely visual experience and you are interested in DSO more than planets, then consider this scope instead: https://www.firstlightoptics.com/dobsonians/skywatcher-skyliner-150p-dobsonian.html That is the same scope - but mounted differently. It uses a very simple dobsonian type mount and provides very comfortable viewing when seated (get a height adjustable chair for best comfort).
     On the other hand, if you want to try some astrophotography later on, this scope will serve you better: https://www.firstlightoptics.com/reflectors/skywatcher-explorer-150p-eq3-2.html First, you'll get an EQ mount that is needed for astrophotography. This is a very basic mount - but it can be used with a simple single axis drive motor to track the sky and will be suitable for camera + lens astrophotography. If you are into DIY, you'll be able to convert it to full goto (there is also an upgrade kit that is probably a bit overpriced if you consider the quality of the mount - DIY is a lot cheaper). The 150P F/5 is a very good imaging telescope. If you go that way, you'll need to get a better mount and you'll be able to use it along with a suitable coma corrector to get your feet wet in AP with telescopes.
     For purely visual use I don't recommend a newtonian on an EQ type mount, as an EQ mount tracks the sky and that makes the telescope, or rather the eyepiece and finder, end up in awkward positions. You'll need to constantly rotate the tube in the rings to adjust the eyepiece position. This is much easier to do with a smaller and lighter scope like the F/5 one - the F/8 is very long and bulky in comparison. The F/8 will be much better on planets though.
  9. In the meantime I've found the technical info on that image: it was taken at the full 3096 x 2080 (ASI 178mc camera) per panel - without ROI (Region Of Interest - a smaller section of the chip which produces a smaller image) - and it was composed out of 9 panels.
  10. You'll soon find out that we measure things differently when doing astronomy and astro imaging. For example, resolution when doing planetary imaging is not about pixel count, but rather about what the telescope can possibly resolve, as there is a limit to what a telescope, depending on its aperture size, can resolve. The above image was captured at ~0.36"/px originally and I reduced it to about 0.72"/px in processing (or half the resolution), as the sky at the time of the capture did not support full resolution. You can see the whole image here: http://serve.trimacka.net/astro/Forum/2020-07-30/moon.png That means that each pixel represents less than one arc second of the sky. The Moon is about 30 arc minutes in diameter - that means that we captured at 30 * 60 / 0.72 = 1800 / 0.72 = ~2500px across the diameter (see the small worked example below). Each panel was done at a rather small resolution (pixel count) I think. The camera that I was using has ~3000x2000 pixels, but with planetary imaging it is very important to get a high frame count and for that reason a smaller resolution is often used. I probably used 1280x1024 or something like that for each panel.
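     The same arithmetic written out, using only the numbers quoted in the post:

```python
# Pixels across the lunar disk at the stated sampling rate.

moon_diameter_arcmin = 30.0
sampling_arcsec_per_px = 0.72

moon_diameter_arcsec = moon_diameter_arcmin * 60.0              # 1800 arcsec
pixels_across_disk = moon_diameter_arcsec / sampling_arcsec_per_px
print(f"Moon spans ~{pixels_across_disk:.0f} px at {sampling_arcsec_per_px}\"/px")  # ~2500 px
```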
  11. Hi and welcome to SGL. The best and most cost effective way to get a full lunar disk image is to do a mosaic. That is done in the same way as you already started - except you take multiple shots of the Moon (usually 6 to 12, but people have done over 50 panel mosaics). It is a bit more involved as far as processing goes - you need to stitch the mosaic together - but the rest is the same. Smaller scopes and focal reducers do work - but you lose resolution and sharpness. Doing mosaics is really the best way to handle full disk shots. For example, this full disk image was taken with a Mak102 and multiple panels: The actual image is 3084 x 2740 and is quite zoomed in - here is a small segment of it:
  12. It says that the telescope has an angular resolution of 0.2 arc seconds for imaging and a 120m focal length (the VIMOS instrument is capable of imaging in the 300-1000nm range - not sure if that was used). That would make the pixel size ~116µm - see the small calculation below.
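     The pixel size follows from the small-angle relation (pixel size = focal length x angle in radians); here it is with the post's figures:

```python
# Pixel size implied by 0.2"/px sampling at a 120 m focal length.

focal_length_mm = 120_000.0          # 120 m
sampling_arcsec = 0.2
arcsec_per_radian = 206265.0

pixel_size_mm = focal_length_mm * sampling_arcsec / arcsec_per_radian
print(f"pixel size ~ {pixel_size_mm * 1000:.0f} um")   # ~116 um
```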
  13. That really depends on how you use it. Say you go with the Quark combo and not the regular Quark that has an integrated telecentric lens. Now you have a bunch of options depending on the scope used. Say you use a 4" F/7 scope. That is 700mm of FL - you can get the full disk with that sort of focal length and a sensibly sized sensor. The question is - how do you get ~F/20 for optimal etalon operation? Well, there is a telecentric lens as an option - but also an aperture mask. How about using a 35mm aperture mask? That gives you an F/20 system and is equivalent to a PST in resolution, so you'll get decent full disk images. Want to get more close up - throw in the x2 telecentric from ES and also use a 70mm aperture mask. Now you have a 70mm F/20 solar scope. Want to utilize the full 4" of aperture? Throw in a x3 telecentric and you are at F/21 with full aperture (the combinations are tabulated in the sketch below). This is why the Quark combo is a good option - you can pair it with an existing scope and use combinations of telecentric lenses and aperture masks to get different systems, from full disk to high resolution close up. I won't comment on quality issues - just that I'm aware there are issues and they don't seem to be a thing of the past.
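     The aperture mask / telecentric combinations as a short calculation (the values are the ones quoted above):

```python
# F-ratio for each combination of aperture mask and telecentric on a 700 mm FL scope.

focal_length_mm = 700.0
combos = [
    ("35 mm mask, no telecentric", 35.0, 1.0),
    ("70 mm mask, x2 telecentric", 70.0, 2.0),
    ("102 mm full aperture, x3",  102.0, 3.0),
]

for name, aperture_mm, telecentric in combos:
    f_ratio = focal_length_mm * telecentric / aperture_mm
    print(f"{name:30s} -> F/{f_ratio:.1f}")
# 35 mm -> F/20.0, 70 mm -> F/20.0, 102 mm -> F/20.6 (~F/21)
```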
  14. Three things go into the FWHM calculation - seeing FWHM, the Gaussian approximation to the Airy disk FWHM, and guiding FWHM. Some of these are given as sigma and some as FWHM - there is a conversion factor of about 2.355, or more precisely 2*sqrt(2*ln(2)): sigma * 2.355 = FWHM for a Gaussian distribution. These three PSFs convolve to produce the final PSF (this approximation assumes perfect optics - which is not the case in reality, but the difference is rather small) and that means the resulting sigma = sqrt(sigma_airy^2 + sigma_seeing^2 + sigma_guiding^2). Sigma guiding is just the total guiding RMS. Sigma seeing is seeing FWHM / 2.355. Sigma airy is 0.42 * lambda * F#. I use a lambda of 550nm because it is the middle of the 400-700nm range. Btw - the above screen shot / formula is from: https://en.wikipedia.org/wiki/Airy_disk#Approximation_using_a_Gaussian_profile
     Once you put everything together, you get the expected FWHM under ideal conditions and then you can get the sampling rate from that as FWHM / 1.6 (a short sketch of this calculation follows below). That part has to do with the Fourier transform of the Gaussian profile and the sampling rate at which frequency attenuation falls below 10% (since we are approximating with a Gaussian profile there is no strict cut-off point - a Gaussian never drops to zero all the way up to infinity - so we just take a sensible point after which the attenuation of frequencies is simply too great).
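     A short sketch of that calculation. The conversion of the Airy sigma from linear size at the focal plane into arc seconds, and the example scope / seeing / guiding values, are added assumptions for illustration:

```python
# Expected star FWHM and sampling rate from seeing, guiding and aperture,
# following the sigma-quadrature formula above. Example values (80 mm F/6 scope,
# 2" seeing, 0.8" RMS guiding) are illustrative, not from the post.
import math

FWHM_PER_SIGMA = 2.0 * math.sqrt(2.0 * math.log(2.0))   # ~2.355

def expected_fwhm_arcsec(aperture_mm, f_ratio, seeing_fwhm_arcsec, guide_rms_arcsec,
                         wavelength_nm=550.0):
    focal_length_mm = aperture_mm * f_ratio
    # sigma_airy = 0.42 * lambda * F#  (linear size at focal plane), converted to arcsec
    sigma_airy_mm = 0.42 * (wavelength_nm * 1e-6) * f_ratio
    sigma_airy = sigma_airy_mm / focal_length_mm * 206265.0
    sigma_seeing = seeing_fwhm_arcsec / FWHM_PER_SIGMA
    sigma_guiding = guide_rms_arcsec
    sigma_total = math.sqrt(sigma_airy**2 + sigma_seeing**2 + sigma_guiding**2)
    return sigma_total * FWHM_PER_SIGMA

fwhm = expected_fwhm_arcsec(aperture_mm=80, f_ratio=6,
                            seeing_fwhm_arcsec=2.0, guide_rms_arcsec=0.8)
print(f"expected FWHM ~ {fwhm:.2f}\"  ->  sampling rate ~ {fwhm / 1.6:.2f}\"/px")
```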
  15. Have you tried sampling at proper sampling rate then up sampling image in software in order to process it and then down sampling it to original resolution? That way you can have your cake and eat it too
  16. Ah yes, RMS of 1.5" will cause significant drop in resolution. In 2" seeing with 70mm scope in best case scenario that will be 2.7-2.75"/px
  17. Why do you think that is? What about over sampled images do you find easier to process?
  18. Both, as they both have the same resolution (unless you drizzle for some strange reason) in both sampling rate and captured detail. Yes, that is the correct formula and if I understand correctly you were using a camera with 3.72µm pixel size and 420mm of focal length, right? That gives a sampling rate of 1.83"/px - that is correct (see the small calculation below). This, however, does not mean that you'll be properly sampling the image - or in other words, although on paper 1.83"/px sounds ok in principle, it is not always ok. In order to properly sample at 1.83"/px, the stars in your image need to have a FWHM of about 2.93". If you are using a scope with 420mm of focal length - I'm guessing it is an F/6 scope with about 70-72mm of aperture? Such a scope by itself has an Airy disk size of 3.67". Add average seeing and mount performance into the account and the realistic star FWHM that you can get with a 70mm scope is 3.2-3.6" - which means a sampling rate over 2"/px. To fully exploit 1.83"/px you need something like 1.5" seeing and 0.8" RMS guiding, or if your mount guides at 1" RMS, then you need the seeing to be 1" FWHM in order to reach that resolution. What mount are you using and what is your guide RMS?
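     The same sampling-rate arithmetic, using only the figures from the post:

```python
# Sampling rate from pixel size and focal length, and the star FWHM needed
# to justify that rate (FWHM = sampling rate x 1.6).

pixel_size_um = 3.72
focal_length_mm = 420.0

sampling = pixel_size_um / focal_length_mm * 206.265    # ~1.83 "/px
required_star_fwhm = sampling * 1.6                     # ~2.93 "
print(f"sampling rate      : {sampling:.2f}\"/px")
print(f"needs star FWHM of : {required_star_fwhm:.2f}\" or larger to be properly sampled")
```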
  19. When comparing a large and a small instrument, the following applies: 1. If you set your target sampling rate - say 1.1"/px - then something like a 5" refractor will be x4 as slow as a 10" RC. Once resolution is set, aperture tells you how fast the system is. More aperture - faster the system, simple as that (in fact the speed increase is the ratio of aperture surfaces - and even the large CO in a 10" reduces the surface by a very small amount; see the sketch below). 2. In the same seeing and with the same mount performance, the larger instrument will out resolve the smaller one. Just by how much depends on many factors (or rather the relation of these factors - seeing, mount performance and aperture size). Quality of optics, as long as both are diffraction limited (Strehl >= 0.8), is the least important component for instruments above 2-3" and makes virtually no difference in long exposure imaging. You don't need particularly good seeing to get a measurable difference - a 10" will out resolve a 5" in 2" FWHM seeing and 0.5" RMS guiding: 1.48"/px vs 1.55"/px, but these measurable differences will be hard to notice by eye in a processed image. Even a x2 difference in resolution does not cause a severe drop in image quality (at least not one we would characterize as a x2 drop in resolution).
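     The x4 speed figure is just the ratio of aperture areas; a two-line check (ignoring the central obstruction, which as noted only trims the result slightly):

```python
# Speed ratio at a fixed sampling rate is the ratio of light-collecting areas.
import math

def aperture_area(aperture_mm):
    return math.pi * (aperture_mm / 2.0) ** 2

ratio = aperture_area(254.0) / aperture_area(127.0)   # 10" vs 5"
print(f"10\" gathers ~x{ratio:.0f} the light of a 5\" at the same sampling rate")  # x4
```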
  20. The actual resolution of the image will depend on seeing conditions combined with scope aperture and mount performance. This means that it will not be the same each night. In fact, it won't be the same between subs. Stacking averages out the sub to sub differences, so you can look at the final stack to estimate the actual resolution of the image. Take the average FWHM of the stars in your image in arc seconds and divide that by 1.6 to get a good sampling rate for such an image. You can also see whether an image is oversampled just by taking a look at it at 100% zoom. Here is part of your image: The smallest stars are circles rather than dots and bright stars are very large. In contrast, this is what a properly sampled image usually looks like in terms of star sizes: the smallest stars are pin points and larger stars are roughly the size of your smallest stars. It can also be somewhat seen in the nebulosity - as softness / lack of sharpness, although most of the nebula in this image is soft itself:
  21. I like the idea of pushing the data only to the point it will let you. As soon as you push it beyond that point, it will start to show. Noise issues will become apparent, and so will some artifacts in the image. The second image in my view is much better at this - it is not pushed nearly as far as the first image. In fact it only shows traces of noise, and that is because it is a bit over sampled. If it were binned to the proper sampling rate and stretched like that, it would not show any noise (and in fact it does not show it when viewed at, say, 25% zoom).
  22. I think it would do a good job as a 4" all-round visual instrument. Maybe not the best at high power / planetary, but otherwise better than your average 4" achromat. If you want very good high power views as well, there is a version with FPL-53 glass, but it costs about 80% more: https://www.teleskop-express.de/shop/product_info.php/info/p9868_TS-Optics-Doublet-SD-Apo-102-mm-f-7---FPL53---Lanthan-Objective.html (again, availability is an issue - all gear seems to be affected to some extent). In my view, even the version with FPL-53 is not really suited for OSC. You can use it with special filters like the Astronomik L2-L3. This one with FPL-51 can be used with mono. Narrowband will work great, but even LRGB will need the L3 for luminance, as regular full spectrum luminance will be a bit blurry because of the level of CA. In the absence of other options, I'd happily use it as an imaging scope with a few tricks (I've used a Skywatcher ST102 with some tricks to produce decent images - so this will be a way better experience).
  23. It is probably the same as this (and a number of other scopes branded differently): https://www.teleskop-express.de/shop/product_info.php/info/p4964_TS-Optics-ED-102-mm-f-7-Refractor-Telescope-with-2-5--R-P-focuser.html It is an FPL51 doublet, has some residual color (a very small amount by all accounts) and is a very good visual instrument. I know I would like one instead of a regular 4" achromat - either the short one at F/5 or the long one at F/10 - it will give a better image than both.
  24. It is reasonable to take color subs at bin x2 compared to luminance - maybe even x3. I'm not really sure it is reasonable to take images at 2.3µm pixel size though. Maybe with a 60mm F/4 scope - but anything more than that and that pixel size in itself is not reasonable. This is because it gives too high a sampling rate. With a 60mm F/4 scope it will give 2"/px and that is ok for such a small scope - maybe even that is oversampling. Say you want to do it with an Esprit 100mm that is F/5.5 - that will already give you 0.87"/px and that is not achievable in regular seeing conditions with even 8" or 10" scopes, let alone a 4" one (mind you - although here aperture, unlike for planetary imaging, participates in resolution only to some extent, it is enough to make a difference in average seeing between a 4" scope and an 8" scope). The Airy disk diameter of a 4" scope is 2.57" by itself, without any influence of seeing and mount performance.
     But back to the question at hand - yes, eyes are much less sensitive to change in color than in luminance and a small blur in color will almost go unnoticed. For that reason you can sample color at lower resolution than luminance - and that is why I suggested binning color data to a coarser level than luminance. Here is an example: I found this image online and it is quite colorful and has some fine detail. This is the same image except I left luminance as is and scaled chrominance to 50% and then back to 100% (thus losing a bit of resolution in it): I honestly can't see any difference. In fact, I'm going to scale chrominance to only 1/4 of the size and see if there is a difference. Again - I really can't tell the difference. It might not look like it, but I really did lose resolution in the chrominance part - look at the difference between the old and new a channel (part of Lab color space, where ab is color and L is luminance) and the same a channel in the original image: we can clearly see the difference and the same goes for the b channel - however, that difference is not seen in the full image because most of the information comes from luminance and not color. A small sketch of this chrominance-degrading experiment is shown below.
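     A minimal sketch of that experiment. The use of scikit-image and the input file name are assumptions for illustration, not the author's original workflow:

```python
# Degrade chrominance while keeping luminance: convert to Lab, downscale and
# re-upscale only the a/b channels, then convert back to RGB.
import numpy as np
from skimage import io, color, transform

rgb = io.imread("colorful_test_image.png")[..., :3] / 255.0   # hypothetical input file
lab = color.rgb2lab(rgb)

L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
h, w = L.shape

def degrade(channel, factor):
    """Scale a channel down by 'factor' and back up, discarding fine detail."""
    small = transform.resize(channel, (h // factor, w // factor), anti_aliasing=True)
    return transform.resize(small, (h, w))

# Keep luminance untouched; degrade chrominance to 1/2 (use factor=4 for the 1/4 case).
lab_degraded = np.dstack([L, degrade(a, 2), degrade(b, 2)])
rgb_degraded = np.clip(color.lab2rgb(lab_degraded), 0, 1)

io.imsave("chroma_half_res.png", (rgb_degraded * 255).astype(np.uint8))
```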