Everything posted by vlaiv

  1. Re: Introducing myself. Hi and welcome to SGL.
  2. What would be your target sampling rate? I'd recommend that you aim for, say, 1.4"/px. That would mean you need a 6.8µm pixel size, or alternatively a 3.4µm pixel size and bin x2. Binning is a really simple procedure - you can do it on camera (and end up with smaller files and faster download), or you can do it afterwards in software, where it is just a single command in suitable software (see the sketch below). There are a few camera models with around 3.4µm pixels - like the QHY 550 with 3.45µm, or the Altair Hypercam 269C if you want a colour version. Do keep in mind that OSC sensors already sample at half the "nominal" sampling rate suggested by pixel size - you just need to use super pixel mode instead of interpolation debayering.
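     Since software binning comes up a lot - here is a minimal numpy sketch of 2x2 average binning (frame size and names are just for illustration):

         import numpy as np

         def bin2x2(img):
             # average 2x2 binning of a mono frame; both dimensions must be even
             h, w = img.shape
             return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

         frame = np.random.rand(4000, 6000)   # stand-in for a real calibrated sub
         binned = bin2x2(frame)               # 2000 x 3000 px, sampling rate halved

     Summing instead of averaging (.sum instead of .mean) gives the same SNR benefit - the two differ only by a constant scale factor.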
  3. No - you need to know the geometry of the telescope in order to do that - the size of the baffle tubes, diameters of openings, size of optical elements and so on. That is usually done on a computer in CAD-type software.

     It is really easy to measure it, though - in fact, it is a standard part of the imaging process, called flat fielding / flat calibration. We take so-called flat exposures - uniform light is shone on the aperture and an exposure is taken (it can even be sky light with a T-shirt over the telescope aperture - but best results are achieved with a flat box). With such an exposure you can precisely measure the percentage of light hitting the sensor. It is also used to correct the vignetting - see the sketch after this post.

     The problem is not the vignetting - that gets corrected with flats. The problem is that you lower your SNR in the outer parts of the image - light blockage means less signal, and less signal means a lower signal to noise ratio. If you have 50% vignetting - it's like you exposed the edges of the image for only half the total imaging time (while the center receives 100% of the imaging time). Noise in the outer parts will be noticeable.

     Maybe the simplest way to think of focal reducers is that they "compress" the already existing field into a smaller size. Say you have a x0.8 reducer and a 28mm original field. Now you'll be able to image the whole 28mm field with a 28mm * 0.8 = 22.4mm diagonal sensor. If you use a 28mm sensor - then you'll in fact image a 28 / 0.8 = 35mm diameter field. If that 35mm diameter field is vignetted and aberrated - that will end up on your 28mm sensor as well.

     In fact, most reducers - if they are not specially designed, like focal reducers / field flatteners made for a particular type of scope - will introduce additional vignetting and aberrations. They have a smaller clear aperture, which can cause additional vignetting, and aberrations are inherent in some of their designs. For example, a simple x0.5 reducer will turn perfect stars into little astigmatic streaks at the edge of the field (here the spot diagram is presented in angular units rather than millimeters from the optical axis; source: https://www.telescope-optics.net/miscellaneous_optics.htm).

     This is very important to understand and take into account. For example, here is a reducer that is often recommended for one of my scopes - an RC8": https://www.teleskop-express.de/shop/product_info.php/info/p8932_TS-Optics-Optics-2--CCD-Reducer-0-67x-for-RC---flatfield-telescopes-ab-F-8.html

     It is a x0.67 focal reducer and works for flat-field telescopes of F/8 or slower. The RC8" is a flat-field telescope and an F/8 one - so it's a "no-brainer" to pair those two together. But there is a catch! If you look at the specs for the telescope: https://www.teleskop-express.de/shop/product_info.php/info/p5222_TS-Optics-8--Ritchey-Chr-tien-Pro-RC-Telescope-203-1624-mm-OTA.html - they say that the field is usable as is for sensors up to 30mm diagonal. In reality, even APS-C, with its 27-28mm diagonal, will struggle somewhat.

     Now if we take those 28mm and apply x0.67 reduction - we compress that field into only 18.76mm. That is smaller than even the micro 4/3 format (about 22-23mm diagonal). Most people find this reducer works much better at x0.75 (with some reducer designs you can vary the reduction factor by changing their placement / working distance to the sensor). If we take 28mm again and apply x0.75, we get 21mm - you can clearly see why x0.75 works for people who have a 22mm diagonal sensor.

     Using such a reducer on a telescope that has excessive field curvature or is fast is likely to result in very prominent aberrations.
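     To show how simple the measurement is, here is a rough numpy sketch of flat calibration (variable names and the synthetic frames are just illustrative stand-ins for stacked master frames):

         import numpy as np

         rng = np.random.default_rng(0)
         light = rng.random((100, 100))                    # stand-in master light
         dark = np.zeros((100, 100))                       # stand-in master dark
         flat = rng.uniform(0.5, 1.0, (100, 100))          # stand-in master flat
         flat_dark = np.zeros((100, 100))                  # stand-in flat dark

         flat_field = flat - flat_dark
         flat_norm = flat_field / flat_field.max()         # 1.0 = fully illuminated pixel
         vignetting = 100 * (1 - flat_norm)                # percent of light lost per pixel
         calibrated = (light - dark) / flat_norm           # flat-fielded light frame

     Note that dividing by flat_norm restores the signal level in the corners but cannot restore the SNR lost there - it scales the noise up by the same factor.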
  4. Quite right. If you have a large enough sensor to capture the whole illuminated and corrected circle - you don't need a reducer. You can achieve the same sampling rate by binning x2 instead of using a x0.5 reducer. However, if you have a small sensor that won't cover the whole circle - then yes, a reducer will help to capture the whole field.
  5. Out of interest, Jupiter is now 41.88" in diameter. You were using about 3000mm of focal length with a 2.9µm pixel size - which works out to about 0.2"/px. Jupiter should be around 210px across in your image - yet it is more like 385px in diameter. Where does this discrepancy come from? Did you drizzle with a x1.5 factor in AS!3, or is your barlow magnifying much more?
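     The expected size is easy to check - a quick sketch of the arithmetic (206.265 is the usual conversion constant for µm pixels and mm focal length):

         pixel_um, focal_mm = 2.9, 3000.0
         arcsec_per_px = 206.265 * pixel_um / focal_mm   # ~0.2"/px
         jupiter_px = 41.88 / arcsec_per_px              # ~210 px expected
         print(arcsec_per_px, jupiter_px)

     For what it's worth, x1.5 drizzle alone would only get to ~315px, so the rest of the gap would have to come from the barlow.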
  6. Thank you. I have just the setup to test that - a very fast scope that will produce both:
     - a very astigmatic edge of FOV in a simple eyepiece design (Plossl)
     - an exit pupil much larger than the phone lens aperture - hence the possibility of observing only part of the wavefront
     I also have an artificial star - which will give a very good comparison. With text or objects it is sometimes difficult to judge the exact aberration or its extent - with a single PSF it is much easier. I just need to find some time to do the tests - I've been rather pressed for time recently.
  7. I almost forgot - I said I'd take one with the smart phone as well: Compared to the DSLR method this was a breeze, and I would happily recommend it over the DSLR method if it were not for that exit pupil / lens aperture mismatch. The only problem is that at this scale, lens distortions and lens sharpness compound with those of the EP. If we want to avoid that, we need to work at a much larger scale and then scale back down. That way we minimize any lens issues, as those are tied to the working scale.
  8. Ok, I have to admit - this was one of those surreal experiences. As I promised, I set up to take shots with the DSLR and a fast lens thru the eyepiece. People say it can't be done - but I was skeptical. If you believe optical science, there is no reason for it not to work - and work well.

     I put a 30mm F/4 finder-guider on a photo tripod (that was the optics I had on hand, and I did not want to set up a more serious scope), took a 16mm ES68 eyepiece and put it in the eyepiece clamp. I then focused the image on distant houses. All looked good for the test (there was a bit of shimmer in the air from heat that could be seen even at this modest x7.5 magnification, but I reckoned it wouldn't matter much for the test).

     I took my DSLR and attached the Samyang 85mm F/1.4 (actually T1.5 - cine version) - it is a fully manual lens, so I opened the aperture to the max and set focus to infinity. I then aimed the lens at the eyepiece and looked thru the viewfinder - and sure enough, there was a nice image of what could be seen thru the eyepiece - only magnified and just a tad out of focus. I adjusted focus, and when I was happy with the view I took an exposure - and voila:

     I looked at the image and asked myself: what in the world just happened????? So I tried again and again, and then I started to think that I was losing my mind or that some extraordinary phenomenon was happening. There is a mirror involved, and maybe some weird polarization thing was preventing the view that I was clearly seeing thru the viewfinder from being captured in the image. In the end I called my wife to verify that I was not just imagining things. She also clearly saw the image of the eyepiece in the viewfinder - but could not get it in an image.

     I was ready to accept defeat at this point - and say that you were right and that it simply does not work like I thought it does - and then it dawned on me: the lens is much larger than the eyepiece - what happens to all the light that goes around the eyepiece? Well, it must end up in the image. In the above image we see that the scope and eyepiece are out of focus - but they are there as blur - maybe we can even see a hint of trees in that central part. So I took a T-shirt and put it as a light shroud around the scope and lens - just to see if that would make a difference - and, interestingly enough, it did. Look at that - the eyepiece view can clearly be seen in the image. We can easily image the edge of the field as well.

     In the end, I want to explain what happened, why people thought it can't be done - and how it can be made to work. This last image is a good example for understanding what is happening. If we don't put a light shroud around the scope and lens - well, the lens has a much larger surface and is faster than the scope + EP + lens combination. The image that forms around the scope will simply be much brighter, will swamp the signal coming thru the eyepiece and make it invisible. If we put a light shroud around EP + lens, we block all that "outside" light. This is essentially what happens when people use a phone - the phone lens is small enough that it gets completely covered by the eyepiece, and no external light (light that gets focused) can get thru - only ambient unfocused light, and that just impacts contrast a bit but does not form an image - like light from far away that directly hits the camera lens when the lens is larger than the EP.

     In this last image the light shroud was not placed properly, and in one corner light did get thru - we can see the distant tower and mountains (which can also be seen in the first image). But the rest of the image - which did not get external light - nicely shows the eyepiece view.

     So what is the moral of all of this? Yes, you can use a DSLR to image the eyepiece FOV. If you use a long FL lens, you'll have to do a mosaic. If your lens is larger than the eyepiece and you have outside light hitting the sensor - it won't work - so you'd better put a light shroud around the gap between EP and camera lens. And yes, imaging the field stop works just like I said - you just need to tilt the camera, in this case at about 35 degrees, and it will capture the field stop nicely.
  9. My main concern with such products would not be the product itself - I'm sure it works just fine - but software support. At the time I was using the QHY5LIIc, I found the drivers to be rather buggy and temperamental. With ZWO I almost never had such issues (I did have issues maybe once or twice - but a new version of the drivers sorted it out, and I did not have to wait long for it; it was actually out before I even encountered the issue). That of course says nothing about software support for Bresser - or whether one finds it worth the price premium or not.
  10. The lens you are interested in is an F/4 lens, so obviously it can't be used at F/2. I just wanted to point out that the MTF graphs suggest that the lens I used, at F/2, has similar performance to the one you are interested in at F/4 and the 200mm setting. Then I just pointed out that you'll get a much sharper image with a scope. That's all I'm trying to say. If you still want to use the lens - by all means do.

     The two images you attached are both nice, and they nicely show the difference between good optics and optics that are not as good when used on such a target / in such a way. Here, look: I'd say that at this resolution it is a very nice image. At this resolution you can no longer resolve the issues the optics have - and therefore they are good optics for work at that resolution. A similar thing happens with lenses - they are fine for low resolution work, but not for high resolution work. This is the same image - except presented at the resolution it was shot at - and you can clearly see that the optics are not capable of resolving properly at this resolution. Lenses are like that - they can't properly resolve things at the pixel size / sampling rate you are going to use them with.

     Scopes, on the other hand, can and will - your other image shows this: it might be somewhat lacking in exposure time and processing - but these are things that can be improved upon with work - otherwise the image is sharp and the stars are nice. That was all I was trying to say - lenses have their issues when used for astrophotography. If you are aware of their limitations and work with them in the proper way, they can produce nice results. However, they can't compete with scopes in some areas - they simply can't be as sharp.
  11. Not sure if I can help you much with the sharpness of that lens. The best I can do is this: https://www.lensrentals.com/blog/2018/08/mtf-tests-for-the-canon-70-200mm-f4-is-ii/ - in particular the MTF charts. Look at the green line. It is 30 lpmm, it already starts below 0.8 and it drops as you move towards the edge of the frame (for the ASI533 it will stay pretty much at 0.8, as it has a 16mm diagonal if I'm not mistaken). Compare that curve to this one: that is the lens I have, at its max aperture setting of F/1.4 - it sits at 0.7 for 30 lpmm. Here is what M31 looks like with that lens at F/2: in my book that is really not a very nice image of M31 - look at those bloated stars. Maybe the situation would be better if I stopped the lens down to F/2.8 or F/4. There is simply no comparison to what an 80mm scope can do: yes, it won't fit the whole of M31 in a single FOV - but it can easily do so in two panels, and that sharpness and detail simply can't be achieved with the lens.
  12. Well, I guess that test is in order tomorrow. I have all the equipment needed, and I'll happily test this with a DSLR, a fast longer-FL lens (85mm F/1.4 Samyang), a small scope, and two or three eyepieces with different magnifications, eye reliefs and AFOVs.
  13. Not sure what you mean by this - simply focus your DSLR at infinity and it will image the eyepiece nicely - in the same way a phone camera does.
  14. Wonder what? There is no "my solution". There are well-known deconvolution algorithms, which all have their drawbacks. In the general case, when the PSF is not known and in the presence of noise, you simply can't solve the deconvolution problem. There are only approximate solutions, and all of them increase noise - provided that you can guess the PSF sufficiently accurately, which is not the case here. The solution is to use a different type of optic - one that does not have that level of aberration. Alternatively - since this is a hobby - one can resort to using morphological transforms to round the stars, and that is fine, as long as one understands what they are doing (it won't fix the whole image, just the stars) and accepts the process as is. If this problem were readily solvable, there would be no point in purchasing field flatteners or coma correctors, as software would solve it at much lower cost.
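     For illustration, here is what using one of those well-known algorithms (Richardson-Lucy, as implemented in scikit-image) looks like - note that you must supply a PSF guess, and a wrong guess produces artifacts rather than correction (the frame and PSF here are stand-ins):

         import numpy as np
         from skimage import restoration

         def gaussian_psf(size=15, sigma=2.0):
             # a guessed Gaussian PSF - the crux of the problem is that the
             # real PSF is unknown and varies across the field
             ax = np.arange(size) - size // 2
             xx, yy = np.meshgrid(ax, ax)
             psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
             return psf / psf.sum()

         image = np.random.rand(256, 256)   # stand-in for a real frame in [0, 1]
         deconvolved = restoration.richardson_lucy(image, gaussian_psf(), 30)

     More iterations sharpen further but visibly amplify noise - which is exactly the trade-off described above.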
  15. Actual correction of optical aberrations in software is very hard. What the above algorithm does is simply replace what is perceived as a star with a nice circular shape. It is called morphological filtering. That does not correct the underlying image - only the stars. You would not be able to correct daytime blur, or nebulosity in the image. However, these are the outer parts of the image, and when the sky is dark any defects in the background go unnoticed, so you can use this technique to make your stars nice and round. @alacant will no doubt explain it better, along with the software and process used to correct star shapes.
  16. That is mostly lens aberrations. You don't seem to have significant trailing at all. Look at the stars at the center of the frame - that is the sharpest part of the lens - and they are mostly round. If you look at the MTF of that lens: it clearly shows that at 30 lpmm (equivalent to a 33µm pixel size - the gray lines show sagittal and tangential MTF) there is a significant sharpness issue 13mm away from center - and you are using an APS-C sensor with a 28mm diagonal, which is 14mm from center to corner. No wonder the stars in the corners are astigmatic (dominant tangential component). Even at 10 lpmm (red graph) you'll have some elongation in the corners. This also affects distant ground-based light sources (which, unlike stars, are not moving): if you create an image with a larger pixel size, you simply won't notice it. Lenses are not diffraction limited optics like telescopes and have issues even on the optical axis - stars are not perfect pin points (within the diffraction limit). This simply means that you need to use them to produce lower resolution images than you would with a telescope - just bin your data to make the pixel size much larger and you'll have a nice looking image (or alternatively, just resize this image so that it can't be fully zoomed in, if it bothers you). I think that with lenses it is always the better option to shoot with a higher focal length lens and then do mosaics to achieve a wider field. That is problematic when you don't have tracking - so you'll have to settle for this kind of resolution.
  17. There is a very simple trick to orient the camera the same way every time. Just align RA with either the vertical or horizontal axis of the camera. Place the camera on the scope, aim at a bright star and start an exposure - then slew the scope in the RA direction for a couple of seconds. Stop the exposure. The exposure should show a line formed by the star. If the line is horizontal or vertical (pick one - although for a square sensor it does not matter) - you are done. If not, rotate the camera and repeat the exposure until you get a horizontal / vertical line. With just a few corrections you'll get it aligned well enough. The alternative is plate solving - it gives you the camera orientation in degrees, and you can rotate the camera until the orientation in degrees matches the previous session.
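     If you want to put a number on the residual rotation, here is a small sketch - measure the trail endpoints in the exposure and compute the angle (the coordinates below are made up):

         import math

         x0, y0 = 120.0, 840.0    # start of star trail, in pixels
         x1, y1 = 1580.0, 812.0   # end of star trail
         angle = math.degrees(math.atan2(y1 - y0, x1 - x0))
         print(f"trail is {angle:.2f} deg from horizontal - rotate camera by that much")

     The sign convention depends on your image orientation, so as with the manual method, just iterate until the trail lands on the axis you picked.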
  18. Ah, ok, that probably only includes the diffraction limit of the aperture. I'm going to walk you through the full physics of it (briefly).

     There is a critical sampling rate, or pixel size, for any aperture / focal length combination. That is a fairly simple calculation: https://en.wikipedia.org/wiki/Spatial_cutoff_frequency

     Or, rearranged a bit: cutoff_frequency = aperture / (lambda * focal_length). The critical sampling rate is twice the cutoff frequency (Nyquist theorem), and that means pixel_size = lambda * focal_length / (2 * aperture), where all three are in millimeters. The factor of two is there because a pixel needs to be half the smallest resolvable wavelength.

     Say you have a 180mm aperture, 2700mm FL MCT telescope - what size pixels should you use? We can take lambda to be 0.00051mm (that is 0.51µm or 510nm - the center of the visible spectrum - or you can go for blue at 400nm if you wish, for a slightly different result). pixel_size = 0.00051 * 2700 / (2 * 180) = 0.003825mm = 3.825µm.

     This is so-called critical sampling and is used for planetary imaging. It does not have much to do with long exposure imaging, which is dominated by seeing. For that, we need to look at some other formulae.

     First - the resolution of the image is determined by the PSF. The PSF in a long exposure image is simply the image of a single star - since stars are point sources. Stars are mostly (in the absence of serious aberrations) just a Gaussian shape (or Moffat in large telescopes). A Gaussian is characterized by either sigma or FWHM - there is a simple relationship between the two: FWHM = 2.355 * sigma.

     The FWHM of a star in an image from diffraction limited optics is a convolution of three different blurs:
     1. seeing blur
     2. mount tracking error blur
     3. Airy disk blur (same as above, where we discussed only the diffraction limited case - without points 1 and 2)

     Seeing blur is usually expressed as FWHM in arc seconds. Most of the time seeing FWHM is 2" - 1.5" is considered very good, and 1.2" is exceptional. Values below 1" usually don't happen at regular observing sites. You can get an idea of what to expect for your location from a seeing forecast - found here: https://www.meteoblue.com/en/weather/outdoorsports/seeing/albuquerque_united-states-of-america_5454711 (make sure you put in your actual location). Ok - this just proved me wrong - it forecasts 0.4" seeing in Albuquerque over the next two days. That just never happens for the rest of the world. Look, here is the forecast for my town: although the seeing columns are green, the values are around 2". In any case - that is the seeing - it is hard to predict and it is a variable quantity.

     Next is guiding performance - for that you take the total RMS your mount is usually capable of. Stock Chinese mounts like the HEQ5 / EQ6 guide at about 1" RMS. Tuned and modded they will go down to 0.5-0.6" RMS. High end mounts go into the 0.2"-0.3" RMS range.

     In the end we need the Gaussian approximation to the Airy disk - found here: https://en.wikipedia.org/wiki/Airy_disk#Approximation_using_a_Gaussian_profile

     That is sigma = 0.42 * lambda * F_ratio in millimeters, or as an angle: sigma = (180 / PI) * arctan(0.42 * lambda / aperture) * 7200, or, if we use the small angle approximation tan(alpha) = alpha for small angles: airy_sigma = 173.26 * 0.51 / aperture = 88.364 / aperture, where aperture is in millimeters and lambda is taken as 0.51µm.

     In the end we get: total_sigma = sqrt(seeing_sigma^2 + guide_sigma^2 + airy_sigma^2).

     Let's do an example. Say that you image with an 8" scope and a 0.5" RMS mount in 1.5" FWHM seeing - what resolution can you expect? First, let's see what sort of star FWHM you can expect. An 8" scope is 200mm, so airy_sigma = 88.364 / 200 = 0.44182". 0.5" RMS just means guide_sigma = 0.5". 1.5" FWHM seeing means seeing_sigma = 1.5 / 2.355 = ~0.637". We can calculate total sigma as sqrt(0.19520 + 0.25 + 0.40577) = ~0.9225", with a corresponding FWHM of 2.1725".

     Ok, now we have the FWHM of the Gaussian PSF - the last thing we need is the proper sampling rate. Here we need an approximation, as the Gaussian profile is itself only an approximation. The Fourier transform of a Gaussian is a Gaussian - so there is no clear cutoff frequency. We can adopt the point where frequencies are attenuated to 10% or below as our cutoff. That happens at a sampling rate equal to FWHM / 1.6, so the proper sampling rate in this case is 2.1725 / 1.6 = ~1.36"/px (see the snippet below).

     From all of this you can see that 1"/px requires perfect conditions, large aperture and good guiding. That is not something that will happen often. That is why I said that you need at least 8" of aperture, a very good mount and steady skies to even attempt going to 1"/px. In reality, most of the time amateur setups are limited to maybe 1.2-1.4"/px, or even coarser if the seeing is not good.

     Note that this only holds for diffraction limited scopes - any imperfection in the optical system will lower the max resolution further. However, that is pretty hard to assess - maybe the spot RMS for a particular scope could be used instead of the Gaussian approximation to the Airy disk, but that is just off the top of my head and would need verification (and not many systems have proper spot diagrams; spot diagrams are a best case scenario anyway - in reality scopes don't perform that well).
  19. The best approach, for a variety of reasons, is to take all panels every night. Imagine you image over two nights and instead take one half of the galaxy on the first night and the other half on the second night. It also happens that on the first night the seeing is good, the stars are tight and the transparency is good, but on the second night both seeing and transparency are poor. You'll be left with an image that differs both in sharpness and in signal to noise ratio - split in two. If you image both parts on the first night and both parts on the second night, odds are you'll average out both the seeing effects and the transparency issues. The image won't be as sharp as the first night nor as blurred as the second (and the same goes for SNR) - but it will be the same average on both halves.
  20. I actually figured that out by measuring a really sharp lens, but I think you can figure it out from the lens MTF. Here is the Samyang 135mm F/2 lens - generally accepted as a very sharp lens for astrophotography. You'll see that it has about 92-93% MTF at 30 lpmm. That is a 33µm (1mm/30) pixel size, and it is not perfectly sharp even there. The MTF of an ideal aperture looks like this: the X axis is sampling frequency and the Y axis is contrast. This function is pretty much linear. That means attenuation to 90% happens in the first 10% of frequencies (the lowest 10%). Since we want wavelength, and wavelength is 1/frequency - 1/0.1 = 10 - sharpness falls to 90% at 10 times the smallest wavelength the aperture can support. Say we want to compare that Samyang lens at F/2.8 (which people say is sharp) to an ideal scope of the same aperture. 135 / 2.8 = 48.2mm of aperture. At F/2.8 that corresponds to an ideal pixel size of ~0.5µm for a diffraction limited scope (critical sampling rate) - and the x10 longer wavelength is 5µm. Yet from the above graph we see that 90% is already reached at 33µm - far sooner (at far larger pixels) than it should happen for a diffraction limited lens. If you've ever read a lens review, you'll notice they mention F/8 to F/16 as the setting at which the image starts to get softer due to the effects of diffraction. The above calculation supports that - the sharpness of the lens below about F/8 to F/16 is determined by the lens itself and not by diffraction effects.
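     A quick sketch of that comparison, following the numbers above (taking blue light at 400nm, which gives roughly the ~0.5µm figure):

         lam_um = 0.4                            # 400 nm, blue end of the spectrum
         f_ratio = 2.8
         critical_px_um = lam_um * f_ratio / 2   # ~0.56 µm critical pixel size
         ideal_90pct_um = 10 * critical_px_um    # ~5.6 µm: ideal aperture still at 90% MTF
         lens_90pct_um = 33.0                    # Samyang already ~90% at 33 µm (30 lpmm)
         print(critical_px_um, ideal_90pct_um, lens_90pct_um)

     The gap between ~5µm and 33µm is the lens aberrations, not diffraction.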
  21. The problem with regular photo lenses is that they are not diffraction limited. You can't hope to make a very sharp Andromeda image with a fast lens.
  22. I'm not sure that I've ever had a chance to look thru an astigmatic eyepiece when my pupil was smaller than the exit pupil of the eyepiece. For that I would need a really fast telescope and a long focal length eyepiece - or perhaps to really destroy my night vision with a flashlight to conduct a test?
  23. Ok, so what you need to understand is that there is no on/off approach here. Saying that your camera is useless for Ha is simply not true. It could only be useless if it had zero sensitivity at the Ha line, and that is not the case. What is true is that your camera might need x5 the total exposure of an Ha-modded camera to achieve the same signal to noise ratio. Similar reasoning goes for the use of a filter, if it is properly used. It will shorten the time needed to reach a certain signal to noise ratio. The only difference is which part of the signal to noise ratio (SNR) it operates on. A modified camera improves signal detection - it increases the signal part. A filter, on the other hand, removes light pollution and the associated noise - it lowers the noise part of the SNR. You can achieve the same result with an unmodded camera as with a modded camera if you put more time into a particular image. Similarly, you can achieve the same result unfiltered as you would with a filter, given enough imaging time (provided you use it on a purely emission-line target whose emission lines the filter passes - sometimes you won't get the same result if there is something the filter otherwise blocks). It is just a matter of spending time on the target - and we want our results fast, so whatever you can do to improve your time is a good thing. That would mean using a filter, and also, if you can, getting a modded or dedicated astronomy camera (at some point). The sketch below makes the two levers concrete.
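     A back-of-the-envelope sketch of those two SNR levers, with completely made-up electron rates, just to separate the signal part from the noise part:

         import math

         def snr(target_rate, sky_rate, t):
             # shot-noise-limited SNR after t seconds of total integration
             return target_rate * t / math.sqrt((target_rate + sky_rate) * t)

         t = 3600.0                       # one hour
         print(snr(1.0, 10.0, t))         # Ha-modded camera, no filter
         print(snr(0.2, 10.0, t))         # unmodded: 1/5 the Ha signal -> lower SNR
         print(snr(1.0, 0.5, t))          # modded + narrowband filter: sky term collapses

     Any of the lower-SNR combinations can reach the same SNR as the best one - it just takes more total time (it scales with the square of the SNR ratio).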